Sep 8 23:53:42.890661 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 8 23:53:42.890687 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Sep 8 22:15:05 -00 2025
Sep 8 23:53:42.890699 kernel: KASLR enabled
Sep 8 23:53:42.890705 kernel: efi: EFI v2.7 by EDK II
Sep 8 23:53:42.890711 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Sep 8 23:53:42.890717 kernel: random: crng init done
Sep 8 23:53:42.890732 kernel: secureboot: Secure boot disabled
Sep 8 23:53:42.890738 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:53:42.890745 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Sep 8 23:53:42.890759 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 8 23:53:42.890766 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890773 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890779 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890785 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890793 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890801 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890807 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890813 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890831 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:53:42.890837 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 8 23:53:42.890843 kernel: NUMA: Failed to initialise from firmware
Sep 8 23:53:42.890850 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:53:42.890856 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Sep 8 23:53:42.890863 kernel: Zone ranges:
Sep 8 23:53:42.890869 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:53:42.890877 kernel: DMA32 empty
Sep 8 23:53:42.890883 kernel: Normal empty
Sep 8 23:53:42.890889 kernel: Movable zone start for each node
Sep 8 23:53:42.890895 kernel: Early memory node ranges
Sep 8 23:53:42.890901 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Sep 8 23:53:42.890907 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Sep 8 23:53:42.890914 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Sep 8 23:53:42.890920 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 8 23:53:42.890926 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 8 23:53:42.890932 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 8 23:53:42.890938 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 8 23:53:42.890945 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 8 23:53:42.890952 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 8 23:53:42.890959 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:53:42.890965 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 8 23:53:42.890974 kernel: psci: probing for conduit method from ACPI.
Sep 8 23:53:42.890981 kernel: psci: PSCIv1.1 detected in firmware.
Sep 8 23:53:42.890988 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 8 23:53:42.890996 kernel: psci: Trusted OS migration not required
Sep 8 23:53:42.891002 kernel: psci: SMC Calling Convention v1.1
Sep 8 23:53:42.891009 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 8 23:53:42.891016 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 8 23:53:42.891022 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 8 23:53:42.891029 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 8 23:53:42.891036 kernel: Detected PIPT I-cache on CPU0
Sep 8 23:53:42.891042 kernel: CPU features: detected: GIC system register CPU interface
Sep 8 23:53:42.891049 kernel: CPU features: detected: Hardware dirty bit management
Sep 8 23:53:42.891056 kernel: CPU features: detected: Spectre-v4
Sep 8 23:53:42.891064 kernel: CPU features: detected: Spectre-BHB
Sep 8 23:53:42.891071 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 8 23:53:42.891077 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 8 23:53:42.891084 kernel: CPU features: detected: ARM erratum 1418040
Sep 8 23:53:42.891090 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 8 23:53:42.891097 kernel: alternatives: applying boot alternatives
Sep 8 23:53:42.891104 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007
Sep 8 23:53:42.891112 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:53:42.891118 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:53:42.891125 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:53:42.891132 kernel: Fallback order for Node 0: 0
Sep 8 23:53:42.891140 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 8 23:53:42.891146 kernel: Policy zone: DMA
Sep 8 23:53:42.891153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:53:42.891159 kernel: software IO TLB: area num 4.
Sep 8 23:53:42.891166 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 8 23:53:42.891173 kernel: Memory: 2387408K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184880K reserved, 0K cma-reserved)
Sep 8 23:53:42.891180 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:53:42.891186 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:53:42.891194 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:53:42.891201 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:53:42.891207 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:53:42.891214 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:53:42.891223 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:53:42.891229 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:53:42.891236 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 8 23:53:42.891243 kernel: GICv3: 256 SPIs implemented
Sep 8 23:53:42.891249 kernel: GICv3: 0 Extended SPIs implemented
Sep 8 23:53:42.891256 kernel: Root IRQ handler: gic_handle_irq
Sep 8 23:53:42.891262 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 8 23:53:42.891269 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 8 23:53:42.891275 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 8 23:53:42.891282 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 8 23:53:42.891289 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 8 23:53:42.891297 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 8 23:53:42.891304 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 8 23:53:42.891311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:53:42.891318 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:53:42.891324 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 8 23:53:42.891331 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 8 23:53:42.891338 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 8 23:53:42.891345 kernel: arm-pv: using stolen time PV
Sep 8 23:53:42.891352 kernel: Console: colour dummy device 80x25
Sep 8 23:53:42.891359 kernel: ACPI: Core revision 20230628
Sep 8 23:53:42.891366 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 8 23:53:42.891374 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:53:42.891381 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 8 23:53:42.891388 kernel: landlock: Up and running.
Sep 8 23:53:42.891394 kernel: SELinux: Initializing.
Sep 8 23:53:42.891401 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:53:42.891408 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:53:42.891415 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:53:42.891422 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:53:42.891429 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:53:42.891438 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:53:42.891445 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 8 23:53:42.891452 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 8 23:53:42.891458 kernel: Remapping and enabling EFI services.
Sep 8 23:53:42.891465 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:53:42.891472 kernel: Detected PIPT I-cache on CPU1
Sep 8 23:53:42.891479 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 8 23:53:42.891486 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 8 23:53:42.891492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:53:42.891501 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 8 23:53:42.891508 kernel: Detected PIPT I-cache on CPU2
Sep 8 23:53:42.891521 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 8 23:53:42.891530 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 8 23:53:42.891538 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:53:42.891545 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 8 23:53:42.891573 kernel: Detected PIPT I-cache on CPU3
Sep 8 23:53:42.891581 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 8 23:53:42.891589 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 8 23:53:42.891598 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:53:42.891605 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 8 23:53:42.891612 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:53:42.891620 kernel: SMP: Total of 4 processors activated.
Sep 8 23:53:42.891627 kernel: CPU features: detected: 32-bit EL0 Support
Sep 8 23:53:42.891634 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 8 23:53:42.891641 kernel: CPU features: detected: Common not Private translations
Sep 8 23:53:42.891648 kernel: CPU features: detected: CRC32 instructions
Sep 8 23:53:42.891657 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 8 23:53:42.891665 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 8 23:53:42.891672 kernel: CPU features: detected: LSE atomic instructions
Sep 8 23:53:42.891680 kernel: CPU features: detected: Privileged Access Never
Sep 8 23:53:42.891687 kernel: CPU features: detected: RAS Extension Support
Sep 8 23:53:42.891694 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 8 23:53:42.891702 kernel: CPU: All CPU(s) started at EL1
Sep 8 23:53:42.891709 kernel: alternatives: applying system-wide alternatives
Sep 8 23:53:42.891716 kernel: devtmpfs: initialized
Sep 8 23:53:42.891728 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:53:42.891738 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:53:42.891745 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:53:42.891753 kernel: SMBIOS 3.0.0 present.
Sep 8 23:53:42.891761 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 8 23:53:42.891768 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:53:42.891775 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 8 23:53:42.891783 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 8 23:53:42.891790 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 8 23:53:42.891797 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:53:42.891806 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Sep 8 23:53:42.891816 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:53:42.891826 kernel: cpuidle: using governor menu
Sep 8 23:53:42.891837 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 8 23:53:42.891845 kernel: ASID allocator initialised with 32768 entries
Sep 8 23:53:42.891854 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:53:42.891862 kernel: Serial: AMBA PL011 UART driver
Sep 8 23:53:42.891869 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 8 23:53:42.891877 kernel: Modules: 0 pages in range for non-PLT usage
Sep 8 23:53:42.891885 kernel: Modules: 509248 pages in range for PLT usage
Sep 8 23:53:42.891893 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:53:42.891900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:53:42.891907 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 8 23:53:42.891914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 8 23:53:42.891922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:53:42.891929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:53:42.891936 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 8 23:53:42.891943 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 8 23:53:42.891952 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:53:42.891959 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:53:42.891966 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:53:42.891973 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:53:42.891981 kernel: ACPI: Interpreter enabled
Sep 8 23:53:42.891988 kernel: ACPI: Using GIC for interrupt routing
Sep 8 23:53:42.891995 kernel: ACPI: MCFG table detected, 1 entries
Sep 8 23:53:42.892002 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 8 23:53:42.892010 kernel: printk: console [ttyAMA0] enabled
Sep 8 23:53:42.892019 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:53:42.892181 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:53:42.892258 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 8 23:53:42.892329 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 8 23:53:42.892401 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 8 23:53:42.892471 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 8 23:53:42.892481 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 8 23:53:42.892492 kernel: PCI host bridge to bus 0000:00
Sep 8 23:53:42.892587 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 8 23:53:42.892658 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 8 23:53:42.892734 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 8 23:53:42.892797 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:53:42.892889 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 8 23:53:42.892971 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 8 23:53:42.893045 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 8 23:53:42.893112 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 8 23:53:42.893179 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:53:42.893247 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:53:42.893315 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 8 23:53:42.893382 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 8 23:53:42.893447 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 8 23:53:42.893510 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 8 23:53:42.893585 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 8 23:53:42.893595 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 8 23:53:42.893603 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 8 23:53:42.893610 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 8 23:53:42.893618 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 8 23:53:42.893625 kernel: iommu: Default domain type: Translated
Sep 8 23:53:42.893636 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 8 23:53:42.893643 kernel: efivars: Registered efivars operations
Sep 8 23:53:42.893654 kernel: vgaarb: loaded
Sep 8 23:53:42.893663 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 8 23:53:42.893670 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:53:42.893678 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:53:42.893685 kernel: pnp: PnP ACPI init
Sep 8 23:53:42.893780 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 8 23:53:42.893791 kernel: pnp: PnP ACPI: found 1 devices
Sep 8 23:53:42.893801 kernel: NET: Registered PF_INET protocol family
Sep 8 23:53:42.893808 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:53:42.893820 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:53:42.893828 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:53:42.893835 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:53:42.893843 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:53:42.893850 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:53:42.893858 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:53:42.893867 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:53:42.893875 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:53:42.893882 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:53:42.893889 kernel: kvm [1]: HYP mode not available
Sep 8 23:53:42.893896 kernel: Initialise system trusted keyrings
Sep 8 23:53:42.893903 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:53:42.893910 kernel: Key type asymmetric registered
Sep 8 23:53:42.893917 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:53:42.893925 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 8 23:53:42.893933 kernel: io scheduler mq-deadline registered
Sep 8 23:53:42.893941 kernel: io scheduler kyber registered
Sep 8 23:53:42.893949 kernel: io scheduler bfq registered
Sep 8 23:53:42.893956 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 8 23:53:42.893964 kernel: ACPI: button: Power Button [PWRB]
Sep 8 23:53:42.893972 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 8 23:53:42.894044 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 8 23:53:42.894054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:53:42.894061 kernel: thunder_xcv, ver 1.0
Sep 8 23:53:42.894068 kernel: thunder_bgx, ver 1.0
Sep 8 23:53:42.894078 kernel: nicpf, ver 1.0
Sep 8 23:53:42.894085 kernel: nicvf, ver 1.0
Sep 8 23:53:42.894161 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 8 23:53:42.894225 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:53:42 UTC (1757375622)
Sep 8 23:53:42.894235 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 8 23:53:42.894242 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 8 23:53:42.894249 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 8 23:53:42.894256 kernel: watchdog: Hard watchdog permanently disabled
Sep 8 23:53:42.894266 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:53:42.894273 kernel: Segment Routing with IPv6
Sep 8 23:53:42.894280 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:53:42.894287 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:53:42.894294 kernel: Key type dns_resolver registered
Sep 8 23:53:42.894302 kernel: registered taskstats version 1
Sep 8 23:53:42.894309 kernel: Loading compiled-in X.509 certificates
Sep 8 23:53:42.894316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 98feb45e0c7a714eab78dfe8a165eb91758e42e9'
Sep 8 23:53:42.894323 kernel: Key type .fscrypt registered
Sep 8 23:53:42.894332 kernel: Key type fscrypt-provisioning registered
Sep 8 23:53:42.894339 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:53:42.894347 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:53:42.894354 kernel: ima: No architecture policies found
Sep 8 23:53:42.894361 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 8 23:53:42.894368 kernel: clk: Disabling unused clocks
Sep 8 23:53:42.894375 kernel: Freeing unused kernel memory: 38400K
Sep 8 23:53:42.894382 kernel: Run /init as init process
Sep 8 23:53:42.894389 kernel: with arguments:
Sep 8 23:53:42.894398 kernel: /init
Sep 8 23:53:42.894405 kernel: with environment:
Sep 8 23:53:42.894412 kernel: HOME=/
Sep 8 23:53:42.894431 kernel: TERM=linux
Sep 8 23:53:42.894438 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:53:42.894446 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:53:42.894456 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:53:42.894467 systemd[1]: Detected virtualization kvm.
Sep 8 23:53:42.894474 systemd[1]: Detected architecture arm64.
Sep 8 23:53:42.894482 systemd[1]: Running in initrd.
Sep 8 23:53:42.894489 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:53:42.894498 systemd[1]: Hostname set to .
Sep 8 23:53:42.894505 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:53:42.894514 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:53:42.894522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:53:42.894532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:53:42.894540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:53:42.894548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:53:42.894556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:53:42.894573 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 8 23:53:42.894582 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:53:42.894591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:53:42.894601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:53:42.894609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:53:42.894617 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:53:42.894625 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:53:42.894633 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:53:42.894640 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:53:42.894648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:53:42.894656 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:53:42.894664 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:53:42.894674 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:53:42.894682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:53:42.894690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:53:42.894698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:53:42.894705 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:53:42.894713 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:53:42.894727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:53:42.894736 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:53:42.894746 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:53:42.894754 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:53:42.894762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:53:42.894770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:53:42.894778 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:53:42.894786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:53:42.894796 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:53:42.894824 systemd-journald[238]: Collecting audit messages is disabled.
Sep 8 23:53:42.894843 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:53:42.894853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:53:42.894862 systemd-journald[238]: Journal started
Sep 8 23:53:42.894881 systemd-journald[238]: Runtime Journal (/run/log/journal/9950ca41460d4be8a139df3cf740be56) is 5.9M, max 47.3M, 41.4M free.
Sep 8 23:53:42.890804 systemd-modules-load[239]: Inserted module 'overlay'
Sep 8 23:53:42.903600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:53:42.905473 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:53:42.906161 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 8 23:53:42.907310 kernel: Bridge firewalling registered
Sep 8 23:53:42.907421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:53:42.908918 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:53:42.919780 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:53:42.921607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:53:42.923738 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:53:42.926965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:53:42.934680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:53:42.936757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:53:42.939323 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:53:42.952778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:53:42.953925 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:53:42.956746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:53:42.971930 dracut-cmdline[283]: dracut-dracut-053
Sep 8 23:53:42.974742 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007
Sep 8 23:53:42.985221 systemd-resolved[278]: Positive Trust Anchors:
Sep 8 23:53:42.985244 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:53:42.985275 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:53:42.991040 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 8 23:53:42.992134 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:53:42.996016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:53:43.044616 kernel: SCSI subsystem initialized
Sep 8 23:53:43.048609 kernel: Loading iSCSI transport class v2.0-870.
Sep 8 23:53:43.057614 kernel: iscsi: registered transport (tcp)
Sep 8 23:53:43.071603 kernel: iscsi: registered transport (qla4xxx)
Sep 8 23:53:43.071642 kernel: QLogic iSCSI HBA Driver
Sep 8 23:53:43.115924 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:53:43.126790 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 8 23:53:43.143016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 8 23:53:43.143105 kernel: device-mapper: uevent: version 1.0.3
Sep 8 23:53:43.143954 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 8 23:53:43.190599 kernel: raid6: neonx8 gen() 15776 MB/s
Sep 8 23:53:43.207575 kernel: raid6: neonx4 gen() 15792 MB/s
Sep 8 23:53:43.224579 kernel: raid6: neonx2 gen() 13214 MB/s
Sep 8 23:53:43.241578 kernel: raid6: neonx1 gen() 10535 MB/s
Sep 8 23:53:43.258579 kernel: raid6: int64x8 gen() 6780 MB/s
Sep 8 23:53:43.275579 kernel: raid6: int64x4 gen() 7340 MB/s
Sep 8 23:53:43.292575 kernel: raid6: int64x2 gen() 6101 MB/s
Sep 8 23:53:43.309576 kernel: raid6: int64x1 gen() 5049 MB/s
Sep 8 23:53:43.309597 kernel: raid6: using algorithm neonx4 gen() 15792 MB/s
Sep 8 23:53:43.326578 kernel: raid6: .... xor() 12398 MB/s, rmw enabled
Sep 8 23:53:43.326594 kernel: raid6: using neon recovery algorithm
Sep 8 23:53:43.331641 kernel: xor: measuring software checksum speed
Sep 8 23:53:43.331668 kernel: 8regs : 21613 MB/sec
Sep 8 23:53:43.332675 kernel: 32regs : 21161 MB/sec
Sep 8 23:53:43.332687 kernel: arm64_neon : 27974 MB/sec
Sep 8 23:53:43.332696 kernel: xor: using function: arm64_neon (27974 MB/sec)
Sep 8 23:53:43.380588 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 8 23:53:43.390318 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:53:43.400772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:53:43.414056 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Sep 8 23:53:43.417703 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:53:43.423921 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 8 23:53:43.434998 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Sep 8 23:53:43.462480 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:53:43.471755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:53:43.516619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:53:43.527770 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 8 23:53:43.538618 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:53:43.540047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:53:43.541669 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:53:43.542873 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:53:43.551835 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 8 23:53:43.561323 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:53:43.580586 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 8 23:53:43.583581 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 8 23:53:43.587802 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 8 23:53:43.587853 kernel: GPT:9289727 != 19775487
Sep 8 23:53:43.587865 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 8 23:53:43.587874 kernel: GPT:9289727 != 19775487
Sep 8 23:53:43.587885 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 8 23:53:43.590060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:53:43.599392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:53:43.590356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:53:43.601667 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:53:43.602733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:53:43.603529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:53:43.605821 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:53:43.616861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:53:43.623585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (512)
Sep 8 23:53:43.630589 kernel: BTRFS: device fsid 75950a77-34ea-4c25-8b07-0ac9de89ed80 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (523)
Sep 8 23:53:43.633706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:53:43.648253 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 8 23:53:43.657150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 8 23:53:43.671408 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:53:43.678955 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 8 23:53:43.680234 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 8 23:53:43.692790 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 8 23:53:43.694746 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:53:43.699281 disk-uuid[555]: Primary Header is updated.
Sep 8 23:53:43.699281 disk-uuid[555]: Secondary Entries is updated.
Sep 8 23:53:43.699281 disk-uuid[555]: Secondary Header is updated.
Sep 8 23:53:43.702166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:53:43.715811 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:53:44.710617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:53:44.711415 disk-uuid[556]: The operation has completed successfully.
Sep 8 23:53:44.737473 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 8 23:53:44.737618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 8 23:53:44.787784 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 8 23:53:44.790858 sh[576]: Success
Sep 8 23:53:44.801584 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 8 23:53:44.833847 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 8 23:53:44.842248 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 8 23:53:44.845611 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 8 23:53:44.856432 kernel: BTRFS info (device dm-0): first mount of filesystem 75950a77-34ea-4c25-8b07-0ac9de89ed80
Sep 8 23:53:44.856488 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:53:44.856500 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 8 23:53:44.856511 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 8 23:53:44.857699 kernel: BTRFS info (device dm-0): using free space tree
Sep 8 23:53:44.861398 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 8 23:53:44.862902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 8 23:53:44.873778 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 8 23:53:44.875478 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 8 23:53:44.894170 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:53:44.894235 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:53:44.894247 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:53:44.897596 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:53:44.902591 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:53:44.907637 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 8 23:53:44.914831 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 8 23:53:44.976177 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:53:44.982818 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:53:44.986540 ignition[668]: Ignition 2.20.0
Sep 8 23:53:44.986550 ignition[668]: Stage: fetch-offline
Sep 8 23:53:44.986614 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:53:44.986624 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:53:44.986841 ignition[668]: parsed url from cmdline: ""
Sep 8 23:53:44.986845 ignition[668]: no config URL provided
Sep 8 23:53:44.986850 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Sep 8 23:53:44.986858 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Sep 8 23:53:44.986883 ignition[668]: op(1): [started] loading QEMU firmware config module
Sep 8 23:53:44.986887 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 8 23:53:44.992531 ignition[668]: op(1): [finished] loading QEMU firmware config module
Sep 8 23:53:45.012982 systemd-networkd[766]: lo: Link UP
Sep 8 23:53:45.012994 systemd-networkd[766]: lo: Gained carrier
Sep 8 23:53:45.013868 systemd-networkd[766]: Enumeration completed
Sep 8 23:53:45.014142 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:53:45.014260 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:53:45.014264 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:53:45.014972 systemd-networkd[766]: eth0: Link UP
Sep 8 23:53:45.014975 systemd-networkd[766]: eth0: Gained carrier
Sep 8 23:53:45.014983 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:53:45.016353 systemd[1]: Reached target network.target - Network.
Sep 8 23:53:45.029627 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:53:45.044765 ignition[668]: parsing config with SHA512: b6dcf9a7a961f806e964e75220fac19a4a6b86406195f928a8d40a156c15b8de71bca53c725e54b25a19e240c7e3b4368e73b45b473703aba06a77a040c7432a
Sep 8 23:53:45.049610 unknown[668]: fetched base config from "system"
Sep 8 23:53:45.049620 unknown[668]: fetched user config from "qemu"
Sep 8 23:53:45.050245 ignition[668]: fetch-offline: fetch-offline passed
Sep 8 23:53:45.050490 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.98
Sep 8 23:53:45.050327 ignition[668]: Ignition finished successfully
Sep 8 23:53:45.050497 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Sep 8 23:53:45.053607 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:53:45.055147 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 8 23:53:45.069815 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 8 23:53:45.082380 ignition[774]: Ignition 2.20.0
Sep 8 23:53:45.082390 ignition[774]: Stage: kargs
Sep 8 23:53:45.082576 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:53:45.082587 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:53:45.083499 ignition[774]: kargs: kargs passed
Sep 8 23:53:45.083547 ignition[774]: Ignition finished successfully
Sep 8 23:53:45.086595 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 8 23:53:45.100832 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 8 23:53:45.110809 ignition[783]: Ignition 2.20.0
Sep 8 23:53:45.110818 ignition[783]: Stage: disks
Sep 8 23:53:45.110984 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:53:45.110995 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:53:45.111962 ignition[783]: disks: disks passed
Sep 8 23:53:45.113389 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 8 23:53:45.112017 ignition[783]: Ignition finished successfully
Sep 8 23:53:45.114429 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 8 23:53:45.115374 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 8 23:53:45.116908 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:53:45.118202 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:53:45.119592 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:53:45.128755 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 8 23:53:45.138955 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 8 23:53:45.145630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 8 23:53:45.147472 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 8 23:53:45.192591 kernel: EXT4-fs (vda9): mounted filesystem 3b93848a-00fd-42cd-b996-7bf357d8ae77 r/w with ordered data mode. Quota mode: none.
Sep 8 23:53:45.192630 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 8 23:53:45.193694 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:53:45.208693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:53:45.210404 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 8 23:53:45.212574 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 8 23:53:45.212653 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 8 23:53:45.212683 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:53:45.218783 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (801)
Sep 8 23:53:45.216659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 8 23:53:45.218859 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 8 23:53:45.222853 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:53:45.222874 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:53:45.222884 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:53:45.225575 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:53:45.226404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:53:45.255672 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Sep 8 23:53:45.260362 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Sep 8 23:53:45.264802 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Sep 8 23:53:45.268741 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 8 23:53:45.338978 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 8 23:53:45.349709 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 8 23:53:45.351213 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 8 23:53:45.356587 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:53:45.370871 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 8 23:53:45.378221 ignition[915]: INFO : Ignition 2.20.0
Sep 8 23:53:45.378221 ignition[915]: INFO : Stage: mount
Sep 8 23:53:45.379575 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:53:45.379575 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:53:45.379575 ignition[915]: INFO : mount: mount passed
Sep 8 23:53:45.379575 ignition[915]: INFO : Ignition finished successfully
Sep 8 23:53:45.381792 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 8 23:53:45.391696 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 8 23:53:45.987414 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 8 23:53:46.000790 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:53:46.009596 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (929)
Sep 8 23:53:46.009637 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:53:46.011125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:53:46.011634 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:53:46.013592 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:53:46.014746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:53:46.036944 ignition[946]: INFO : Ignition 2.20.0
Sep 8 23:53:46.036944 ignition[946]: INFO : Stage: files
Sep 8 23:53:46.038227 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:53:46.038227 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:53:46.038227 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Sep 8 23:53:46.041185 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 8 23:53:46.041185 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 8 23:53:46.041185 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 8 23:53:46.044630 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 8 23:53:46.044630 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 8 23:53:46.044630 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 8 23:53:46.044630 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 8 23:53:46.041626 unknown[946]: wrote ssh authorized keys file for user: core
Sep 8 23:53:46.091354 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 8 23:53:46.423675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 8 23:53:46.423675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:53:46.423675 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 8 23:53:46.458764 systemd-networkd[766]: eth0: Gained IPv6LL
Sep 8 23:53:46.629060 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 8 23:53:46.764836 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:53:46.764836 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:53:46.768703 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 8 23:53:47.066546 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 8 23:53:47.480930 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:53:47.480930 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 8 23:53:47.483763 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:53:47.497834 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:53:47.500879 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:53:47.502109 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:53:47.502109 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 8 23:53:47.502109 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 8 23:53:47.502109 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:53:47.502109 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:53:47.502109 ignition[946]: INFO : files: files passed
Sep 8 23:53:47.502109 ignition[946]: INFO : Ignition finished successfully
Sep 8 23:53:47.504424 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 8 23:53:47.520784 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 8 23:53:47.522486 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 8 23:53:47.524991 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 8 23:53:47.525080 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 8 23:53:47.537216 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 8 23:53:47.540800 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:53:47.540800 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:53:47.547371 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:53:47.548470 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 8 23:53:47.551747 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 8 23:53:47.563738 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 8 23:53:47.585436 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 8 23:53:47.585650 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 8 23:53:47.587647 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:53:47.589141 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:53:47.590620 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:53:47.591520 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:53:47.607202 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:53:47.622781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:53:47.630521 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:53:47.631727 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:53:47.633548 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:53:47.634981 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:53:47.635107 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:53:47.637227 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:53:47.638851 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:53:47.640283 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:53:47.641811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:53:47.643414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:53:47.644950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:53:47.646621 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:53:47.648483 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:53:47.650239 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:53:47.651671 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:53:47.652983 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:53:47.653111 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:53:47.655022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:53:47.656552 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:53:47.658453 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:53:47.661594 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:53:47.663095 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:53:47.663223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:53:47.665463 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:53:47.665613 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:53:47.667550 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:53:47.669024 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:53:47.673595 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:53:47.675808 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:53:47.676694 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:53:47.678087 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 8 23:53:47.678178 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:53:47.679401 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:53:47.679485 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:53:47.680667 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:53:47.680797 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:53:47.682234 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:53:47.682343 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:53:47.693788 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:53:47.694525 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:53:47.694677 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:53:47.697423 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:53:47.698845 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:53:47.698967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:53:47.700675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:53:47.700809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:53:47.707397 ignition[1001]: INFO : Ignition 2.20.0 Sep 8 23:53:47.707397 ignition[1001]: INFO : Stage: umount Sep 8 23:53:47.708846 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:53:47.708846 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:53:47.708846 ignition[1001]: INFO : umount: umount passed Sep 8 23:53:47.708846 ignition[1001]: INFO : Ignition finished successfully Sep 8 23:53:47.708914 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:53:47.709005 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:53:47.710934 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:53:47.711024 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:53:47.713058 systemd[1]: Stopped target network.target - Network. Sep 8 23:53:47.714051 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:53:47.714117 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:53:47.715644 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:53:47.715700 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:53:47.717285 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:53:47.717344 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:53:47.719325 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:53:47.719387 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:53:47.721082 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:53:47.722441 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:53:47.725165 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:53:47.725815 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:53:47.725921 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:53:47.729914 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Sep 8 23:53:47.730160 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:53:47.730264 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:53:47.733247 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:53:47.734326 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:53:47.734378 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:53:47.745719 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:53:47.746403 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:53:47.746464 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:53:47.748203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:53:47.748251 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:53:47.750700 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:53:47.750750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:53:47.752341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:53:47.752382 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:53:47.754751 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:53:47.758614 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:53:47.758678 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:53:47.764935 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:53:47.765043 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:53:47.773384 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:53:47.773558 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:53:47.775578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:53:47.775623 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:53:47.777036 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:53:47.777066 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:53:47.778682 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:53:47.778750 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:53:47.781178 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:53:47.781229 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:53:47.783334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:53:47.783386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:53:47.795798 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:53:47.796658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:53:47.796727 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:53:47.799262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:53:47.799307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 8 23:53:47.802341 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:53:47.802395 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:53:47.802743 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:53:47.802839 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:53:47.804447 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:53:47.804531 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:53:47.808093 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:53:47.808979 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:53:47.809045 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:53:47.811236 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:53:47.820003 systemd[1]: Switching root. Sep 8 23:53:47.847228 systemd-journald[238]: Journal stopped Sep 8 23:53:48.586849 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Sep 8 23:53:48.586917 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:53:48.586929 kernel: SELinux: policy capability open_perms=1 Sep 8 23:53:48.586939 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:53:48.586948 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:53:48.586957 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:53:48.586966 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:53:48.586979 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:53:48.586988 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:53:48.586998 systemd[1]: Successfully loaded SELinux policy in 31.588ms. Sep 8 23:53:48.587021 kernel: audit: type=1403 audit(1757375628.006:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:53:48.587031 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.391ms. Sep 8 23:53:48.587046 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:53:48.587058 systemd[1]: Detected virtualization kvm. Sep 8 23:53:48.587068 systemd[1]: Detected architecture arm64. Sep 8 23:53:48.587078 systemd[1]: Detected first boot. Sep 8 23:53:48.587090 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:53:48.587101 zram_generator::config[1047]: No configuration found. Sep 8 23:53:48.587112 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:53:48.587121 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:53:48.587132 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:53:48.587142 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:53:48.587153 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:53:48.587163 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:53:48.587175 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:53:48.587186 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 8 23:53:48.587196 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:53:48.587206 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:53:48.587217 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:53:48.587229 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:53:48.587240 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:53:48.587250 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:53:48.587261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:53:48.587273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:53:48.587283 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:53:48.587293 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:53:48.587304 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:53:48.587314 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:53:48.587325 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 8 23:53:48.587335 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:53:48.587345 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:53:48.587357 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:53:48.587367 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:53:48.587377 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:53:48.587387 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:53:48.587397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:53:48.587407 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:53:48.587418 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:53:48.587428 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:53:48.587438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:53:48.587451 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:53:48.587461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:53:48.587471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:53:48.587482 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:53:48.587492 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:53:48.587502 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:53:48.587511 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:53:48.587521 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:53:48.587531 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:53:48.587543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:53:48.587554 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 8 23:53:48.587596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:53:48.587611 systemd[1]: Reached target machines.target - Containers. Sep 8 23:53:48.587621 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:53:48.587632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:53:48.587643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:53:48.587653 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 8 23:53:48.587666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:53:48.587676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:53:48.587696 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:53:48.587707 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:53:48.587719 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:53:48.587729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:53:48.587739 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:53:48.587749 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:53:48.587762 kernel: fuse: init (API version 7.39) Sep 8 23:53:48.587772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:53:48.587783 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:53:48.587794 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:53:48.587805 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:53:48.587815 kernel: loop: module loaded Sep 8 23:53:48.587824 kernel: ACPI: bus type drm_connector registered Sep 8 23:53:48.587834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:53:48.587845 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:53:48.587857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:53:48.587868 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:53:48.587878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:53:48.587889 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:53:48.587899 systemd[1]: Stopped verity-setup.service. Sep 8 23:53:48.587936 systemd-journald[1126]: Collecting audit messages is disabled. Sep 8 23:53:48.587961 systemd-journald[1126]: Journal started Sep 8 23:53:48.587982 systemd-journald[1126]: Runtime Journal (/run/log/journal/9950ca41460d4be8a139df3cf740be56) is 5.9M, max 47.3M, 41.4M free. Sep 8 23:53:48.404656 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:53:48.417473 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:53:48.417908 systemd[1]: systemd-journald.service: Deactivated successfully. 
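The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop services here (and their repeated start/finish pairs later in the log) are all instances of a single template unit, modprobe@.service, with the module name passed as the instance specifier. A sketch of the upstream pattern such a template follows (paraphrased, not a quote of Flatcar's exact unit file):

# modprobe@.service (sketch of the upstream template)
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no

[Service]
Type=oneshot
# %i expands to the instance name, e.g. "fuse" for modprobe@fuse.service
ExecStart=/usr/sbin/modprobe -abq %i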
Sep 8 23:53:48.589995 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:53:48.590694 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:53:48.591650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:53:48.592644 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:53:48.593488 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:53:48.594590 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:53:48.595519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:53:48.596655 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:53:48.597879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:53:48.599120 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:53:48.599288 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:53:48.600526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:53:48.600715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:53:48.601848 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:53:48.602021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:53:48.603097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:53:48.603258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:53:48.604692 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:53:48.604880 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:53:48.605963 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:53:48.606125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:53:48.607323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:53:48.608751 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:53:48.609980 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:53:48.611259 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:53:48.624271 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:53:48.633689 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:53:48.635592 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:53:48.636483 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:53:48.636525 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:53:48.638430 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:53:48.640501 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:53:48.642515 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:53:48.643549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:53:48.644963 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 8 23:53:48.646728 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:53:48.647698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:53:48.651814 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:53:48.652782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:53:48.653161 systemd-journald[1126]: Time spent on flushing to /var/log/journal/9950ca41460d4be8a139df3cf740be56 is 21.779ms for 870 entries. Sep 8 23:53:48.653161 systemd-journald[1126]: System Journal (/var/log/journal/9950ca41460d4be8a139df3cf740be56) is 8M, max 195.6M, 187.6M free. Sep 8 23:53:48.689594 systemd-journald[1126]: Received client request to flush runtime journal. Sep 8 23:53:48.689690 kernel: loop0: detected capacity change from 0 to 207008 Sep 8 23:53:48.653937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:53:48.658891 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:53:48.662226 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:53:48.666641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:53:48.668494 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:53:48.669780 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:53:48.670933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:53:48.674082 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:53:48.678206 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:53:48.693770 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:53:48.697808 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:53:48.700594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:53:48.700653 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:53:48.702340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:53:48.703846 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:53:48.710730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:53:48.715284 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 8 23:53:48.721720 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:53:48.726593 kernel: loop1: detected capacity change from 0 to 113512 Sep 8 23:53:48.737876 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 8 23:53:48.737894 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 8 23:53:48.742694 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
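journald is flushing the volatile journal into persistent storage here: the runtime journal in /run/log/journal was capped at 47.3M and the system journal in /var/log/journal at 195.6M, both derived from filesystem size by default. Where fixed caps are wanted, a configuration drop-in can pin them; a hedged sketch, delivered like any other Butane-managed file (the file name and sizes are illustrative):

storage:
  files:
    - path: /etc/systemd/journald.conf.d/10-limits.conf
      contents:
        inline: |
          [Journal]
          # cap for /var/log/journal (persistent)
          SystemMaxUse=200M
          # cap for /run/log/journal (volatile)
          RuntimeMaxUse=48M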
Sep 8 23:53:48.756595 kernel: loop2: detected capacity change from 0 to 123192 Sep 8 23:53:48.791584 kernel: loop3: detected capacity change from 0 to 207008 Sep 8 23:53:48.797593 kernel: loop4: detected capacity change from 0 to 113512 Sep 8 23:53:48.802623 kernel: loop5: detected capacity change from 0 to 123192 Sep 8 23:53:48.805900 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:53:48.806607 (sd-merge)[1189]: Merged extensions into '/usr'. Sep 8 23:53:48.810352 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:53:48.810374 systemd[1]: Reloading... Sep 8 23:53:48.876598 zram_generator::config[1216]: No configuration found. Sep 8 23:53:48.926642 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:53:48.984428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:53:49.046549 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:53:49.046809 systemd[1]: Reloading finished in 235 ms. Sep 8 23:53:49.063413 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:53:49.064829 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:53:49.079973 systemd[1]: Starting ensure-sysext.service... Sep 8 23:53:49.081902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:53:49.090922 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:53:49.091076 systemd[1]: Reloading... Sep 8 23:53:49.098028 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:53:49.098233 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:53:49.098878 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:53:49.099086 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Sep 8 23:53:49.099137 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Sep 8 23:53:49.101974 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:53:49.101988 systemd-tmpfiles[1253]: Skipping /boot Sep 8 23:53:49.111163 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:53:49.111180 systemd-tmpfiles[1253]: Skipping /boot Sep 8 23:53:49.146656 zram_generator::config[1288]: No configuration found. Sep 8 23:53:49.231450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:53:49.292681 systemd[1]: Reloading finished in 201 ms. Sep 8 23:53:49.309581 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:53:49.325213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:53:49.333414 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:53:49.336139 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
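The systemd-tmpfiles "Duplicate line for path ... ignoring" messages above are warnings, not errors: when two tmpfiles.d entries claim the same path, the first one parsed wins and later duplicates are dropped. Each entry is a single positional line (type, path, mode, user, group, age). A hedged sketch of a custom drop-in, shipped the same way as the files above (the name and entry are illustrative):

storage:
  files:
    - path: /etc/tmpfiles.d/60-demo.conf
      contents:
        inline: |
          # Type  Path             Mode  User  Group  Age
          d       /var/cache/demo  0755  root  root   7d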
Sep 8 23:53:49.338536 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:53:49.342994 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:53:49.346938 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:53:49.349216 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:53:49.353347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:53:49.356219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:53:49.360612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:53:49.368247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:53:49.369545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:53:49.369713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:53:49.372020 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:53:49.374655 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:53:49.376222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:53:49.377618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:53:49.379109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:53:49.379266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:53:49.380995 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:53:49.381178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:53:49.381718 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Sep 8 23:53:49.390158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:53:49.398018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:53:49.403073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:53:49.408328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:53:49.409635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:53:49.409834 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:53:49.416220 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:53:49.421026 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:53:49.428375 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:53:49.429882 augenrules[1369]: No rules Sep 8 23:53:49.431380 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:53:49.431590 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
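audit-rules.service runs augenrules, which assembles everything under /etc/audit/rules.d/*.rules into the kernel audit ruleset; "No rules" above simply means that directory is empty on this machine. A hedged sketch of a watch rule that could be dropped in (the file name and key are illustrative):

storage:
  files:
    - path: /etc/audit/rules.d/60-update-conf.rules
      contents:
        inline: |
          # record writes and attribute changes to the update configuration
          -w /etc/flatcar/update.conf -p wa -k update-conf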
Sep 8 23:53:49.434628 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:53:49.436140 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:53:49.436308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:53:49.437825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:53:49.438637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:53:49.441239 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:53:49.441408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:53:49.450181 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:53:49.458581 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1358) Sep 8 23:53:49.466266 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:53:49.482755 systemd[1]: Finished ensure-sysext.service. Sep 8 23:53:49.501410 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 8 23:53:49.510449 systemd-resolved[1322]: Positive Trust Anchors: Sep 8 23:53:49.510469 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:53:49.510501 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:53:49.511819 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:53:49.512695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:53:49.515391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:53:49.517907 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:53:49.518901 systemd-resolved[1322]: Defaulting to hostname 'linux'. Sep 8 23:53:49.521822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:53:49.524126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:53:49.525272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:53:49.525321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:53:49.527341 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:53:49.530834 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:53:49.532887 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
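The "Positive Trust Anchors" dump above is systemd-resolved loading the DNSSEC root trust anchor (the ". IN DS 20326 8 2 ..." record) plus the built-in negative anchors that exempt private and special-use zones (10.in-addr.arpa, .local, .internal and so on) from validation. Whether validation is actually performed is a resolved.conf setting; a hedged drop-in sketch (the file name is illustrative, and the log does not show which DNSSEC= mode this image defaults to):

storage:
  files:
    - path: /etc/systemd/resolved.conf.d/10-dnssec.conf
      contents:
        inline: |
          [Resolve]
          # validate where the upstream server supports it, degrade gracefully otherwise
          DNSSEC=allow-downgrade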
Sep 8 23:53:49.533265 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:53:49.534492 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:53:49.534709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:53:49.535760 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:53:49.535916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:53:49.537455 augenrules[1398]: /sbin/augenrules: No change Sep 8 23:53:49.538318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:53:49.538490 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:53:49.539943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:53:49.540098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:53:49.549096 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:53:49.550250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:53:49.551275 augenrules[1424]: No rules Sep 8 23:53:49.552601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:53:49.554157 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:53:49.554219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:53:49.554530 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:53:49.554767 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:53:49.570210 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:53:49.594325 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:53:49.595818 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:53:49.599455 systemd-networkd[1412]: lo: Link UP Sep 8 23:53:49.599463 systemd-networkd[1412]: lo: Gained carrier Sep 8 23:53:49.601146 systemd-networkd[1412]: Enumeration completed Sep 8 23:53:49.601260 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:53:49.601557 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:53:49.601575 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:53:49.602079 systemd-networkd[1412]: eth0: Link UP Sep 8 23:53:49.602086 systemd-networkd[1412]: eth0: Gained carrier Sep 8 23:53:49.602099 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:53:49.602838 systemd[1]: Reached target network.target - Network. Sep 8 23:53:49.608777 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:53:49.612342 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:53:49.614647 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:53:49.616766 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. 
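eth0 comes up above because it matched the catch-all /usr/lib/systemd/network/zz-default.network shipped in the image, which enables DHCP; the resulting lease is 10.0.0.98/16 with gateway 10.0.0.1. networkd assigns each link to the first matching .network file in lexical filename order, with /etc taking precedence over /usr/lib, so a per-interface file named to sort earlier overrides the default. A hedged sketch (the commented static lines mirror the logged lease and are illustrative only):

storage:
  files:
    - path: /etc/systemd/network/00-eth0.network
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          DHCP=yes
          # static alternative, mirroring the DHCP lease above:
          # Address=10.0.0.98/16
          # Gateway=10.0.0.1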
Sep 8 23:53:49.618712 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:53:49.618777 systemd-timesyncd[1413]: Initial clock synchronization to Mon 2025-09-08 23:53:49.539341 UTC. Sep 8 23:53:49.629084 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:53:49.642865 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:53:49.653782 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:53:49.656321 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:53:49.668264 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:53:49.678273 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:53:49.702109 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:53:49.703464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:53:49.704503 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:53:49.705486 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:53:49.706535 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:53:49.707725 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:53:49.708659 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:53:49.709586 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:53:49.710489 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:53:49.710523 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:53:49.711301 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:53:49.713115 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:53:49.715325 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:53:49.718537 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:53:49.719693 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:53:49.720630 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:53:49.723484 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:53:49.724815 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:53:49.726760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:53:49.728111 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:53:49.729088 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:53:49.729880 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:53:49.730633 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:53:49.730677 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:53:49.731598 systemd[1]: Starting containerd.service - containerd container runtime... 
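containerd.service starts here, and its long CRI configuration dump follows below; the notable values in it are the runc runtime (Type:io.containerd.runc.v2) with SystemdCgroup:true in its options, and SandboxImage:registry.k8s.io/pause:3.8. As a hedged sketch, the same settings in containerd's TOML form; /etc/containerd/config.toml is containerd's conventional default path, and whether Flatcar's unit reads that path or a baked-in /usr one is an assumption not settled by this log:

storage:
  files:
    - path: /etc/containerd/config.toml
      contents:
        inline: |
          version = 2
          [plugins."io.containerd.grpc.v1.cri"]
            sandbox_image = "registry.k8s.io/pause:3.8"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true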
Sep 8 23:53:49.735658 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:53:49.733362 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:53:49.736750 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:53:49.738824 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:53:49.740784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:53:49.742587 jq[1456]: false Sep 8 23:53:49.742835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:53:49.744884 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:53:49.747809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:53:49.751882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:53:49.757183 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:53:49.759072 dbus-daemon[1455]: [system] SELinux support is enabled Sep 8 23:53:49.759535 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:53:49.760100 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:53:49.760974 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:53:49.765868 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:53:49.768537 extend-filesystems[1457]: Found loop3 Sep 8 23:53:49.769138 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:53:49.769475 extend-filesystems[1457]: Found loop4 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found loop5 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda1 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda2 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda3 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found usr Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda4 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda6 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda7 Sep 8 23:53:49.772303 extend-filesystems[1457]: Found vda9 Sep 8 23:53:49.772303 extend-filesystems[1457]: Checking size of /dev/vda9 Sep 8 23:53:49.773274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:53:49.785029 jq[1470]: true Sep 8 23:53:49.786011 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:53:49.786212 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:53:49.786485 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:53:49.786680 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:53:49.789213 extend-filesystems[1457]: Resized partition /dev/vda9 Sep 8 23:53:49.792225 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:53:49.793011 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
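Just below, update_engine starts and schedules its next check, and locksmithd comes up with strategy="reboot": locksmith coordinates when the machine may reboot to apply a downloaded update, and the strategy comes from /etc/flatcar/update.conf, which Ignition wrote earlier in this log. A hedged sketch of that file's usual shape (the GROUP value is an illustrative default; only the reboot strategy is corroborated by the locksmithd line):

storage:
  files:
    - path: /etc/flatcar/update.conf
      overwrite: true
      contents:
        inline: |
          GROUP=stable
          REBOOT_STRATEGY=reboot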
Sep 8 23:53:49.795742 update_engine[1469]: I20250908 23:53:49.794246 1469 main.cc:92] Flatcar Update Engine starting Sep 8 23:53:49.793206 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:53:49.798625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:53:49.804596 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1356) Sep 8 23:53:49.808464 update_engine[1469]: I20250908 23:53:49.808390 1469 update_check_scheduler.cc:74] Next update check in 5m36s Sep 8 23:53:49.809148 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:53:49.819446 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:53:49.829550 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:53:49.834198 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:53:49.835778 tar[1479]: linux-arm64/LICENSE Sep 8 23:53:49.835778 tar[1479]: linux-arm64/helm Sep 8 23:53:49.834225 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:53:49.835601 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:53:49.835627 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 8 23:53:49.839687 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:53:49.839687 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:53:49.839687 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:53:49.839601 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (Power Button) Sep 8 23:53:49.849889 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Sep 8 23:53:49.842497 systemd-logind[1468]: New seat seat0. Sep 8 23:53:49.851171 jq[1481]: true Sep 8 23:53:49.848108 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:53:49.850631 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:53:49.854096 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:53:49.854322 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:53:49.903105 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:53:49.904969 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:53:49.906898 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:53:49.920049 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:53:49.958247 containerd[1483]: time="2025-09-08T23:53:49.958162320Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:53:49.986633 containerd[1483]: time="2025-09-08T23:53:49.986347520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 8 23:53:49.987952 containerd[1483]: time="2025-09-08T23:53:49.987917840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:53:49.988045 containerd[1483]: time="2025-09-08T23:53:49.988029400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:53:49.988109 containerd[1483]: time="2025-09-08T23:53:49.988096400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:53:49.988315 containerd[1483]: time="2025-09-08T23:53:49.988295200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:53:49.988395 containerd[1483]: time="2025-09-08T23:53:49.988377120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:53:49.988511 containerd[1483]: time="2025-09-08T23:53:49.988492320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:53:49.988591 containerd[1483]: time="2025-09-08T23:53:49.988555120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:53:49.988876 containerd[1483]: time="2025-09-08T23:53:49.988850080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.988927960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.988958280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.988967840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.989050680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.989238600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.989360560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.989373200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.989451920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 8 23:53:49.989658 containerd[1483]: time="2025-09-08T23:53:49.989489360Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:53:49.992940 containerd[1483]: time="2025-09-08T23:53:49.992912960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:53:49.993057 containerd[1483]: time="2025-09-08T23:53:49.993040200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:53:49.993134 containerd[1483]: time="2025-09-08T23:53:49.993120240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:53:49.993195 containerd[1483]: time="2025-09-08T23:53:49.993182480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:53:49.993246 containerd[1483]: time="2025-09-08T23:53:49.993234840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:53:49.993468 containerd[1483]: time="2025-09-08T23:53:49.993442320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:53:49.993931 containerd[1483]: time="2025-09-08T23:53:49.993892040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:53:49.994075 containerd[1483]: time="2025-09-08T23:53:49.994056160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:53:49.994110 containerd[1483]: time="2025-09-08T23:53:49.994079280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:53:49.994110 containerd[1483]: time="2025-09-08T23:53:49.994094520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:53:49.994145 containerd[1483]: time="2025-09-08T23:53:49.994127480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994145 containerd[1483]: time="2025-09-08T23:53:49.994141360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994177 containerd[1483]: time="2025-09-08T23:53:49.994154920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994177 containerd[1483]: time="2025-09-08T23:53:49.994168360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994213 containerd[1483]: time="2025-09-08T23:53:49.994183080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994213 containerd[1483]: time="2025-09-08T23:53:49.994196200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994213 containerd[1483]: time="2025-09-08T23:53:49.994208400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:53:49.994262 containerd[1483]: time="2025-09-08T23:53:49.994219440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 8 23:53:49.994262 containerd[1483]: time="2025-09-08T23:53:49.994240200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994262 containerd[1483]: time="2025-09-08T23:53:49.994253480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994311 containerd[1483]: time="2025-09-08T23:53:49.994264640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994311 containerd[1483]: time="2025-09-08T23:53:49.994276920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994311 containerd[1483]: time="2025-09-08T23:53:49.994288200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994311 containerd[1483]: time="2025-09-08T23:53:49.994300040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994311 containerd[1483]: time="2025-09-08T23:53:49.994310640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994404 containerd[1483]: time="2025-09-08T23:53:49.994323880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994404 containerd[1483]: time="2025-09-08T23:53:49.994341120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994404 containerd[1483]: time="2025-09-08T23:53:49.994358120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994404 containerd[1483]: time="2025-09-08T23:53:49.994368960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994404 containerd[1483]: time="2025-09-08T23:53:49.994380240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994404 containerd[1483]: time="2025-09-08T23:53:49.994393040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994502 containerd[1483]: time="2025-09-08T23:53:49.994407160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:53:49.994502 containerd[1483]: time="2025-09-08T23:53:49.994429360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994502 containerd[1483]: time="2025-09-08T23:53:49.994442520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994502 containerd[1483]: time="2025-09-08T23:53:49.994453360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:53:49.994823 containerd[1483]: time="2025-09-08T23:53:49.994787000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:53:49.994823 containerd[1483]: time="2025-09-08T23:53:49.994818320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:53:49.994910 containerd[1483]: time="2025-09-08T23:53:49.994896040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:53:49.994945 containerd[1483]: time="2025-09-08T23:53:49.994915160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:53:49.994945 containerd[1483]: time="2025-09-08T23:53:49.994925240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.994945 containerd[1483]: time="2025-09-08T23:53:49.994939680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:53:49.994994 containerd[1483]: time="2025-09-08T23:53:49.994949680Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:53:49.995021 containerd[1483]: time="2025-09-08T23:53:49.995005440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:53:49.995461 containerd[1483]: time="2025-09-08T23:53:49.995399480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:53:49.995583 containerd[1483]: time="2025-09-08T23:53:49.995521720Z" level=info msg="Connect containerd service" Sep 8 23:53:49.995583 containerd[1483]: time="2025-09-08T23:53:49.995576280Z" level=info msg="using legacy CRI server" Sep 8 23:53:49.995622 containerd[1483]: time="2025-09-08T23:53:49.995585240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:53:49.995959 containerd[1483]: time="2025-09-08T23:53:49.995936040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:53:49.999139 containerd[1483]: time="2025-09-08T23:53:49.999090080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:53:49.999379 containerd[1483]: time="2025-09-08T23:53:49.999257160Z" level=info msg="Start subscribing containerd event" Sep 8 23:53:49.999412 containerd[1483]: time="2025-09-08T23:53:49.999401040Z" level=info msg="Start recovering state" Sep 8 23:53:49.999480 containerd[1483]: time="2025-09-08T23:53:49.999468520Z" level=info msg="Start event monitor" Sep 8 23:53:49.999499 containerd[1483]: time="2025-09-08T23:53:49.999483520Z" level=info msg="Start snapshots syncer" Sep 8 23:53:49.999499 containerd[1483]: time="2025-09-08T23:53:49.999493160Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:53:49.999552 containerd[1483]: time="2025-09-08T23:53:49.999501000Z" level=info msg="Start streaming server" Sep 8 23:53:50.000397 containerd[1483]: time="2025-09-08T23:53:50.000346640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:53:50.000522 containerd[1483]: time="2025-09-08T23:53:50.000504560Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:53:50.000685 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:53:50.002251 containerd[1483]: time="2025-09-08T23:53:50.002219712Z" level=info msg="containerd successfully booted in 0.045511s" Sep 8 23:53:50.221130 tar[1479]: linux-arm64/README.md Sep 8 23:53:50.233967 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:53:50.682693 systemd-networkd[1412]: eth0: Gained IPv6LL Sep 8 23:53:50.684343 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:53:50.688481 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:53:50.703865 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:53:50.706268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:53:50.708323 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:53:50.723978 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:53:50.726625 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:53:50.726809 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:53:50.729362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
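Everything containerd logs above is routine probing: each built-in snapshotter is tried and skipped when the host cannot back it (no aufs module in this kernel, /var/lib/containerd sits on ext4 rather than btrfs or zfs, devmapper unconfigured), leaving native and overlayfs, and the CRI plugin comes up with SystemdCgroup:true for runc. The one real gap is the CNI error: /etc/cni/net.d is empty on first boot and stays that way until a network addon installs a config. A minimal sketch for checking the surviving plugins and hand-seeding a bridge network (the file name and the 10.88.0.0/16 subnet are illustrative assumptions; a real cluster's CNI addon writes its own):

# list snapshotters and their load status (ok vs. skip), assuming the default socket
ctr --address /run/containerd/containerd.sock plugins ls | grep snapshotter

# hand-written bridge config so the CRI plugin's conf syncer finds something
cat >/etc/cni/net.d/10-containerd-net.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF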
Sep 8 23:53:51.250645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:53:51.254940 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:53:51.453130 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:53:51.472134 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:53:51.485826 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:53:51.492474 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:53:51.494611 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:53:51.497544 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:53:51.509547 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:53:51.513058 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:53:51.515636 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 8 23:53:51.517150 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:53:51.518236 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:53:51.522640 systemd[1]: Startup finished in 591ms (kernel) + 5.307s (initrd) + 3.548s (userspace) = 9.447s. Sep 8 23:53:51.603401 kubelet[1553]: E0908 23:53:51.603346 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:53:51.606022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:53:51.606173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:53:51.606480 systemd[1]: kubelet.service: Consumed 750ms CPU time, 259.9M memory peak. Sep 8 23:53:55.182616 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:53:55.183821 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:55746.service - OpenSSH per-connection server daemon (10.0.0.1:55746). Sep 8 23:53:55.238549 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 55746 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:55.239719 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:55.250376 systemd-logind[1468]: New session 1 of user core. Sep 8 23:53:55.251376 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:53:55.269884 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:53:55.278808 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:53:55.281924 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 8 23:53:55.287089 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:53:55.289086 systemd-logind[1468]: New session c1 of user core. Sep 8 23:53:55.399852 systemd[1586]: Queued start job for default target default.target. Sep 8 23:53:55.409555 systemd[1586]: Created slice app.slice - User Application Slice. Sep 8 23:53:55.409604 systemd[1586]: Reached target paths.target - Paths. Sep 8 23:53:55.409639 systemd[1586]: Reached target timers.target - Timers. 
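The kubelet exit above, repeated at each scheduled restart until the file appears, is the usual first-boot crash loop on a node provisioned for kubeadm: the unit starts before /var/lib/kubelet/config.yaml exists, and systemd keeps restarting it until kubeadm init/join writes the real file. A minimal sketch of a file the loader would accept, assuming the systemd cgroup driver the containerd CRI config above already uses (SystemdCgroup:true); the genuine config.yaml carries cluster-specific settings and should come from the provisioner:

# smallest KubeletConfiguration that parses; for illustration only
cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF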
Sep 8 23:53:55.410990 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:53:55.419709 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:53:55.419769 systemd[1586]: Reached target sockets.target - Sockets. Sep 8 23:53:55.419807 systemd[1586]: Reached target basic.target - Basic System. Sep 8 23:53:55.419834 systemd[1586]: Reached target default.target - Main User Target. Sep 8 23:53:55.419859 systemd[1586]: Startup finished in 125ms. Sep 8 23:53:55.420053 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:53:55.421635 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:53:55.506236 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:55748.service - OpenSSH per-connection server daemon (10.0.0.1:55748). Sep 8 23:53:55.542398 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 55748 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:55.543598 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:55.548382 systemd-logind[1468]: New session 2 of user core. Sep 8 23:53:55.560743 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:53:55.614223 sshd[1599]: Connection closed by 10.0.0.1 port 55748 Sep 8 23:53:55.614734 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:55.631998 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:55748.service: Deactivated successfully. Sep 8 23:53:55.633436 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:53:55.635160 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:53:55.636297 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:55750.service - OpenSSH per-connection server daemon (10.0.0.1:55750). Sep 8 23:53:55.637012 systemd-logind[1468]: Removed session 2. Sep 8 23:53:55.678820 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 55750 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:55.680063 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:55.684224 systemd-logind[1468]: New session 3 of user core. Sep 8 23:53:55.696209 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:53:55.746618 sshd[1607]: Connection closed by 10.0.0.1 port 55750 Sep 8 23:53:55.747002 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:55.755773 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:55750.service: Deactivated successfully. Sep 8 23:53:55.757904 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:53:55.758996 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:53:55.767851 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:55764.service - OpenSSH per-connection server daemon (10.0.0.1:55764). Sep 8 23:53:55.768777 systemd-logind[1468]: Removed session 3. Sep 8 23:53:55.804652 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 55764 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:55.805729 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:55.810060 systemd-logind[1468]: New session 4 of user core. Sep 8 23:53:55.819717 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 8 23:53:55.871653 sshd[1615]: Connection closed by 10.0.0.1 port 55764 Sep 8 23:53:55.871999 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:55.884781 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:55764.service: Deactivated successfully. Sep 8 23:53:55.886370 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:53:55.887051 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:53:55.895994 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:55778.service - OpenSSH per-connection server daemon (10.0.0.1:55778). Sep 8 23:53:55.897151 systemd-logind[1468]: Removed session 4. Sep 8 23:53:55.933920 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 55778 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:55.935233 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:55.939760 systemd-logind[1468]: New session 5 of user core. Sep 8 23:53:55.954887 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:53:56.014842 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:53:56.015112 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:53:56.029530 sudo[1624]: pam_unix(sudo:session): session closed for user root Sep 8 23:53:56.031436 sshd[1623]: Connection closed by 10.0.0.1 port 55778 Sep 8 23:53:56.031244 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:56.048856 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:55778.service: Deactivated successfully. Sep 8 23:53:56.050463 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:53:56.051222 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:53:56.062866 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780). Sep 8 23:53:56.063899 systemd-logind[1468]: Removed session 5. Sep 8 23:53:56.100379 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:56.101615 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:56.105164 systemd-logind[1468]: New session 6 of user core. Sep 8 23:53:56.114767 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:53:56.165057 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:53:56.165329 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:53:56.168257 sudo[1634]: pam_unix(sudo:session): session closed for user root Sep 8 23:53:56.172589 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:53:56.172844 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:53:56.204939 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:53:56.227406 augenrules[1656]: No rules Sep 8 23:53:56.228597 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:53:56.228830 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
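The sudo pair above deliberately deletes the stock rule fragments (80-selinux.rules, 99-default.rules) and restarts audit-rules, so augenrules reporting "No rules" is the intended outcome, not a failure. Had the goal been the opposite, rules live as drop-in fragments under /etc/audit/rules.d/; a hypothetical sketch (both watch paths and key names are invented for illustration):

# example watch rules: -w path, -p permissions (w=write, a=attribute), -k search key
cat >/etc/audit/rules.d/10-kube.rules <<'EOF'
-w /etc/kubernetes/ -p wa -k kube-config
-w /var/lib/kubelet/ -p wa -k kubelet-state
EOF
augenrules --load   # recompile fragments into /etc/audit/audit.rules and load them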
Sep 8 23:53:56.229712 sudo[1633]: pam_unix(sudo:session): session closed for user root Sep 8 23:53:56.231081 sshd[1632]: Connection closed by 10.0.0.1 port 55780 Sep 8 23:53:56.231482 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Sep 8 23:53:56.240775 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:55780.service: Deactivated successfully. Sep 8 23:53:56.242294 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:53:56.243598 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:53:56.244748 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:55784.service - OpenSSH per-connection server daemon (10.0.0.1:55784). Sep 8 23:53:56.245493 systemd-logind[1468]: Removed session 6. Sep 8 23:53:56.285679 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 55784 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:53:56.287286 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:53:56.291211 systemd-logind[1468]: New session 7 of user core. Sep 8 23:53:56.302777 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:53:56.352625 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:53:56.352935 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:53:56.636957 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:53:56.637402 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:53:56.844712 dockerd[1689]: time="2025-09-08T23:53:56.844278342Z" level=info msg="Starting up" Sep 8 23:53:57.094065 dockerd[1689]: time="2025-09-08T23:53:57.092943217Z" level=info msg="Loading containers: start." Sep 8 23:53:57.241372 kernel: Initializing XFRM netlink socket Sep 8 23:53:57.316201 systemd-networkd[1412]: docker0: Link UP Sep 8 23:53:57.343744 dockerd[1689]: time="2025-09-08T23:53:57.343694182Z" level=info msg="Loading containers: done." Sep 8 23:53:57.356476 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck937750833-merged.mount: Deactivated successfully. Sep 8 23:53:57.361392 dockerd[1689]: time="2025-09-08T23:53:57.361332578Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:53:57.361498 dockerd[1689]: time="2025-09-08T23:53:57.361445102Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:53:57.361735 dockerd[1689]: time="2025-09-08T23:53:57.361700534Z" level=info msg="Daemon has completed initialization" Sep 8 23:53:57.394234 dockerd[1689]: time="2025-09-08T23:53:57.394169729Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:53:57.394663 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:53:57.924628 containerd[1483]: time="2025-09-08T23:53:57.924528402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 8 23:53:58.453128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529327040.mount: Deactivated successfully. 
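Docker's overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR built into this kernel, the daemon declines native overlayfs diffs (they could leave non-portable redirect directories in layers) and falls back to the slower naive differ, which only costs anything when building images on this node. A quick sketch for confirming the driver and, where the kernel exposes its config, the option itself (/proc/config.gz assumes CONFIG_IKCONFIG_PROC is set):

docker info --format 'storage driver: {{.Driver}}'
zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null || echo 'kernel config not exposed'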
Sep 8 23:53:59.395045 containerd[1483]: time="2025-09-08T23:53:59.394983594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:59.396173 containerd[1483]: time="2025-09-08T23:53:59.395893105Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359" Sep 8 23:53:59.397029 containerd[1483]: time="2025-09-08T23:53:59.396984033Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:59.400208 containerd[1483]: time="2025-09-08T23:53:59.400174724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:53:59.401906 containerd[1483]: time="2025-09-08T23:53:59.401874639Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.477248502s" Sep 8 23:53:59.401998 containerd[1483]: time="2025-09-08T23:53:59.401983537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 8 23:53:59.402745 containerd[1483]: time="2025-09-08T23:53:59.402718534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 8 23:54:00.494338 containerd[1483]: time="2025-09-08T23:54:00.494290732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:00.494836 containerd[1483]: time="2025-09-08T23:54:00.494792863Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554" Sep 8 23:54:00.495730 containerd[1483]: time="2025-09-08T23:54:00.495697307Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:00.498538 containerd[1483]: time="2025-09-08T23:54:00.498500563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:00.500817 containerd[1483]: time="2025-09-08T23:54:00.500781022Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.098027105s" Sep 8 23:54:00.500897 containerd[1483]: time="2025-09-08T23:54:00.500821078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 8 23:54:00.501260 containerd[1483]: 
time="2025-09-08T23:54:00.501237034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 8 23:54:01.699261 containerd[1483]: time="2025-09-08T23:54:01.699206837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:01.700274 containerd[1483]: time="2025-09-08T23:54:01.700199572Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529" Sep 8 23:54:01.701305 containerd[1483]: time="2025-09-08T23:54:01.700955646Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:01.704177 containerd[1483]: time="2025-09-08T23:54:01.704143261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:01.705573 containerd[1483]: time="2025-09-08T23:54:01.705217358Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.203948326s" Sep 8 23:54:01.705573 containerd[1483]: time="2025-09-08T23:54:01.705249001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 8 23:54:01.705811 containerd[1483]: time="2025-09-08T23:54:01.705771325Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 8 23:54:01.856554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:54:01.865771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:01.970009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:01.973452 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:54:02.009314 kubelet[1955]: E0908 23:54:02.009251 1955 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:54:02.012574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:54:02.012732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:54:02.013107 systemd[1]: kubelet.service: Consumed 135ms CPU time, 109.7M memory peak. Sep 8 23:54:02.748717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518054209.mount: Deactivated successfully. 
Sep 8 23:54:02.974365 containerd[1483]: time="2025-09-08T23:54:02.973664463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:02.974365 containerd[1483]: time="2025-09-08T23:54:02.974108527Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726" Sep 8 23:54:02.974910 containerd[1483]: time="2025-09-08T23:54:02.974883952Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:02.977343 containerd[1483]: time="2025-09-08T23:54:02.977305568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:02.978261 containerd[1483]: time="2025-09-08T23:54:02.977953086Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.272145807s" Sep 8 23:54:02.978261 containerd[1483]: time="2025-09-08T23:54:02.978001256Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 8 23:54:02.978851 containerd[1483]: time="2025-09-08T23:54:02.978826726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 8 23:54:03.490215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578928007.mount: Deactivated successfully. 
Sep 8 23:54:04.250277 containerd[1483]: time="2025-09-08T23:54:04.250215110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:04.250787 containerd[1483]: time="2025-09-08T23:54:04.250742809Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 8 23:54:04.251653 containerd[1483]: time="2025-09-08T23:54:04.251609266Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:04.254912 containerd[1483]: time="2025-09-08T23:54:04.254875816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:04.256150 containerd[1483]: time="2025-09-08T23:54:04.256123386Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.277263095s" Sep 8 23:54:04.256150 containerd[1483]: time="2025-09-08T23:54:04.256154763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 8 23:54:04.256778 containerd[1483]: time="2025-09-08T23:54:04.256603780Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:54:04.694721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882301788.mount: Deactivated successfully. 
Sep 8 23:54:04.698754 containerd[1483]: time="2025-09-08T23:54:04.698712313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:04.699394 containerd[1483]: time="2025-09-08T23:54:04.699218056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 8 23:54:04.700157 containerd[1483]: time="2025-09-08T23:54:04.700115690Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:04.702269 containerd[1483]: time="2025-09-08T23:54:04.702218820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:04.703104 containerd[1483]: time="2025-09-08T23:54:04.703080527Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.448284ms" Sep 8 23:54:04.703162 containerd[1483]: time="2025-09-08T23:54:04.703111305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 8 23:54:04.703716 containerd[1483]: time="2025-09-08T23:54:04.703526669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 8 23:54:05.207786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235744918.mount: Deactivated successfully. Sep 8 23:54:06.685730 containerd[1483]: time="2025-09-08T23:54:06.685677889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:06.686739 containerd[1483]: time="2025-09-08T23:54:06.686468292Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 8 23:54:06.687577 containerd[1483]: time="2025-09-08T23:54:06.687448479Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:06.691727 containerd[1483]: time="2025-09-08T23:54:06.691113641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:06.693291 containerd[1483]: time="2025-09-08T23:54:06.693172602Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.989614833s" Sep 8 23:54:06.693291 containerd[1483]: time="2025-09-08T23:54:06.693205265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 8 23:54:11.478529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
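The pull sequence from 23:53:57 to 23:54:06 fetches the full control-plane image set for Kubernetes v1.32.8 (apiserver, controller-manager, scheduler, proxy, plus coredns v1.11.3, pause 3.10 and etcd 3.5.16-0), each "Pulled image" line recording the repo digest and size; the per-pull tmpmount units cleaned up in between are normal unpack scratch space. The images land in containerd's k8s.io namespace and can be listed or re-fetched by hand; a sketch assuming crictl is present and pointed at the default containerd socket:

# CRI view of the image store
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
# containerd-native view of the same namespace
ctr --namespace k8s.io images ls -q | grep '^registry.k8s.io'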
Sep 8 23:54:11.478691 systemd[1]: kubelet.service: Consumed 135ms CPU time, 109.7M memory peak. Sep 8 23:54:11.492853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:11.513408 systemd[1]: Reload requested from client PID 2112 ('systemctl') (unit session-7.scope)... Sep 8 23:54:11.513423 systemd[1]: Reloading... Sep 8 23:54:11.578678 zram_generator::config[2159]: No configuration found. Sep 8 23:54:11.682804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:11.772665 systemd[1]: Reloading finished in 258 ms. Sep 8 23:54:11.814663 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:11.817727 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:54:11.817947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:11.818003 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Sep 8 23:54:11.819749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:11.917301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:11.921133 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:54:11.956239 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:54:11.956239 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:54:11.956239 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
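Nothing in the reload above is fatal: the shipped docker.socket still names /var/run/docker.sock, which systemd silently rewrites to /run/docker.sock while asking for the unit to be updated, and the three kubelet flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) are deprecated in favor of the config file, not broken. Since /usr is read-only on this image, the socket warning would be silenced with a drop-in rather than an edit to the unit; a sketch:

mkdir -p /etc/systemd/system/docker.socket.d
cat >/etc/systemd/system/docker.socket.d/10-run-dir.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload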
Sep 8 23:54:11.956620 kubelet[2203]: I0908 23:54:11.956307 2203 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:54:12.628594 kubelet[2203]: I0908 23:54:12.627885 2203 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:54:12.628594 kubelet[2203]: I0908 23:54:12.627918 2203 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:54:12.628594 kubelet[2203]: I0908 23:54:12.628201 2203 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:54:12.648366 kubelet[2203]: E0908 23:54:12.648327 2203 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:12.651809 kubelet[2203]: I0908 23:54:12.651773 2203 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:54:12.657647 kubelet[2203]: E0908 23:54:12.657603 2203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:54:12.657647 kubelet[2203]: I0908 23:54:12.657642 2203 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:54:12.661234 kubelet[2203]: I0908 23:54:12.661190 2203 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:54:12.662370 kubelet[2203]: I0908 23:54:12.662322 2203 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:54:12.662608 kubelet[2203]: I0908 23:54:12.662366 2203 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:54:12.662704 kubelet[2203]: I0908 23:54:12.662670 2203 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:54:12.662704 kubelet[2203]: I0908 23:54:12.662680 2203 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 23:54:12.662892 kubelet[2203]: I0908 23:54:12.662878 2203 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:12.665207 kubelet[2203]: I0908 23:54:12.665183 2203 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:54:12.665207 kubelet[2203]: I0908 23:54:12.665208 2203 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:54:12.665284 kubelet[2203]: I0908 23:54:12.665229 2203 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:54:12.665284 kubelet[2203]: I0908 23:54:12.665241 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:54:12.666834 kubelet[2203]: W0908 23:54:12.666762 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Sep 8 23:54:12.666834 kubelet[2203]: E0908 23:54:12.666827 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:12.666834 kubelet[2203]: W0908 23:54:12.666766 2203 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Sep 8 23:54:12.666970 kubelet[2203]: E0908 23:54:12.666855 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:12.668600 kubelet[2203]: I0908 23:54:12.668060 2203 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:54:12.668777 kubelet[2203]: I0908 23:54:12.668759 2203 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:54:12.668924 kubelet[2203]: W0908 23:54:12.668914 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 8 23:54:12.669836 kubelet[2203]: I0908 23:54:12.669811 2203 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:54:12.669893 kubelet[2203]: I0908 23:54:12.669848 2203 server.go:1287] "Started kubelet" Sep 8 23:54:12.671141 kubelet[2203]: I0908 23:54:12.669949 2203 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:54:12.671141 kubelet[2203]: I0908 23:54:12.670819 2203 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:54:12.674501 kubelet[2203]: E0908 23:54:12.674242 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186373d85a4e1e8c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:54:12.669824652 +0000 UTC m=+0.745759661,LastTimestamp:2025-09-08 23:54:12.669824652 +0000 UTC m=+0.745759661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:54:12.677452 kubelet[2203]: I0908 23:54:12.677414 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:54:12.678172 kubelet[2203]: I0908 23:54:12.678105 2203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:54:12.678388 kubelet[2203]: I0908 23:54:12.678368 2203 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:54:12.678955 kubelet[2203]: I0908 23:54:12.678834 2203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:54:12.678955 kubelet[2203]: E0908 23:54:12.678916 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:12.679056 kubelet[2203]: I0908 23:54:12.678965 2203 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:54:12.679274 kubelet[2203]: I0908 23:54:12.679249 2203 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:54:12.679345 kubelet[2203]: I0908 23:54:12.679331 2203 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:54:12.680202 kubelet[2203]: W0908 23:54:12.679828 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Sep 8 23:54:12.680202 kubelet[2203]: E0908 23:54:12.679887 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:12.680342 kubelet[2203]: I0908 23:54:12.680317 2203 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:54:12.680475 kubelet[2203]: I0908 23:54:12.680438 2203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:54:12.680747 kubelet[2203]: E0908 23:54:12.680723 2203 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:54:12.681384 kubelet[2203]: E0908 23:54:12.681293 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" Sep 8 23:54:12.681461 kubelet[2203]: I0908 23:54:12.681447 2203 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:54:12.692555 kubelet[2203]: I0908 23:54:12.692530 2203 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:54:12.692727 kubelet[2203]: I0908 23:54:12.692716 2203 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:54:12.692779 kubelet[2203]: I0908 23:54:12.692772 2203 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:12.696280 kubelet[2203]: I0908 23:54:12.696233 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:54:12.697413 kubelet[2203]: I0908 23:54:12.697380 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:54:12.697413 kubelet[2203]: I0908 23:54:12.697412 2203 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:54:12.697516 kubelet[2203]: I0908 23:54:12.697434 2203 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
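Two failure lines in this stretch are expected noise: the crio factory registration fails because no CRI-O socket exists on a containerd host, and the one-off image garbage-collection error ("invalid capacity 0 on image filesystem") clears once stats initialize. More usefully, the HardEvictionThresholds in the container-manager dump above are the kubelet's stock defaults; translated into KubeletConfiguration terms they read as the fragment below (written to a scratch path here; it belongs merged into /var/lib/kubelet/config.yaml, not appended):

cat >/tmp/eviction-defaults.yaml <<'EOF'
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF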
Sep 8 23:54:12.697516 kubelet[2203]: I0908 23:54:12.697441 2203 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:54:12.697516 kubelet[2203]: E0908 23:54:12.697486 2203 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:54:12.698266 kubelet[2203]: W0908 23:54:12.698201 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused Sep 8 23:54:12.698266 kubelet[2203]: E0908 23:54:12.698245 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:12.779875 kubelet[2203]: E0908 23:54:12.779815 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:12.798136 kubelet[2203]: E0908 23:54:12.798103 2203 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:54:12.880342 kubelet[2203]: E0908 23:54:12.880254 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:12.881756 kubelet[2203]: E0908 23:54:12.881721 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" Sep 8 23:54:12.981091 kubelet[2203]: E0908 23:54:12.981025 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:12.998175 kubelet[2203]: E0908 23:54:12.998141 2203 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:54:13.067410 kubelet[2203]: I0908 23:54:13.067377 2203 policy_none.go:49] "None policy: Start" Sep 8 23:54:13.067410 kubelet[2203]: I0908 23:54:13.067415 2203 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:54:13.067542 kubelet[2203]: I0908 23:54:13.067442 2203 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:54:13.072609 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:54:13.082415 kubelet[2203]: E0908 23:54:13.081262 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:13.087131 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:54:13.090064 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
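Every "connection refused" against 10.0.0.98:6443 in this phase is the control-plane chicken-and-egg, not a malfunction: the kubelet's informers, event writer, node registration and lease controller all need the API server, yet the API server is itself one of the static pods this kubelet is about to start, so everything retries with a doubling interval (200ms, 400ms, 800ms above, 1.6s below). While the loop runs, a sketch for watching the endpoint come up:

# fails until the kube-apiserver static pod is listening
curl -ksS --connect-timeout 2 https://10.0.0.98:6443/healthz || true
ss -tlnp | grep ':6443' || echo 'apiserver not listening yet'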
Sep 8 23:54:13.101699 kubelet[2203]: I0908 23:54:13.101492 2203 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:54:13.101797 kubelet[2203]: I0908 23:54:13.101729 2203 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:54:13.101797 kubelet[2203]: I0908 23:54:13.101742 2203 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:54:13.102010 kubelet[2203]: I0908 23:54:13.101984 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:54:13.103755 kubelet[2203]: E0908 23:54:13.103711 2203 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:54:13.103835 kubelet[2203]: E0908 23:54:13.103762 2203 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:54:13.203717 kubelet[2203]: I0908 23:54:13.203613 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:13.204096 kubelet[2203]: E0908 23:54:13.204053 2203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Sep 8 23:54:13.282885 kubelet[2203]: E0908 23:54:13.282816 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" Sep 8 23:54:13.404909 kubelet[2203]: I0908 23:54:13.404881 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:13.405200 kubelet[2203]: E0908 23:54:13.405167 2203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Sep 8 23:54:13.407710 systemd[1]: Created slice kubepods-burstable-pod99ae6f79c90438bbc41887a70efa85a2.slice - libcontainer container kubepods-burstable-pod99ae6f79c90438bbc41887a70efa85a2.slice. Sep 8 23:54:13.415391 kubelet[2203]: E0908 23:54:13.415355 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:13.418589 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 8 23:54:13.429863 kubelet[2203]: E0908 23:54:13.429824 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:13.432768 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
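The three kubepods-burstable-pod<uid>.slice units above map one-to-one onto the static-pod manifests the kubelet found under /etc/kubernetes/manifests (the "Adding static pod path" entry earlier); the "No need to create a mirror pod" errors only say that the API-side mirror objects must wait until the node registers. A sketch for inspecting the static pods locally before the API is reachable, assuming crictl is installed:

ls /etc/kubernetes/manifests
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods --name kube-apiserver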
Sep 8 23:54:13.434497 kubelet[2203]: E0908 23:54:13.434457 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:54:13.483931 kubelet[2203]: I0908 23:54:13.483809 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99ae6f79c90438bbc41887a70efa85a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"99ae6f79c90438bbc41887a70efa85a2\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:13.483931 kubelet[2203]: I0908 23:54:13.483851 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99ae6f79c90438bbc41887a70efa85a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"99ae6f79c90438bbc41887a70efa85a2\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:13.483931 kubelet[2203]: I0908 23:54:13.483873 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:13.483931 kubelet[2203]: I0908 23:54:13.483892 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99ae6f79c90438bbc41887a70efa85a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"99ae6f79c90438bbc41887a70efa85a2\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:13.483931 kubelet[2203]: I0908 23:54:13.483908 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:13.484103 kubelet[2203]: I0908 23:54:13.483933 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:13.484103 kubelet[2203]: I0908 23:54:13.483954 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:13.484103 kubelet[2203]: I0908 23:54:13.483972 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:13.484103 kubelet[2203]: I0908 23:54:13.483986 2203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 8 23:54:13.521479 kubelet[2203]: W0908 23:54:13.521413 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Sep 8 23:54:13.521479 kubelet[2203]: E0908 23:54:13.521480 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:54:13.717203 containerd[1483]: time="2025-09-08T23:54:13.717161346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:99ae6f79c90438bbc41887a70efa85a2,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:13.730888 containerd[1483]: time="2025-09-08T23:54:13.730664125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:13.735326 containerd[1483]: time="2025-09-08T23:54:13.735244616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:13.810849 kubelet[2203]: I0908 23:54:13.810813 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:54:13.811171 kubelet[2203]: E0908 23:54:13.811143 2203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
Sep 8 23:54:13.888647 kubelet[2203]: W0908 23:54:13.888534 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Sep 8 23:54:13.888647 kubelet[2203]: E0908 23:54:13.888616 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:54:14.083482 kubelet[2203]: E0908 23:54:14.083372 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s"
Sep 8 23:54:14.191114 kubelet[2203]: W0908 23:54:14.191071 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Sep 8 23:54:14.191224 kubelet[2203]: E0908 23:54:14.191119 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:54:14.198832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513025282.mount: Deactivated successfully.
Sep 8 23:54:14.204345 containerd[1483]: time="2025-09-08T23:54:14.204260835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:54:14.205754 containerd[1483]: time="2025-09-08T23:54:14.205671668Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:54:14.207101 containerd[1483]: time="2025-09-08T23:54:14.207040306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 8 23:54:14.207663 containerd[1483]: time="2025-09-08T23:54:14.207579258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 8 23:54:14.209451 containerd[1483]: time="2025-09-08T23:54:14.209404094Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:54:14.210716 containerd[1483]: time="2025-09-08T23:54:14.210521996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 8 23:54:14.210716 containerd[1483]: time="2025-09-08T23:54:14.210634398Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:54:14.214244 containerd[1483]: time="2025-09-08T23:54:14.214209031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:54:14.215775 containerd[1483]: time="2025-09-08T23:54:14.215749567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 480.445617ms"
Sep 8 23:54:14.216442 containerd[1483]: time="2025-09-08T23:54:14.216373589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.644736ms"
Sep 8 23:54:14.217643 containerd[1483]: time="2025-09-08T23:54:14.217613203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.36855ms"
Sep 8 23:54:14.220313 kubelet[2203]: W0908 23:54:14.220242 2203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
Sep 8 23:54:14.220313 kubelet[2203]: E0908 23:54:14.220286 2203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError"
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.329873771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.329944497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.329964076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.330033683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.329867538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.329933868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.329948812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:14.331938 containerd[1483]: time="2025-09-08T23:54:14.330023054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:14.335742 containerd[1483]: time="2025-09-08T23:54:14.335598658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:14.335742 containerd[1483]: time="2025-09-08T23:54:14.335693598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:14.335742 containerd[1483]: time="2025-09-08T23:54:14.335721089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:14.335852 containerd[1483]: time="2025-09-08T23:54:14.335805120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:14.351759 systemd[1]: Started cri-containerd-60171180d5aa7b24ad72720a8e732d41793ddea31daf4e3342d078e0a261028c.scope - libcontainer container 60171180d5aa7b24ad72720a8e732d41793ddea31daf4e3342d078e0a261028c.
Sep 8 23:54:14.354750 systemd[1]: Started cri-containerd-d7e254981650734b94df333fc889f167ec3d19d84d2699e977fc7798b7807fbf.scope - libcontainer container d7e254981650734b94df333fc889f167ec3d19d84d2699e977fc7798b7807fbf.
Sep 8 23:54:14.358599 systemd[1]: Started cri-containerd-a6aef7447c34cc10b83b08939efba32b7ae93108dac00aec4fb80b682d5ef438.scope - libcontainer container a6aef7447c34cc10b83b08939efba32b7ae93108dac00aec4fb80b682d5ef438.
Sep 8 23:54:14.389193 containerd[1483]: time="2025-09-08T23:54:14.389141429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"60171180d5aa7b24ad72720a8e732d41793ddea31daf4e3342d078e0a261028c\""
Sep 8 23:54:14.394854 containerd[1483]: time="2025-09-08T23:54:14.394806978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:99ae6f79c90438bbc41887a70efa85a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6aef7447c34cc10b83b08939efba32b7ae93108dac00aec4fb80b682d5ef438\""
Sep 8 23:54:14.394948 containerd[1483]: time="2025-09-08T23:54:14.394815729Z" level=info msg="CreateContainer within sandbox \"60171180d5aa7b24ad72720a8e732d41793ddea31daf4e3342d078e0a261028c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 8 23:54:14.397296 containerd[1483]: time="2025-09-08T23:54:14.397268024Z" level=info msg="CreateContainer within sandbox \"a6aef7447c34cc10b83b08939efba32b7ae93108dac00aec4fb80b682d5ef438\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 8 23:54:14.402467 containerd[1483]: time="2025-09-08T23:54:14.402439853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7e254981650734b94df333fc889f167ec3d19d84d2699e977fc7798b7807fbf\""
Sep 8 23:54:14.407812 containerd[1483]: time="2025-09-08T23:54:14.407546352Z" level=info msg="CreateContainer within sandbox \"d7e254981650734b94df333fc889f167ec3d19d84d2699e977fc7798b7807fbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 8 23:54:14.414397 containerd[1483]: time="2025-09-08T23:54:14.414324408Z" level=info msg="CreateContainer within sandbox \"60171180d5aa7b24ad72720a8e732d41793ddea31daf4e3342d078e0a261028c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b4b21d3cd3c7517893f23a02136d95739c98e5f98df05efe8983b3aba9bba56\""
Sep 8 23:54:14.414966 containerd[1483]: time="2025-09-08T23:54:14.414937442Z" level=info msg="StartContainer for \"8b4b21d3cd3c7517893f23a02136d95739c98e5f98df05efe8983b3aba9bba56\""
Sep 8 23:54:14.416478 containerd[1483]: time="2025-09-08T23:54:14.416434944Z" level=info msg="CreateContainer within sandbox \"a6aef7447c34cc10b83b08939efba32b7ae93108dac00aec4fb80b682d5ef438\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1217ef1bf0f7f8afcc8c9270d87b1bd6945be77489c750c7a2ec73015f8b3ab\""
Sep 8 23:54:14.416769 containerd[1483]: time="2025-09-08T23:54:14.416741741Z" level=info msg="StartContainer for \"a1217ef1bf0f7f8afcc8c9270d87b1bd6945be77489c750c7a2ec73015f8b3ab\""
Sep 8 23:54:14.428664 containerd[1483]: time="2025-09-08T23:54:14.428623938Z" level=info msg="CreateContainer within sandbox \"d7e254981650734b94df333fc889f167ec3d19d84d2699e977fc7798b7807fbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4850db0827aef5b7d5c88bad0a4df03518d8a1bd14a8f1b701cb1d089ee3f3c\""
Sep 8 23:54:14.429161 containerd[1483]: time="2025-09-08T23:54:14.429134240Z" level=info msg="StartContainer for \"b4850db0827aef5b7d5c88bad0a4df03518d8a1bd14a8f1b701cb1d089ee3f3c\""
Sep 8 23:54:14.439757 systemd[1]: Started cri-containerd-8b4b21d3cd3c7517893f23a02136d95739c98e5f98df05efe8983b3aba9bba56.scope - libcontainer container 8b4b21d3cd3c7517893f23a02136d95739c98e5f98df05efe8983b3aba9bba56.
Sep 8 23:54:14.448932 systemd[1]: Started cri-containerd-a1217ef1bf0f7f8afcc8c9270d87b1bd6945be77489c750c7a2ec73015f8b3ab.scope - libcontainer container a1217ef1bf0f7f8afcc8c9270d87b1bd6945be77489c750c7a2ec73015f8b3ab.
Sep 8 23:54:14.460715 systemd[1]: Started cri-containerd-b4850db0827aef5b7d5c88bad0a4df03518d8a1bd14a8f1b701cb1d089ee3f3c.scope - libcontainer container b4850db0827aef5b7d5c88bad0a4df03518d8a1bd14a8f1b701cb1d089ee3f3c.
Sep 8 23:54:14.479224 containerd[1483]: time="2025-09-08T23:54:14.479181575Z" level=info msg="StartContainer for \"8b4b21d3cd3c7517893f23a02136d95739c98e5f98df05efe8983b3aba9bba56\" returns successfully"
Sep 8 23:54:14.496837 containerd[1483]: time="2025-09-08T23:54:14.496785702Z" level=info msg="StartContainer for \"a1217ef1bf0f7f8afcc8c9270d87b1bd6945be77489c750c7a2ec73015f8b3ab\" returns successfully"
Sep 8 23:54:14.513637 containerd[1483]: time="2025-09-08T23:54:14.513586116Z" level=info msg="StartContainer for \"b4850db0827aef5b7d5c88bad0a4df03518d8a1bd14a8f1b701cb1d089ee3f3c\" returns successfully"
Sep 8 23:54:14.612828 kubelet[2203]: I0908 23:54:14.612703 2203 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:54:14.704231 kubelet[2203]: E0908 23:54:14.704195 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:54:14.707571 kubelet[2203]: E0908 23:54:14.707549 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:54:14.708914 kubelet[2203]: E0908 23:54:14.708898 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:54:15.711198 kubelet[2203]: E0908 23:54:15.711154 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:54:15.711501 kubelet[2203]: E0908 23:54:15.711258 2203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:54:15.901586 kubelet[2203]: E0908 23:54:15.900845 2203 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 8 23:54:15.971216 kubelet[2203]: I0908 23:54:15.970900 2203 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 8 23:54:15.971216 kubelet[2203]: E0908 23:54:15.970935 2203 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 8 23:54:15.989045 kubelet[2203]: E0908 23:54:15.988998 2203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:54:16.081393 kubelet[2203]: I0908 23:54:16.081333 2203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:16.088023 kubelet[2203]: E0908 23:54:16.087986 2203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:16.088023 kubelet[2203]: I0908 23:54:16.088017 2203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:54:16.089494 kubelet[2203]: E0908 23:54:16.089459 2203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:54:16.089494 kubelet[2203]: I0908 23:54:16.089485 2203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:16.091395 kubelet[2203]: E0908 23:54:16.091373 2203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:16.669478 kubelet[2203]: I0908 23:54:16.669385 2203 apiserver.go:52] "Watching apiserver"
Sep 8 23:54:16.679840 kubelet[2203]: I0908 23:54:16.679806 2203 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 8 23:54:17.864234 systemd[1]: Reload requested from client PID 2484 ('systemctl') (unit session-7.scope)...
Sep 8 23:54:17.864251 systemd[1]: Reloading...
Sep 8 23:54:17.941622 zram_generator::config[2528]: No configuration found.
Sep 8 23:54:18.039908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 8 23:54:18.147119 systemd[1]: Reloading finished in 282 ms.
Sep 8 23:54:18.168260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:54:18.185982 systemd[1]: kubelet.service: Deactivated successfully.
Sep 8 23:54:18.186244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:54:18.186300 systemd[1]: kubelet.service: Consumed 1.104s CPU time, 130.2M memory peak.
Sep 8 23:54:18.196913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:54:18.301791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:54:18.307018 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 8 23:54:18.348639 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:54:18.348639 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 8 23:54:18.348639 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:54:18.348639 kubelet[2570]: I0908 23:54:18.347544 2570 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:54:18.355800 kubelet[2570]: I0908 23:54:18.355608 2570 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 8 23:54:18.355800 kubelet[2570]: I0908 23:54:18.355643 2570 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:54:18.356234 kubelet[2570]: I0908 23:54:18.356209 2570 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 8 23:54:18.357636 kubelet[2570]: I0908 23:54:18.357615 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 8 23:54:18.361135 kubelet[2570]: I0908 23:54:18.360888 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:54:18.364328 kubelet[2570]: E0908 23:54:18.364302 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 8 23:54:18.364448 kubelet[2570]: I0908 23:54:18.364433 2570 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 8 23:54:18.367327 kubelet[2570]: I0908 23:54:18.367295 2570 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:54:18.367875 kubelet[2570]: I0908 23:54:18.367805 2570 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:54:18.368152 kubelet[2570]: I0908 23:54:18.367848 2570 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:54:18.368257 kubelet[2570]: I0908 23:54:18.368171 2570 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:54:18.368257 kubelet[2570]: I0908 23:54:18.368183 2570 container_manager_linux.go:304] "Creating device plugin manager"
Sep 8 23:54:18.368257 kubelet[2570]: I0908 23:54:18.368238 2570 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:54:18.368381 kubelet[2570]: I0908 23:54:18.368368 2570 kubelet.go:446] "Attempting to sync node with API server"
Sep 8 23:54:18.368408 kubelet[2570]: I0908 23:54:18.368385 2570 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:54:18.368434 kubelet[2570]: I0908 23:54:18.368409 2570 kubelet.go:352] "Adding apiserver pod source"
Sep 8 23:54:18.368434 kubelet[2570]: I0908 23:54:18.368420 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:54:18.371603 kubelet[2570]: I0908 23:54:18.369785 2570 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 8 23:54:18.371603 kubelet[2570]: I0908 23:54:18.370288 2570 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 8 23:54:18.371603 kubelet[2570]: I0908 23:54:18.370840 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:54:18.371603 kubelet[2570]: I0908 23:54:18.370877 2570 server.go:1287] "Started kubelet"
Sep 8 23:54:18.372142 kubelet[2570]: I0908 23:54:18.372100 2570 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:54:18.372288 kubelet[2570]: I0908 23:54:18.372248 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:54:18.372521 kubelet[2570]: I0908 23:54:18.372503 2570 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:54:18.373899 kubelet[2570]: I0908 23:54:18.373723 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:54:18.377805 kubelet[2570]: I0908 23:54:18.377767 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:54:18.380963 kubelet[2570]: E0908 23:54:18.379777 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:54:18.380963 kubelet[2570]: I0908 23:54:18.379819 2570 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:54:18.380963 kubelet[2570]: I0908 23:54:18.380053 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:54:18.380963 kubelet[2570]: I0908 23:54:18.380187 2570 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:54:18.382362 kubelet[2570]: I0908 23:54:18.382338 2570 server.go:479] "Adding debug handlers to kubelet server"
Sep 8 23:54:18.386364 kubelet[2570]: I0908 23:54:18.386231 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:54:18.387119 kubelet[2570]: I0908 23:54:18.387101 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:54:18.387192 kubelet[2570]: I0908 23:54:18.387183 2570 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 8 23:54:18.388599 kubelet[2570]: I0908 23:54:18.388587 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:54:18.388679 kubelet[2570]: I0908 23:54:18.388668 2570 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 8 23:54:18.388998 kubelet[2570]: E0908 23:54:18.388763 2570 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 8 23:54:18.388998 kubelet[2570]: I0908 23:54:18.388977 2570 factory.go:221] Registration of the systemd container factory successfully
Sep 8 23:54:18.389137 kubelet[2570]: I0908 23:54:18.389080 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 8 23:54:18.401421 kubelet[2570]: E0908 23:54:18.401016 2570 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 8 23:54:18.401421 kubelet[2570]: I0908 23:54:18.401129 2570 factory.go:221] Registration of the containerd container factory successfully
Sep 8 23:54:18.432295 kubelet[2570]: I0908 23:54:18.432269 2570 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432468 2570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432493 2570 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432708 2570 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432721 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432739 2570 policy_none.go:49] "None policy: Start"
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432749 2570 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432758 2570 state_mem.go:35] "Initializing new in-memory state store"
Sep 8 23:54:18.433453 kubelet[2570]: I0908 23:54:18.432861 2570 state_mem.go:75] "Updated machine memory state"
Sep 8 23:54:18.436973 kubelet[2570]: I0908 23:54:18.436943 2570 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 8 23:54:18.437129 kubelet[2570]: I0908 23:54:18.437109 2570 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 8 23:54:18.437175 kubelet[2570]: I0908 23:54:18.437123 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 8 23:54:18.437320 kubelet[2570]: I0908 23:54:18.437304 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 8 23:54:18.438602 kubelet[2570]: E0908 23:54:18.438226 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 8 23:54:18.489734 kubelet[2570]: I0908 23:54:18.489695 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:18.490111 kubelet[2570]: I0908 23:54:18.489710 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:18.490111 kubelet[2570]: I0908 23:54:18.489780 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:54:18.539373 kubelet[2570]: I0908 23:54:18.539343 2570 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:54:18.555094 kubelet[2570]: I0908 23:54:18.555049 2570 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 8 23:54:18.555216 kubelet[2570]: I0908 23:54:18.555144 2570 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 8 23:54:18.581548 kubelet[2570]: I0908 23:54:18.581511 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:18.581548 kubelet[2570]: I0908 23:54:18.581552 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99ae6f79c90438bbc41887a70efa85a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"99ae6f79c90438bbc41887a70efa85a2\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:18.581548 kubelet[2570]: I0908 23:54:18.581584 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99ae6f79c90438bbc41887a70efa85a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"99ae6f79c90438bbc41887a70efa85a2\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:18.581548 kubelet[2570]: I0908 23:54:18.581603 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:18.581548 kubelet[2570]: I0908 23:54:18.581629 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:18.581881 kubelet[2570]: I0908 23:54:18.581660 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:18.581881 kubelet[2570]: I0908 23:54:18.581674 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:54:18.581881 kubelet[2570]: I0908 23:54:18.581690 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 8 23:54:18.581881 kubelet[2570]: I0908 23:54:18.581703 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99ae6f79c90438bbc41887a70efa85a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"99ae6f79c90438bbc41887a70efa85a2\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:18.876810 sudo[2603]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 8 23:54:18.877080 sudo[2603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 8 23:54:19.307644 sudo[2603]: pam_unix(sudo:session): session closed for user root
Sep 8 23:54:19.369338 kubelet[2570]: I0908 23:54:19.369300 2570 apiserver.go:52] "Watching apiserver"
Sep 8 23:54:19.380831 kubelet[2570]: I0908 23:54:19.380773 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 8 23:54:19.415676 kubelet[2570]: I0908 23:54:19.415430 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:19.424027 kubelet[2570]: E0908 23:54:19.423952 2570 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:54:19.435670 kubelet[2570]: I0908 23:54:19.434511 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4344761560000001 podStartE2EDuration="1.434476156s" podCreationTimestamp="2025-09-08 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:19.434294534 +0000 UTC m=+1.124012974" watchObservedRunningTime="2025-09-08 23:54:19.434476156 +0000 UTC m=+1.124194636"
Sep 8 23:54:19.442223 kubelet[2570]: I0908 23:54:19.441845 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.441827387 podStartE2EDuration="1.441827387s" podCreationTimestamp="2025-09-08 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:19.44158737 +0000 UTC m=+1.131305850" watchObservedRunningTime="2025-09-08 23:54:19.441827387 +0000 UTC m=+1.131545867"
Sep 8 23:54:19.449000 kubelet[2570]: I0908 23:54:19.448932 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.448917417 podStartE2EDuration="1.448917417s" podCreationTimestamp="2025-09-08 23:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:19.448914259 +0000 UTC m=+1.138632739" watchObservedRunningTime="2025-09-08 23:54:19.448917417 +0000 UTC m=+1.138635897"
Sep 8 23:54:20.803135 sudo[1668]: pam_unix(sudo:session): session closed for user root
Sep 8 23:54:20.804517 sshd[1667]: Connection closed by 10.0.0.1 port 55784
Sep 8 23:54:20.805050 sshd-session[1664]: pam_unix(sshd:session): session closed for user core
Sep 8 23:54:20.808460 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit.
Sep 8 23:54:20.808686 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:55784.service: Deactivated successfully.
Sep 8 23:54:20.810395 systemd[1]: session-7.scope: Deactivated successfully.
Sep 8 23:54:20.810668 systemd[1]: session-7.scope: Consumed 6.748s CPU time, 259.1M memory peak.
Sep 8 23:54:20.811683 systemd-logind[1468]: Removed session 7.
Sep 8 23:54:24.809137 kubelet[2570]: I0908 23:54:24.809080 2570 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 8 23:54:24.809474 containerd[1483]: time="2025-09-08T23:54:24.809400190Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 8 23:54:24.809694 kubelet[2570]: I0908 23:54:24.809597 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 8 23:54:25.419279 systemd[1]: Created slice kubepods-besteffort-pod6d5d3d7f_0bf6_4db2_b2e6_33e1f897b0fa.slice - libcontainer container kubepods-besteffort-pod6d5d3d7f_0bf6_4db2_b2e6_33e1f897b0fa.slice.
Sep 8 23:54:25.429265 systemd[1]: Created slice kubepods-burstable-pod5d300b42_6e41_4874_841a_8033a2de6915.slice - libcontainer container kubepods-burstable-pod5d300b42_6e41_4874_841a_8033a2de6915.slice.
Sep 8 23:54:25.430502 kubelet[2570]: I0908 23:54:25.430295 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-run\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430502 kubelet[2570]: I0908 23:54:25.430326 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-bpf-maps\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430502 kubelet[2570]: I0908 23:54:25.430341 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-hostproc\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430502 kubelet[2570]: I0908 23:54:25.430356 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d300b42-6e41-4874-841a-8033a2de6915-clustermesh-secrets\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430502 kubelet[2570]: I0908 23:54:25.430371 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-net\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430502 kubelet[2570]: I0908 23:54:25.430388 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa-lib-modules\") pod \"kube-proxy-29tf7\" (UID: \"6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa\") " pod="kube-system/kube-proxy-29tf7"
Sep 8 23:54:25.430797 kubelet[2570]: I0908 23:54:25.430401 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-hubble-tls\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430797 kubelet[2570]: I0908 23:54:25.430416 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tnl\" (UniqueName: \"kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-kube-api-access-78tnl\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430797 kubelet[2570]: I0908 23:54:25.430430 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d300b42-6e41-4874-841a-8033a2de6915-cilium-config-path\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430797 kubelet[2570]: I0908 23:54:25.430445 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cni-path\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.430797 kubelet[2570]: I0908 23:54:25.430459 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa-xtables-lock\") pod \"kube-proxy-29tf7\" (UID: \"6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa\") " pod="kube-system/kube-proxy-29tf7"
Sep 8 23:54:25.430797 kubelet[2570]: I0908 23:54:25.430472 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-lib-modules\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.431026 kubelet[2570]: I0908 23:54:25.430487 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-cgroup\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.431026 kubelet[2570]: I0908 23:54:25.430502 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-etc-cni-netd\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.431026 kubelet[2570]: I0908 23:54:25.430516 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa-kube-proxy\") pod \"kube-proxy-29tf7\" (UID: \"6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa\") " pod="kube-system/kube-proxy-29tf7"
Sep 8 23:54:25.431026 kubelet[2570]: I0908 23:54:25.430530 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-kernel\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.431026 kubelet[2570]: I0908 23:54:25.430545 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwngf\" (UniqueName: \"kubernetes.io/projected/6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa-kube-api-access-cwngf\") pod \"kube-proxy-29tf7\" (UID: \"6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa\") " pod="kube-system/kube-proxy-29tf7"
Sep 8 23:54:25.431186 kubelet[2570]: I0908 23:54:25.430559 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-xtables-lock\") pod \"cilium-6pc9t\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") " pod="kube-system/cilium-6pc9t"
Sep 8 23:54:25.728115 containerd[1483]: time="2025-09-08T23:54:25.727938873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29tf7,Uid:6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:25.733676 containerd[1483]: time="2025-09-08T23:54:25.733638201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6pc9t,Uid:5d300b42-6e41-4874-841a-8033a2de6915,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:25.761706 containerd[1483]: time="2025-09-08T23:54:25.761465269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:25.761706 containerd[1483]: time="2025-09-08T23:54:25.761530316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:25.761706 containerd[1483]: time="2025-09-08T23:54:25.761541950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:25.761910 containerd[1483]: time="2025-09-08T23:54:25.761685595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:25.764600 containerd[1483]: time="2025-09-08T23:54:25.762893050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:25.764600 containerd[1483]: time="2025-09-08T23:54:25.762942704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:25.764600 containerd[1483]: time="2025-09-08T23:54:25.762953139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:25.764600 containerd[1483]: time="2025-09-08T23:54:25.763025461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:25.785742 systemd[1]: Started cri-containerd-5d6531ed492e5bccd80e329506ebb45891748ac09b02cd3dcf1888d4d24ef089.scope - libcontainer container 5d6531ed492e5bccd80e329506ebb45891748ac09b02cd3dcf1888d4d24ef089.
Sep 8 23:54:25.787486 systemd[1]: Started cri-containerd-62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a.scope - libcontainer container 62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a.
Sep 8 23:54:25.813512 containerd[1483]: time="2025-09-08T23:54:25.813446548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29tf7,Uid:6d5d3d7f-0bf6-4db2-b2e6-33e1f897b0fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d6531ed492e5bccd80e329506ebb45891748ac09b02cd3dcf1888d4d24ef089\""
Sep 8 23:54:25.814064 containerd[1483]: time="2025-09-08T23:54:25.813994424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6pc9t,Uid:5d300b42-6e41-4874-841a-8033a2de6915,Namespace:kube-system,Attempt:0,} returns sandbox id \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\""
Sep 8 23:54:25.817880 containerd[1483]: time="2025-09-08T23:54:25.817619107Z" level=info msg="CreateContainer within sandbox \"5d6531ed492e5bccd80e329506ebb45891748ac09b02cd3dcf1888d4d24ef089\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 8 23:54:25.820100 containerd[1483]: time="2025-09-08T23:54:25.820014666Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 8 23:54:25.833392 containerd[1483]: time="2025-09-08T23:54:25.833354477Z" level=info msg="CreateContainer within sandbox \"5d6531ed492e5bccd80e329506ebb45891748ac09b02cd3dcf1888d4d24ef089\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a3ac7316a395a893201c290403502347c5716719b3607ba6d615126654c8abe\""
Sep 8 23:54:25.834184 containerd[1483]: time="2025-09-08T23:54:25.834148146Z" level=info msg="StartContainer for \"6a3ac7316a395a893201c290403502347c5716719b3607ba6d615126654c8abe\""
Sep 8 23:54:25.878757 systemd[1]: Started cri-containerd-6a3ac7316a395a893201c290403502347c5716719b3607ba6d615126654c8abe.scope - libcontainer container 6a3ac7316a395a893201c290403502347c5716719b3607ba6d615126654c8abe.
Sep 8 23:54:25.883985 systemd[1]: Created slice kubepods-besteffort-podeee7c45c_8983_40d3_a9ec_a86028b8e647.slice - libcontainer container kubepods-besteffort-podeee7c45c_8983_40d3_a9ec_a86028b8e647.slice.
Sep 8 23:54:25.908089 containerd[1483]: time="2025-09-08T23:54:25.907347556Z" level=info msg="StartContainer for \"6a3ac7316a395a893201c290403502347c5716719b3607ba6d615126654c8abe\" returns successfully"
Sep 8 23:54:25.932710 kubelet[2570]: I0908 23:54:25.932559 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eee7c45c-8983-40d3-a9ec-a86028b8e647-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ff2sc\" (UID: \"eee7c45c-8983-40d3-a9ec-a86028b8e647\") " pod="kube-system/cilium-operator-6c4d7847fc-ff2sc"
Sep 8 23:54:25.932710 kubelet[2570]: I0908 23:54:25.932626 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr94g\" (UniqueName: \"kubernetes.io/projected/eee7c45c-8983-40d3-a9ec-a86028b8e647-kube-api-access-hr94g\") pod \"cilium-operator-6c4d7847fc-ff2sc\" (UID: \"eee7c45c-8983-40d3-a9ec-a86028b8e647\") " pod="kube-system/cilium-operator-6c4d7847fc-ff2sc"
Sep 8 23:54:26.187689 containerd[1483]: time="2025-09-08T23:54:26.187156923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ff2sc,Uid:eee7c45c-8983-40d3-a9ec-a86028b8e647,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:26.215699 containerd[1483]: time="2025-09-08T23:54:26.215015357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:26.215699 containerd[1483]: time="2025-09-08T23:54:26.215387057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:26.215699 containerd[1483]: time="2025-09-08T23:54:26.215399131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:26.215699 containerd[1483]: time="2025-09-08T23:54:26.215480571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:26.232714 systemd[1]: Started cri-containerd-46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c.scope - libcontainer container 46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c.
Sep 8 23:54:26.265938 containerd[1483]: time="2025-09-08T23:54:26.265901571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ff2sc,Uid:eee7c45c-8983-40d3-a9ec-a86028b8e647,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c\""
Sep 8 23:54:26.634778 kubelet[2570]: I0908 23:54:26.634642 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29tf7" podStartSLOduration=1.6346139530000001 podStartE2EDuration="1.634613953s" podCreationTimestamp="2025-09-08 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:26.473270608 +0000 UTC m=+8.162989168" watchObservedRunningTime="2025-09-08 23:54:26.634613953 +0000 UTC m=+8.324332433"
Sep 8 23:54:35.077259 update_engine[1469]: I20250908 23:54:35.077172 1469 update_attempter.cc:509] Updating boot flags...
Sep 8 23:54:35.114611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2948)
Sep 8 23:54:35.163025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2950)
Sep 8 23:54:36.106920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659210587.mount: Deactivated successfully.
Sep 8 23:54:39.049556 containerd[1483]: time="2025-09-08T23:54:39.049495617Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:54:39.050009 containerd[1483]: time="2025-09-08T23:54:39.049967318Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 8 23:54:39.050901 containerd[1483]: time="2025-09-08T23:54:39.050864930Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:54:39.052547 containerd[1483]: time="2025-09-08T23:54:39.052506665Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.232397925s"
Sep 8 23:54:39.052615 containerd[1483]: time="2025-09-08T23:54:39.052546617Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 8 23:54:39.056026 containerd[1483]: time="2025-09-08T23:54:39.055983096Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 8 23:54:39.057421 containerd[1483]: time="2025-09-08T23:54:39.057298580Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 8 23:54:39.088451 containerd[1483]: time="2025-09-08T23:54:39.088394417Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\""
Sep 8 23:54:39.089861 containerd[1483]: time="2025-09-08T23:54:39.088983774Z" level=info msg="StartContainer for \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\""
Sep 8 23:54:39.114792 systemd[1]: Started cri-containerd-34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84.scope - libcontainer container 34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84.
Sep 8 23:54:39.159515 systemd[1]: cri-containerd-34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84.scope: Deactivated successfully.
Sep 8 23:54:39.180177 containerd[1483]: time="2025-09-08T23:54:39.180120217Z" level=info msg="StartContainer for \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\" returns successfully"
Sep 8 23:54:39.262169 containerd[1483]: time="2025-09-08T23:54:39.257655114Z" level=info msg="shim disconnected" id=34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84 namespace=k8s.io
Sep 8 23:54:39.262169 containerd[1483]: time="2025-09-08T23:54:39.262144212Z" level=warning msg="cleaning up after shim disconnected" id=34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84 namespace=k8s.io
Sep 8 23:54:39.262169 containerd[1483]: time="2025-09-08T23:54:39.262161009Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:54:39.486776 containerd[1483]: time="2025-09-08T23:54:39.486731024Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 8 23:54:39.511593 containerd[1483]: time="2025-09-08T23:54:39.511517185Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\""
Sep 8 23:54:39.512795 containerd[1483]: time="2025-09-08T23:54:39.512759484Z" level=info msg="StartContainer for \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\""
Sep 8 23:54:39.549794 systemd[1]: Started cri-containerd-c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce.scope - libcontainer container c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce.
Sep 8 23:54:39.575137 containerd[1483]: time="2025-09-08T23:54:39.574994510Z" level=info msg="StartContainer for \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\" returns successfully"
Sep 8 23:54:39.585659 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:54:39.585895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:54:39.586488 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:54:39.596042 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:54:39.596247 systemd[1]: cri-containerd-c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce.scope: Deactivated successfully.
Sep 8 23:54:39.611739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:54:39.617799 containerd[1483]: time="2025-09-08T23:54:39.617546184Z" level=info msg="shim disconnected" id=c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce namespace=k8s.io
Sep 8 23:54:39.617799 containerd[1483]: time="2025-09-08T23:54:39.617625088Z" level=warning msg="cleaning up after shim disconnected" id=c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce namespace=k8s.io
Sep 8 23:54:39.617799 containerd[1483]: time="2025-09-08T23:54:39.617633566Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:54:40.084703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84-rootfs.mount: Deactivated successfully.
Sep 8 23:54:40.224091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166224110.mount: Deactivated successfully.
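Each cilium init container above follows the same recorded arc: CreateContainer, StartContainer, the systemd scope deactivating, then the shim-disconnected cleanup trio. A hedged sketch, Python 3 stdlib only, that groups the entries of a dump like this one (journal.txt is a hypothetical file name) by 64-hex container id, so a single lifecycle such as mount-cgroup's 34fface6... can be read in sequence:

    import re
    from collections import defaultdict

    CONTAINER_ID = re.compile(r"\b[0-9a-f]{64}\b")

    by_container = defaultdict(list)
    with open("journal.txt", encoding="utf-8") as f:  # hypothetical dump of this console log
        for entry in f:
            for cid in set(CONTAINER_ID.findall(entry)):
                by_container[cid].append(entry.rstrip("\n"))

    # Print every entry that mentions the mount-cgroup container seen above.
    for entry in by_container.get("34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84", []):
        print(entry)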
Sep 8 23:54:40.488750 containerd[1483]: time="2025-09-08T23:54:40.488707559Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 8 23:54:40.508194 containerd[1483]: time="2025-09-08T23:54:40.508136618Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\""
Sep 8 23:54:40.508810 containerd[1483]: time="2025-09-08T23:54:40.508779532Z" level=info msg="StartContainer for \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\""
Sep 8 23:54:40.540811 systemd[1]: Started cri-containerd-fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8.scope - libcontainer container fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8.
Sep 8 23:54:40.570650 containerd[1483]: time="2025-09-08T23:54:40.570597456Z" level=info msg="StartContainer for \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\" returns successfully"
Sep 8 23:54:40.572020 systemd[1]: cri-containerd-fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8.scope: Deactivated successfully.
Sep 8 23:54:40.624854 containerd[1483]: time="2025-09-08T23:54:40.624785760Z" level=info msg="shim disconnected" id=fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8 namespace=k8s.io
Sep 8 23:54:40.624854 containerd[1483]: time="2025-09-08T23:54:40.624849747Z" level=warning msg="cleaning up after shim disconnected" id=fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8 namespace=k8s.io
Sep 8 23:54:40.624854 containerd[1483]: time="2025-09-08T23:54:40.624858665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:54:40.886660 containerd[1483]: time="2025-09-08T23:54:40.886615512Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:54:40.888113 containerd[1483]: time="2025-09-08T23:54:40.888019076Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 8 23:54:40.888969 containerd[1483]: time="2025-09-08T23:54:40.888932097Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:54:40.890605 containerd[1483]: time="2025-09-08T23:54:40.890559257Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.83453389s"
Sep 8 23:54:40.890605 containerd[1483]: time="2025-09-08T23:54:40.890606168Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 8 23:54:40.893167 containerd[1483]: time="2025-09-08T23:54:40.893131511Z" level=info msg="CreateContainer within sandbox \"46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 8 23:54:40.905215 containerd[1483]: time="2025-09-08T23:54:40.905165745Z" level=info msg="CreateContainer within sandbox \"46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\""
Sep 8 23:54:40.906595 containerd[1483]: time="2025-09-08T23:54:40.906136274Z" level=info msg="StartContainer for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\""
Sep 8 23:54:40.931775 systemd[1]: Started cri-containerd-9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df.scope - libcontainer container 9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df.
Sep 8 23:54:40.954020 containerd[1483]: time="2025-09-08T23:54:40.953979666Z" level=info msg="StartContainer for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" returns successfully"
Sep 8 23:54:41.499468 containerd[1483]: time="2025-09-08T23:54:41.499415701Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 8 23:54:41.518155 kubelet[2570]: I0908 23:54:41.515120 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ff2sc" podStartSLOduration=1.8908311819999999 podStartE2EDuration="16.515105528s" podCreationTimestamp="2025-09-08 23:54:25 +0000 UTC" firstStartedPulling="2025-09-08 23:54:26.267094752 +0000 UTC m=+7.956813232" lastFinishedPulling="2025-09-08 23:54:40.891369098 +0000 UTC m=+22.581087578" observedRunningTime="2025-09-08 23:54:41.51477219 +0000 UTC m=+23.204490670" watchObservedRunningTime="2025-09-08 23:54:41.515105528 +0000 UTC m=+23.204824008"
Sep 8 23:54:41.516771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846333280.mount: Deactivated successfully.
Sep 8 23:54:41.522957 containerd[1483]: time="2025-09-08T23:54:41.522906570Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\""
Sep 8 23:54:41.525641 containerd[1483]: time="2025-09-08T23:54:41.523825241Z" level=info msg="StartContainer for \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\""
Sep 8 23:54:41.572140 systemd[1]: Started cri-containerd-252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec.scope - libcontainer container 252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec.
Sep 8 23:54:41.595597 systemd[1]: cri-containerd-252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec.scope: Deactivated successfully.
Sep 8 23:54:41.598674 containerd[1483]: time="2025-09-08T23:54:41.598465041Z" level=info msg="StartContainer for \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\" returns successfully"
Sep 8 23:54:41.695845 containerd[1483]: time="2025-09-08T23:54:41.695776261Z" level=info msg="shim disconnected" id=252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec namespace=k8s.io
Sep 8 23:54:41.695845 containerd[1483]: time="2025-09-08T23:54:41.695834171Z" level=warning msg="cleaning up after shim disconnected" id=252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec namespace=k8s.io
Sep 8 23:54:41.695845 containerd[1483]: time="2025-09-08T23:54:41.695845928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:54:42.084156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec-rootfs.mount: Deactivated successfully.
Sep 8 23:54:42.503600 containerd[1483]: time="2025-09-08T23:54:42.503543579Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 8 23:54:42.516998 containerd[1483]: time="2025-09-08T23:54:42.516959701Z" level=info msg="CreateContainer within sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\""
Sep 8 23:54:42.517599 containerd[1483]: time="2025-09-08T23:54:42.517559477Z" level=info msg="StartContainer for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\""
Sep 8 23:54:42.544737 systemd[1]: Started cri-containerd-74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06.scope - libcontainer container 74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06.
Sep 8 23:54:42.570392 containerd[1483]: time="2025-09-08T23:54:42.570349153Z" level=info msg="StartContainer for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" returns successfully"
Sep 8 23:54:42.701592 kubelet[2570]: I0908 23:54:42.700371 2570 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 8 23:54:42.744206 systemd[1]: Created slice kubepods-burstable-poda441fb16_3517_4dae_a77c_5ff7820a89a7.slice - libcontainer container kubepods-burstable-poda441fb16_3517_4dae_a77c_5ff7820a89a7.slice.
Sep 8 23:54:42.751546 systemd[1]: Created slice kubepods-burstable-pod3128c813_92b1_44d5_8fb0_5a4b0a887e4c.slice - libcontainer container kubepods-burstable-pod3128c813_92b1_44d5_8fb0_5a4b0a887e4c.slice.
Sep 8 23:54:42.760905 kubelet[2570]: I0908 23:54:42.760574 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppzt\" (UniqueName: \"kubernetes.io/projected/3128c813-92b1-44d5-8fb0-5a4b0a887e4c-kube-api-access-sppzt\") pod \"coredns-668d6bf9bc-mfqk5\" (UID: \"3128c813-92b1-44d5-8fb0-5a4b0a887e4c\") " pod="kube-system/coredns-668d6bf9bc-mfqk5"
Sep 8 23:54:42.760905 kubelet[2570]: I0908 23:54:42.760619 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a441fb16-3517-4dae-a77c-5ff7820a89a7-config-volume\") pod \"coredns-668d6bf9bc-q9lvq\" (UID: \"a441fb16-3517-4dae-a77c-5ff7820a89a7\") " pod="kube-system/coredns-668d6bf9bc-q9lvq"
Sep 8 23:54:42.760905 kubelet[2570]: I0908 23:54:42.760641 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3128c813-92b1-44d5-8fb0-5a4b0a887e4c-config-volume\") pod \"coredns-668d6bf9bc-mfqk5\" (UID: \"3128c813-92b1-44d5-8fb0-5a4b0a887e4c\") " pod="kube-system/coredns-668d6bf9bc-mfqk5"
Sep 8 23:54:42.760905 kubelet[2570]: I0908 23:54:42.760662 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwgq\" (UniqueName: \"kubernetes.io/projected/a441fb16-3517-4dae-a77c-5ff7820a89a7-kube-api-access-qgwgq\") pod \"coredns-668d6bf9bc-q9lvq\" (UID: \"a441fb16-3517-4dae-a77c-5ff7820a89a7\") " pod="kube-system/coredns-668d6bf9bc-q9lvq"
Sep 8 23:54:43.049585 containerd[1483]: time="2025-09-08T23:54:43.049458838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9lvq,Uid:a441fb16-3517-4dae-a77c-5ff7820a89a7,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:43.054853 containerd[1483]: time="2025-09-08T23:54:43.054558652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mfqk5,Uid:3128c813-92b1-44d5-8fb0-5a4b0a887e4c,Namespace:kube-system,Attempt:0,}"
Sep 8 23:54:43.087090 systemd[1]: run-containerd-runc-k8s.io-74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06-runc.BEZBmt.mount: Deactivated successfully.
Sep 8 23:54:43.558865 kubelet[2570]: I0908 23:54:43.557139 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6pc9t" podStartSLOduration=5.319923371 podStartE2EDuration="18.557121024s" podCreationTimestamp="2025-09-08 23:54:25 +0000 UTC" firstStartedPulling="2025-09-08 23:54:25.818330418 +0000 UTC m=+7.508048898" lastFinishedPulling="2025-09-08 23:54:39.055528071 +0000 UTC m=+20.745246551" observedRunningTime="2025-09-08 23:54:43.556599989 +0000 UTC m=+25.246318469" watchObservedRunningTime="2025-09-08 23:54:43.557121024 +0000 UTC m=+25.246839504"
Sep 8 23:54:44.419267 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:53506.service - OpenSSH per-connection server daemon (10.0.0.1:53506).
Sep 8 23:54:44.466883 sshd[3432]: Accepted publickey for core from 10.0.0.1 port 53506 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:54:44.469070 sshd-session[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:54:44.473793 systemd-logind[1468]: New session 8 of user core.
Sep 8 23:54:44.479732 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 8 23:54:44.612875 sshd[3434]: Connection closed by 10.0.0.1 port 53506
Sep 8 23:54:44.613491 sshd-session[3432]: pam_unix(sshd:session): session closed for user core
Sep 8 23:54:44.619068 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Sep 8 23:54:44.619339 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:53506.service: Deactivated successfully.
Sep 8 23:54:44.620762 systemd-networkd[1412]: cilium_host: Link UP
Sep 8 23:54:44.620983 systemd-networkd[1412]: cilium_net: Link UP
Sep 8 23:54:44.621151 systemd-networkd[1412]: cilium_net: Gained carrier
Sep 8 23:54:44.621271 systemd-networkd[1412]: cilium_host: Gained carrier
Sep 8 23:54:44.621356 systemd-networkd[1412]: cilium_net: Gained IPv6LL
Sep 8 23:54:44.621459 systemd-networkd[1412]: cilium_host: Gained IPv6LL
Sep 8 23:54:44.622502 systemd[1]: session-8.scope: Deactivated successfully.
Sep 8 23:54:44.625361 systemd-logind[1468]: Removed session 8.
Sep 8 23:54:44.705653 systemd-networkd[1412]: cilium_vxlan: Link UP
Sep 8 23:54:44.705659 systemd-networkd[1412]: cilium_vxlan: Gained carrier
Sep 8 23:54:44.985599 kernel: NET: Registered PF_ALG protocol family
Sep 8 23:54:45.606073 systemd-networkd[1412]: lxc_health: Link UP
Sep 8 23:54:45.606305 systemd-networkd[1412]: lxc_health: Gained carrier
Sep 8 23:54:46.106826 systemd-networkd[1412]: cilium_vxlan: Gained IPv6LL
Sep 8 23:54:46.184200 systemd-networkd[1412]: lxc5788209c9241: Link UP
Sep 8 23:54:46.184398 systemd-networkd[1412]: lxccc933196c9c3: Link UP
Sep 8 23:54:46.186575 kernel: eth0: renamed from tmpaa91c
Sep 8 23:54:46.193391 kernel: eth0: renamed from tmp96d72
Sep 8 23:54:46.207609 systemd-networkd[1412]: lxc5788209c9241: Gained carrier
Sep 8 23:54:46.210201 systemd-networkd[1412]: lxccc933196c9c3: Gained carrier
Sep 8 23:54:47.258771 systemd-networkd[1412]: lxc_health: Gained IPv6LL
Sep 8 23:54:48.026764 systemd-networkd[1412]: lxccc933196c9c3: Gained IPv6LL
Sep 8 23:54:48.154775 systemd-networkd[1412]: lxc5788209c9241: Gained IPv6LL
Sep 8 23:54:49.627609 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:53510.service - OpenSSH per-connection server daemon (10.0.0.1:53510).
Sep 8 23:54:49.686218 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 53510 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:54:49.688416 sshd-session[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:54:49.692695 systemd-logind[1468]: New session 9 of user core.
Sep 8 23:54:49.705890 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 8 23:54:49.865848 sshd[3832]: Connection closed by 10.0.0.1 port 53510
Sep 8 23:54:49.866254 sshd-session[3830]: pam_unix(sshd:session): session closed for user core
Sep 8 23:54:49.870974 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit.
Sep 8 23:54:49.871686 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:53510.service: Deactivated successfully.
Sep 8 23:54:49.874227 systemd[1]: session-9.scope: Deactivated successfully.
Sep 8 23:54:49.876586 systemd-logind[1468]: Removed session 9.
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.990160767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.991019233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.991084865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.990941641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.990989676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.991000635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:49.991142 containerd[1483]: time="2025-09-08T23:54:49.991070707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:49.991959 containerd[1483]: time="2025-09-08T23:54:49.991777269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:54:50.019750 systemd[1]: Started cri-containerd-96d72283a330f7614aa57c9c5a410f91b0be6c2db810a142d285d0a1d72e3f6c.scope - libcontainer container 96d72283a330f7614aa57c9c5a410f91b0be6c2db810a142d285d0a1d72e3f6c.
Sep 8 23:54:50.020983 systemd[1]: Started cri-containerd-aa91c38c2c332e612b4aa8cc7def39c72b96794d829f5e83ebd3607fbba002ec.scope - libcontainer container aa91c38c2c332e612b4aa8cc7def39c72b96794d829f5e83ebd3607fbba002ec.
Sep 8 23:54:50.032640 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 8 23:54:50.033974 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 8 23:54:50.053081 containerd[1483]: time="2025-09-08T23:54:50.053041171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9lvq,Uid:a441fb16-3517-4dae-a77c-5ff7820a89a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa91c38c2c332e612b4aa8cc7def39c72b96794d829f5e83ebd3607fbba002ec\""
Sep 8 23:54:50.055917 containerd[1483]: time="2025-09-08T23:54:50.055864640Z" level=info msg="CreateContainer within sandbox \"aa91c38c2c332e612b4aa8cc7def39c72b96794d829f5e83ebd3607fbba002ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 8 23:54:50.059140 containerd[1483]: time="2025-09-08T23:54:50.059103426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mfqk5,Uid:3128c813-92b1-44d5-8fb0-5a4b0a887e4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d72283a330f7614aa57c9c5a410f91b0be6c2db810a142d285d0a1d72e3f6c\""
Sep 8 23:54:50.067070 containerd[1483]: time="2025-09-08T23:54:50.067010650Z" level=info msg="CreateContainer within sandbox \"96d72283a330f7614aa57c9c5a410f91b0be6c2db810a142d285d0a1d72e3f6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 8 23:54:50.072360 containerd[1483]: time="2025-09-08T23:54:50.072309464Z" level=info msg="CreateContainer within sandbox \"aa91c38c2c332e612b4aa8cc7def39c72b96794d829f5e83ebd3607fbba002ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aacf8c9a2c7b89b77aed49d5dd05daae381ed2b7b20287816f4bf8fff632087a\""
Sep 8 23:54:50.074752 containerd[1483]: time="2025-09-08T23:54:50.073055067Z" level=info msg="StartContainer for \"aacf8c9a2c7b89b77aed49d5dd05daae381ed2b7b20287816f4bf8fff632087a\""
Sep 8 23:54:50.084635 containerd[1483]: time="2025-09-08T23:54:50.084579439Z" level=info msg="CreateContainer within sandbox \"96d72283a330f7614aa57c9c5a410f91b0be6c2db810a142d285d0a1d72e3f6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29cd9fc94a89e7b8e5859f35bd9ee0f8ebb1a6b05c1fcc21b54cc94e05bbbfcf\""
Sep 8 23:54:50.086342 containerd[1483]: time="2025-09-08T23:54:50.086317379Z" level=info msg="StartContainer for \"29cd9fc94a89e7b8e5859f35bd9ee0f8ebb1a6b05c1fcc21b54cc94e05bbbfcf\""
Sep 8 23:54:50.101737 systemd[1]: Started cri-containerd-aacf8c9a2c7b89b77aed49d5dd05daae381ed2b7b20287816f4bf8fff632087a.scope - libcontainer container aacf8c9a2c7b89b77aed49d5dd05daae381ed2b7b20287816f4bf8fff632087a.
Sep 8 23:54:50.115768 systemd[1]: Started cri-containerd-29cd9fc94a89e7b8e5859f35bd9ee0f8ebb1a6b05c1fcc21b54cc94e05bbbfcf.scope - libcontainer container 29cd9fc94a89e7b8e5859f35bd9ee0f8ebb1a6b05c1fcc21b54cc94e05bbbfcf.
Sep 8 23:54:50.140006 containerd[1483]: time="2025-09-08T23:54:50.139961928Z" level=info msg="StartContainer for \"aacf8c9a2c7b89b77aed49d5dd05daae381ed2b7b20287816f4bf8fff632087a\" returns successfully"
Sep 8 23:54:50.144796 containerd[1483]: time="2025-09-08T23:54:50.144560373Z" level=info msg="StartContainer for \"29cd9fc94a89e7b8e5859f35bd9ee0f8ebb1a6b05c1fcc21b54cc94e05bbbfcf\" returns successfully"
Sep 8 23:54:50.548923 kubelet[2570]: I0908 23:54:50.547740 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q9lvq" podStartSLOduration=25.547723278 podStartE2EDuration="25.547723278s" podCreationTimestamp="2025-09-08 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:50.547684042 +0000 UTC m=+32.237402522" watchObservedRunningTime="2025-09-08 23:54:50.547723278 +0000 UTC m=+32.237441758"
Sep 8 23:54:50.548923 kubelet[2570]: I0908 23:54:50.547833 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mfqk5" podStartSLOduration=25.547828827 podStartE2EDuration="25.547828827s" podCreationTimestamp="2025-09-08 23:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:50.533485346 +0000 UTC m=+32.223203826" watchObservedRunningTime="2025-09-08 23:54:50.547828827 +0000 UTC m=+32.237547587"
Sep 8 23:54:54.878879 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:33586.service - OpenSSH per-connection server daemon (10.0.0.1:33586).
Sep 8 23:54:54.931748 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 33586 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:54:54.933463 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:54:54.938176 systemd-logind[1468]: New session 10 of user core.
Sep 8 23:54:54.954763 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 8 23:54:55.088374 sshd[4018]: Connection closed by 10.0.0.1 port 33586
Sep 8 23:54:55.089173 sshd-session[4016]: pam_unix(sshd:session): session closed for user core
Sep 8 23:54:55.094222 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:33586.service: Deactivated successfully.
Sep 8 23:54:55.097146 systemd[1]: session-10.scope: Deactivated successfully.
Sep 8 23:54:55.098099 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit.
Sep 8 23:54:55.099129 systemd-logind[1468]: Removed session 10.
Sep 8 23:54:55.637596 kubelet[2570]: I0908 23:54:55.637505 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 8 23:55:00.109512 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:52810.service - OpenSSH per-connection server daemon (10.0.0.1:52810).
Sep 8 23:55:00.161808 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 52810 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:00.163208 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:00.168166 systemd-logind[1468]: New session 11 of user core.
Sep 8 23:55:00.177791 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 8 23:55:00.298379 sshd[4041]: Connection closed by 10.0.0.1 port 52810
Sep 8 23:55:00.298957 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:00.311018 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:52810.service: Deactivated successfully.
Sep 8 23:55:00.313159 systemd[1]: session-11.scope: Deactivated successfully.
Sep 8 23:55:00.314123 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit.
Sep 8 23:55:00.324181 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:52814.service - OpenSSH per-connection server daemon (10.0.0.1:52814).
Sep 8 23:55:00.325552 systemd-logind[1468]: Removed session 11.
Sep 8 23:55:00.369190 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 52814 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:00.370084 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:00.376117 systemd-logind[1468]: New session 12 of user core.
Sep 8 23:55:00.390847 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 8 23:55:00.540532 sshd[4058]: Connection closed by 10.0.0.1 port 52814
Sep 8 23:55:00.541265 sshd-session[4055]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:00.551024 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:52814.service: Deactivated successfully.
Sep 8 23:55:00.553835 systemd[1]: session-12.scope: Deactivated successfully.
Sep 8 23:55:00.556466 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit.
Sep 8 23:55:00.562941 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:52816.service - OpenSSH per-connection server daemon (10.0.0.1:52816).
Sep 8 23:55:00.564805 systemd-logind[1468]: Removed session 12.
Sep 8 23:55:00.604204 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 52816 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:00.605727 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:00.609503 systemd-logind[1468]: New session 13 of user core.
Sep 8 23:55:00.623725 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 8 23:55:00.740269 sshd[4072]: Connection closed by 10.0.0.1 port 52816
Sep 8 23:55:00.740756 sshd-session[4069]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:00.744019 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:52816.service: Deactivated successfully.
Sep 8 23:55:00.746820 systemd[1]: session-13.scope: Deactivated successfully.
Sep 8 23:55:00.747685 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit.
Sep 8 23:55:00.748927 systemd-logind[1468]: Removed session 13.
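The sshd/systemd-logind entries come in matched pairs: "New session N of user core." and, later, "Removed session N." A sketch pairing them to measure how long each session stayed open, assuming entries of exactly the shapes shown above (Python 3 stdlib, naive same-day timestamps):

    import re
    from datetime import datetime

    LINE = re.compile(r"^Sep 8 (?P<ts>\d{2}:\d{2}:\d{2}\.\d{6}) .*?"
                      r"(?P<event>New session|Removed session) (?P<sid>\d+)")

    opened = {}
    with open("journal.txt", encoding="utf-8") as f:  # hypothetical dump of this console log
        for line in f:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%H:%M:%S.%f")
            if m["event"] == "New session":
                opened[m["sid"]] = ts
            elif m["sid"] in opened:
                print(f"session {m['sid']}: open {(ts - opened.pop(m['sid'])).total_seconds():.3f}s")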
Sep 8 23:55:05.752409 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:52828.service - OpenSSH per-connection server daemon (10.0.0.1:52828).
Sep 8 23:55:05.793456 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 52828 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:05.794709 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:05.799416 systemd-logind[1468]: New session 14 of user core.
Sep 8 23:55:05.808746 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 8 23:55:05.915717 sshd[4088]: Connection closed by 10.0.0.1 port 52828
Sep 8 23:55:05.916041 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:05.919356 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:52828.service: Deactivated successfully.
Sep 8 23:55:05.921134 systemd[1]: session-14.scope: Deactivated successfully.
Sep 8 23:55:05.921775 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
Sep 8 23:55:05.922525 systemd-logind[1468]: Removed session 14.
Sep 8 23:55:10.929269 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:35390.service - OpenSSH per-connection server daemon (10.0.0.1:35390).
Sep 8 23:55:10.974699 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 35390 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:10.975059 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:10.980009 systemd-logind[1468]: New session 15 of user core.
Sep 8 23:55:10.993832 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 8 23:55:11.150778 sshd[4103]: Connection closed by 10.0.0.1 port 35390
Sep 8 23:55:11.151379 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:11.168333 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:35390.service: Deactivated successfully.
Sep 8 23:55:11.172144 systemd[1]: session-15.scope: Deactivated successfully.
Sep 8 23:55:11.173423 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
Sep 8 23:55:11.183214 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:35396.service - OpenSSH per-connection server daemon (10.0.0.1:35396).
Sep 8 23:55:11.184497 systemd-logind[1468]: Removed session 15.
Sep 8 23:55:11.228663 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 35396 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:11.230085 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:11.235308 systemd-logind[1468]: New session 16 of user core.
Sep 8 23:55:11.244775 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 8 23:55:11.430594 sshd[4119]: Connection closed by 10.0.0.1 port 35396
Sep 8 23:55:11.432097 sshd-session[4116]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:11.446969 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:35396.service: Deactivated successfully.
Sep 8 23:55:11.450014 systemd[1]: session-16.scope: Deactivated successfully.
Sep 8 23:55:11.450695 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
Sep 8 23:55:11.457039 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:35402.service - OpenSSH per-connection server daemon (10.0.0.1:35402).
Sep 8 23:55:11.458498 systemd-logind[1468]: Removed session 16.
Sep 8 23:55:11.496187 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 35402 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:11.497630 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:11.501766 systemd-logind[1468]: New session 17 of user core.
Sep 8 23:55:11.524790 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 8 23:55:12.122647 sshd[4132]: Connection closed by 10.0.0.1 port 35402
Sep 8 23:55:12.122994 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:12.137477 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:35402.service: Deactivated successfully.
Sep 8 23:55:12.140820 systemd[1]: session-17.scope: Deactivated successfully.
Sep 8 23:55:12.143336 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
Sep 8 23:55:12.155949 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:35412.service - OpenSSH per-connection server daemon (10.0.0.1:35412).
Sep 8 23:55:12.157595 systemd-logind[1468]: Removed session 17.
Sep 8 23:55:12.201113 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 35412 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:12.202806 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:12.206868 systemd-logind[1468]: New session 18 of user core.
Sep 8 23:55:12.216748 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 8 23:55:12.449318 sshd[4154]: Connection closed by 10.0.0.1 port 35412
Sep 8 23:55:12.449772 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:12.462862 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:35412.service: Deactivated successfully.
Sep 8 23:55:12.464521 systemd[1]: session-18.scope: Deactivated successfully.
Sep 8 23:55:12.466462 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
Sep 8 23:55:12.471858 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:35422.service - OpenSSH per-connection server daemon (10.0.0.1:35422).
Sep 8 23:55:12.473664 systemd-logind[1468]: Removed session 18.
Sep 8 23:55:12.514954 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 35422 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:12.516308 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:12.522154 systemd-logind[1468]: New session 19 of user core.
Sep 8 23:55:12.532785 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 8 23:55:12.644978 sshd[4169]: Connection closed by 10.0.0.1 port 35422
Sep 8 23:55:12.645341 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:12.648611 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
Sep 8 23:55:12.648782 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:35422.service: Deactivated successfully.
Sep 8 23:55:12.650455 systemd[1]: session-19.scope: Deactivated successfully.
Sep 8 23:55:12.651786 systemd-logind[1468]: Removed session 19.
Sep 8 23:55:17.679664 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:35430.service - OpenSSH per-connection server daemon (10.0.0.1:35430).
Sep 8 23:55:17.717409 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 35430 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:17.718630 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:17.722910 systemd-logind[1468]: New session 20 of user core.
Sep 8 23:55:17.738758 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 8 23:55:17.853056 sshd[4186]: Connection closed by 10.0.0.1 port 35430
Sep 8 23:55:17.853439 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:17.858067 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:35430.service: Deactivated successfully.
Sep 8 23:55:17.862168 systemd[1]: session-20.scope: Deactivated successfully.
Sep 8 23:55:17.863016 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
Sep 8 23:55:17.863820 systemd-logind[1468]: Removed session 20.
Sep 8 23:55:22.872445 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:44864.service - OpenSSH per-connection server daemon (10.0.0.1:44864).
Sep 8 23:55:22.922280 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 44864 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:22.923520 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:22.928595 systemd-logind[1468]: New session 21 of user core.
Sep 8 23:55:22.937750 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 8 23:55:23.059103 sshd[4203]: Connection closed by 10.0.0.1 port 44864
Sep 8 23:55:23.059473 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:23.063615 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:44864.service: Deactivated successfully.
Sep 8 23:55:23.065416 systemd[1]: session-21.scope: Deactivated successfully.
Sep 8 23:55:23.066129 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
Sep 8 23:55:23.067432 systemd-logind[1468]: Removed session 21.
Sep 8 23:55:28.071498 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:44866.service - OpenSSH per-connection server daemon (10.0.0.1:44866).
Sep 8 23:55:28.117589 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 44866 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:28.118802 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:28.122634 systemd-logind[1468]: New session 22 of user core.
Sep 8 23:55:28.129724 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 8 23:55:28.248300 sshd[4221]: Connection closed by 10.0.0.1 port 44866
Sep 8 23:55:28.248670 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Sep 8 23:55:28.265995 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:44866.service: Deactivated successfully.
Sep 8 23:55:28.267527 systemd[1]: session-22.scope: Deactivated successfully.
Sep 8 23:55:28.268213 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
Sep 8 23:55:28.275835 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:44882.service - OpenSSH per-connection server daemon (10.0.0.1:44882).
Sep 8 23:55:28.279477 systemd-logind[1468]: Removed session 22.
Sep 8 23:55:28.319159 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 44882 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4
Sep 8 23:55:28.320409 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:55:28.324740 systemd-logind[1468]: New session 23 of user core.
Sep 8 23:55:28.331758 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 8 23:55:30.047990 containerd[1483]: time="2025-09-08T23:55:30.047949098Z" level=info msg="StopContainer for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" with timeout 30 (s)"
Sep 8 23:55:30.048586 containerd[1483]: time="2025-09-08T23:55:30.048308735Z" level=info msg="Stop container \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" with signal terminated"
Sep 8 23:55:30.057310 systemd[1]: cri-containerd-9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df.scope: Deactivated successfully.
Sep 8 23:55:30.087263 containerd[1483]: time="2025-09-08T23:55:30.087210310Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:55:30.094107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df-rootfs.mount: Deactivated successfully.
Sep 8 23:55:30.100242 containerd[1483]: time="2025-09-08T23:55:30.100204915Z" level=info msg="StopContainer for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" with timeout 2 (s)"
Sep 8 23:55:30.100510 containerd[1483]: time="2025-09-08T23:55:30.100433033Z" level=info msg="Stop container \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" with signal terminated"
Sep 8 23:55:30.102592 containerd[1483]: time="2025-09-08T23:55:30.102080298Z" level=info msg="shim disconnected" id=9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df namespace=k8s.io
Sep 8 23:55:30.102592 containerd[1483]: time="2025-09-08T23:55:30.102120898Z" level=warning msg="cleaning up after shim disconnected" id=9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df namespace=k8s.io
Sep 8 23:55:30.102592 containerd[1483]: time="2025-09-08T23:55:30.102129378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:55:30.106393 systemd-networkd[1412]: lxc_health: Link DOWN
Sep 8 23:55:30.106398 systemd-networkd[1412]: lxc_health: Lost carrier
Sep 8 23:55:30.120477 systemd[1]: cri-containerd-74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06.scope: Deactivated successfully.
Sep 8 23:55:30.120905 systemd[1]: cri-containerd-74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06.scope: Consumed 6.496s CPU time, 123.7M memory peak, 136K read from disk, 12.9M written to disk.
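Because resource accounting is enabled for these scopes, systemd prints a consumption summary when the cilium-agent container's scope is deactivated. A small sketch that lifts those figures out of such an entry with a regex (the entry text is taken from the line above; the field handling is illustrative):

    import re

    entry = ("cri-containerd-74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06.scope: "
             "Consumed 6.496s CPU time, 123.7M memory peak, 136K read from disk, 12.9M written to disk.")

    m = re.search(r"Consumed (?P<cpu>[\d.]+)s CPU time, (?P<mem>[\d.]+[KMGT]) memory peak", entry)
    if m:
        print(f"cpu={m['cpu']}s peak_mem={m['mem']}")  # cpu=6.496s peak_mem=123.7M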
Sep 8 23:55:30.132801 containerd[1483]: time="2025-09-08T23:55:30.132756266Z" level=info msg="StopContainer for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" returns successfully"
Sep 8 23:55:30.133407 containerd[1483]: time="2025-09-08T23:55:30.133332701Z" level=info msg="StopPodSandbox for \"46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c\""
Sep 8 23:55:30.133407 containerd[1483]: time="2025-09-08T23:55:30.133368180Z" level=info msg="Container to stop \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:55:30.135758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c-shm.mount: Deactivated successfully.
Sep 8 23:55:30.141852 systemd[1]: cri-containerd-46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c.scope: Deactivated successfully.
Sep 8 23:55:30.158847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06-rootfs.mount: Deactivated successfully.
Sep 8 23:55:30.173110 containerd[1483]: time="2025-09-08T23:55:30.173035789Z" level=info msg="shim disconnected" id=74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06 namespace=k8s.io
Sep 8 23:55:30.173110 containerd[1483]: time="2025-09-08T23:55:30.173087748Z" level=warning msg="cleaning up after shim disconnected" id=74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06 namespace=k8s.io
Sep 8 23:55:30.173110 containerd[1483]: time="2025-09-08T23:55:30.173095748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:55:30.173926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c-rootfs.mount: Deactivated successfully.
Sep 8 23:55:30.174464 containerd[1483]: time="2025-09-08T23:55:30.174365617Z" level=info msg="shim disconnected" id=46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c namespace=k8s.io
Sep 8 23:55:30.174692 containerd[1483]: time="2025-09-08T23:55:30.174674694Z" level=warning msg="cleaning up after shim disconnected" id=46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c namespace=k8s.io
Sep 8 23:55:30.174842 containerd[1483]: time="2025-09-08T23:55:30.174825733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:55:30.187263 containerd[1483]: time="2025-09-08T23:55:30.187098744Z" level=info msg="TearDown network for sandbox \"46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c\" successfully"
Sep 8 23:55:30.187263 containerd[1483]: time="2025-09-08T23:55:30.187131264Z" level=info msg="StopPodSandbox for \"46ccd37c650bf374ca491aa4d3f35bd374479298fc1b58beb294fa1441270e3c\" returns successfully"
Sep 8 23:55:30.187408 containerd[1483]: time="2025-09-08T23:55:30.187335942Z" level=info msg="StopContainer for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" returns successfully"
Sep 8 23:55:30.188416 containerd[1483]: time="2025-09-08T23:55:30.188388412Z" level=info msg="StopPodSandbox for \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\""
Sep 8 23:55:30.188486 containerd[1483]: time="2025-09-08T23:55:30.188446492Z" level=info msg="Container to stop \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:55:30.188486 containerd[1483]: time="2025-09-08T23:55:30.188458692Z" level=info msg="Container to stop \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:55:30.188486 containerd[1483]: time="2025-09-08T23:55:30.188467692Z" level=info msg="Container to stop \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:55:30.188486 containerd[1483]: time="2025-09-08T23:55:30.188475692Z" level=info msg="Container to stop \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:55:30.188486 containerd[1483]: time="2025-09-08T23:55:30.188483692Z" level=info msg="Container to stop \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:55:30.193920 systemd[1]: cri-containerd-62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a.scope: Deactivated successfully.
Sep 8 23:55:30.229973 containerd[1483]: time="2025-09-08T23:55:30.229769085Z" level=info msg="shim disconnected" id=62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a namespace=k8s.io
Sep 8 23:55:30.229973 containerd[1483]: time="2025-09-08T23:55:30.229833245Z" level=warning msg="cleaning up after shim disconnected" id=62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a namespace=k8s.io
Sep 8 23:55:30.229973 containerd[1483]: time="2025-09-08T23:55:30.229843245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:55:30.241480 containerd[1483]: time="2025-09-08T23:55:30.241427262Z" level=info msg="TearDown network for sandbox \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" successfully"
Sep 8 23:55:30.241480 containerd[1483]: time="2025-09-08T23:55:30.241464622Z" level=info msg="StopPodSandbox for \"62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a\" returns successfully"
Sep 8 23:55:30.277597 kubelet[2570]: I0908 23:55:30.274774 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr94g\" (UniqueName: \"kubernetes.io/projected/eee7c45c-8983-40d3-a9ec-a86028b8e647-kube-api-access-hr94g\") pod \"eee7c45c-8983-40d3-a9ec-a86028b8e647\" (UID: \"eee7c45c-8983-40d3-a9ec-a86028b8e647\") "
Sep 8 23:55:30.277597 kubelet[2570]: I0908 23:55:30.274837 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eee7c45c-8983-40d3-a9ec-a86028b8e647-cilium-config-path\") pod \"eee7c45c-8983-40d3-a9ec-a86028b8e647\" (UID: \"eee7c45c-8983-40d3-a9ec-a86028b8e647\") "
Sep 8 23:55:30.283116 kubelet[2570]: I0908 23:55:30.283016 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee7c45c-8983-40d3-a9ec-a86028b8e647-kube-api-access-hr94g" (OuterVolumeSpecName: "kube-api-access-hr94g") pod "eee7c45c-8983-40d3-a9ec-a86028b8e647" (UID: "eee7c45c-8983-40d3-a9ec-a86028b8e647"). InnerVolumeSpecName "kube-api-access-hr94g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 8 23:55:30.284787 kubelet[2570]: I0908 23:55:30.284753 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee7c45c-8983-40d3-a9ec-a86028b8e647-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eee7c45c-8983-40d3-a9ec-a86028b8e647" (UID: "eee7c45c-8983-40d3-a9ec-a86028b8e647"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 8 23:55:30.376163 kubelet[2570]: I0908 23:55:30.376041 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-cgroup\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376163 kubelet[2570]: I0908 23:55:30.376084 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-etc-cni-netd\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376163 kubelet[2570]: I0908 23:55:30.376117 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-bpf-maps\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376163 kubelet[2570]: I0908 23:55:30.376135 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-net\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376163 kubelet[2570]: I0908 23:55:30.376152 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-run\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376163 kubelet[2570]: I0908 23:55:30.376166 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-hostproc\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376392 kubelet[2570]: I0908 23:55:30.376190 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-lib-modules\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376392 kubelet[2570]: I0908 23:55:30.376218 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-hubble-tls\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376392 kubelet[2570]: I0908 23:55:30.376235 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78tnl\" (UniqueName: \"kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-kube-api-access-78tnl\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376392 kubelet[2570]: I0908 23:55:30.376272 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cni-path\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376392 kubelet[2570]: I0908 23:55:30.376291 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-xtables-lock\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376392 kubelet[2570]: I0908 23:55:30.376310 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d300b42-6e41-4874-841a-8033a2de6915-clustermesh-secrets\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376518 kubelet[2570]: I0908 23:55:30.376331 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d300b42-6e41-4874-841a-8033a2de6915-cilium-config-path\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376518 kubelet[2570]: I0908 23:55:30.376346 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-kernel\") pod \"5d300b42-6e41-4874-841a-8033a2de6915\" (UID: \"5d300b42-6e41-4874-841a-8033a2de6915\") "
Sep 8 23:55:30.376518 kubelet[2570]: I0908 23:55:30.376380 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eee7c45c-8983-40d3-a9ec-a86028b8e647-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 8 23:55:30.376518 kubelet[2570]: I0908 23:55:30.376390 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hr94g\" (UniqueName: \"kubernetes.io/projected/eee7c45c-8983-40d3-a9ec-a86028b8e647-kube-api-access-hr94g\") on node \"localhost\" DevicePath \"\""
Sep 8 23:55:30.376518 kubelet[2570]: I0908 23:55:30.376442 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.376518 kubelet[2570]: I0908 23:55:30.376477 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377585 kubelet[2570]: I0908 23:55:30.376749 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377585 kubelet[2570]: I0908 23:55:30.376492 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377585 kubelet[2570]: I0908 23:55:30.376797 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377585 kubelet[2570]: I0908 23:55:30.376816 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377585 kubelet[2570]: I0908 23:55:30.376837 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377745 kubelet[2570]: I0908 23:55:30.376853 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.377745 kubelet[2570]: I0908 23:55:30.376869 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 8 23:55:30.379433 kubelet[2570]: I0908 23:55:30.379390 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d300b42-6e41-4874-841a-8033a2de6915-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 8 23:55:30.379499 kubelet[2570]: I0908 23:55:30.379468 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:55:30.379766 kubelet[2570]: I0908 23:55:30.379730 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:30.380331 kubelet[2570]: I0908 23:55:30.380295 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-kube-api-access-78tnl" (OuterVolumeSpecName: "kube-api-access-78tnl") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "kube-api-access-78tnl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:55:30.385282 kubelet[2570]: I0908 23:55:30.385251 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d300b42-6e41-4874-841a-8033a2de6915-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d300b42-6e41-4874-841a-8033a2de6915" (UID: "5d300b42-6e41-4874-841a-8033a2de6915"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:55:30.398583 systemd[1]: Removed slice kubepods-burstable-pod5d300b42_6e41_4874_841a_8033a2de6915.slice - libcontainer container kubepods-burstable-pod5d300b42_6e41_4874_841a_8033a2de6915.slice. Sep 8 23:55:30.398681 systemd[1]: kubepods-burstable-pod5d300b42_6e41_4874_841a_8033a2de6915.slice: Consumed 6.575s CPU time, 124M memory peak, 144K read from disk, 12.9M written to disk. Sep 8 23:55:30.399513 systemd[1]: Removed slice kubepods-besteffort-podeee7c45c_8983_40d3_a9ec_a86028b8e647.slice - libcontainer container kubepods-besteffort-podeee7c45c_8983_40d3_a9ec_a86028b8e647.slice. 
Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477391 2570 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477425 2570 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477434 2570 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d300b42-6e41-4874-841a-8033a2de6915-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477455 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-78tnl\" (UniqueName: \"kubernetes.io/projected/5d300b42-6e41-4874-841a-8033a2de6915-kube-api-access-78tnl\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477465 2570 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477473 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d300b42-6e41-4874-841a-8033a2de6915-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477481 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477511 kubelet[2570]: I0908 23:55:30.477491 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477816 kubelet[2570]: I0908 23:55:30.477500 2570 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477816 kubelet[2570]: I0908 23:55:30.477507 2570 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477816 kubelet[2570]: I0908 23:55:30.477515 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477816 kubelet[2570]: I0908 23:55:30.477523 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477816 kubelet[2570]: I0908 23:55:30.477532 2570 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.477816 kubelet[2570]: I0908 23:55:30.477539 
2570 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d300b42-6e41-4874-841a-8033a2de6915-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:55:30.676100 kubelet[2570]: I0908 23:55:30.675999 2570 scope.go:117] "RemoveContainer" containerID="9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df" Sep 8 23:55:30.677800 containerd[1483]: time="2025-09-08T23:55:30.677642913Z" level=info msg="RemoveContainer for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\"" Sep 8 23:55:30.683908 containerd[1483]: time="2025-09-08T23:55:30.683874978Z" level=info msg="RemoveContainer for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" returns successfully" Sep 8 23:55:30.684585 kubelet[2570]: I0908 23:55:30.684266 2570 scope.go:117] "RemoveContainer" containerID="9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df" Sep 8 23:55:30.684649 containerd[1483]: time="2025-09-08T23:55:30.684466372Z" level=error msg="ContainerStatus for \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\": not found" Sep 8 23:55:30.692791 kubelet[2570]: E0908 23:55:30.692028 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\": not found" containerID="9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df" Sep 8 23:55:30.697895 kubelet[2570]: I0908 23:55:30.696341 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df"} err="failed to get container status \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f94083a5d7fe3c192634638868b27309597eacd52d7af4667c1957ce74051df\": not found" Sep 8 23:55:30.698505 kubelet[2570]: I0908 23:55:30.698322 2570 scope.go:117] "RemoveContainer" containerID="74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06" Sep 8 23:55:30.702295 containerd[1483]: time="2025-09-08T23:55:30.701702459Z" level=info msg="RemoveContainer for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\"" Sep 8 23:55:30.714639 containerd[1483]: time="2025-09-08T23:55:30.713938711Z" level=info msg="RemoveContainer for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" returns successfully" Sep 8 23:55:30.714755 kubelet[2570]: I0908 23:55:30.714241 2570 scope.go:117] "RemoveContainer" containerID="252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec" Sep 8 23:55:30.717159 containerd[1483]: time="2025-09-08T23:55:30.716744246Z" level=info msg="RemoveContainer for \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\"" Sep 8 23:55:30.719549 containerd[1483]: time="2025-09-08T23:55:30.719454382Z" level=info msg="RemoveContainer for \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\" returns successfully" Sep 8 23:55:30.719718 kubelet[2570]: I0908 23:55:30.719687 2570 scope.go:117] "RemoveContainer" containerID="fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8" Sep 8 23:55:30.720671 containerd[1483]: time="2025-09-08T23:55:30.720601692Z" 
level=info msg="RemoveContainer for \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\"" Sep 8 23:55:30.723011 containerd[1483]: time="2025-09-08T23:55:30.722976671Z" level=info msg="RemoveContainer for \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\" returns successfully" Sep 8 23:55:30.723210 kubelet[2570]: I0908 23:55:30.723185 2570 scope.go:117] "RemoveContainer" containerID="c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce" Sep 8 23:55:30.724447 containerd[1483]: time="2025-09-08T23:55:30.724420778Z" level=info msg="RemoveContainer for \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\"" Sep 8 23:55:30.727520 containerd[1483]: time="2025-09-08T23:55:30.727471791Z" level=info msg="RemoveContainer for \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\" returns successfully" Sep 8 23:55:30.728759 kubelet[2570]: I0908 23:55:30.727732 2570 scope.go:117] "RemoveContainer" containerID="34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84" Sep 8 23:55:30.729126 containerd[1483]: time="2025-09-08T23:55:30.729093536Z" level=info msg="RemoveContainer for \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\"" Sep 8 23:55:30.733395 containerd[1483]: time="2025-09-08T23:55:30.733354819Z" level=info msg="RemoveContainer for \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\" returns successfully" Sep 8 23:55:30.733718 kubelet[2570]: I0908 23:55:30.733693 2570 scope.go:117] "RemoveContainer" containerID="74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06" Sep 8 23:55:30.734004 containerd[1483]: time="2025-09-08T23:55:30.733927014Z" level=error msg="ContainerStatus for \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\": not found" Sep 8 23:55:30.734089 kubelet[2570]: E0908 23:55:30.734060 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\": not found" containerID="74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06" Sep 8 23:55:30.734122 kubelet[2570]: I0908 23:55:30.734095 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06"} err="failed to get container status \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\": rpc error: code = NotFound desc = an error occurred when try to find container \"74d2a9d54acc443a6b65637eb37cb5b957ef450dcdb55a4c42ff021c6d951d06\": not found" Sep 8 23:55:30.734122 kubelet[2570]: I0908 23:55:30.734119 2570 scope.go:117] "RemoveContainer" containerID="252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec" Sep 8 23:55:30.734368 containerd[1483]: time="2025-09-08T23:55:30.734334210Z" level=error msg="ContainerStatus for \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\": not found" Sep 8 23:55:30.734559 kubelet[2570]: E0908 23:55:30.734488 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\": not found" containerID="252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec" Sep 8 23:55:30.734621 kubelet[2570]: I0908 23:55:30.734557 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec"} err="failed to get container status \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"252c14b5f9948761685fc7c0d853862dbe3892dc6cd3edaad61ffe8037c880ec\": not found" Sep 8 23:55:30.734621 kubelet[2570]: I0908 23:55:30.734606 2570 scope.go:117] "RemoveContainer" containerID="fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8" Sep 8 23:55:30.734851 containerd[1483]: time="2025-09-08T23:55:30.734774846Z" level=error msg="ContainerStatus for \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\": not found" Sep 8 23:55:30.734895 kubelet[2570]: E0908 23:55:30.734879 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\": not found" containerID="fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8" Sep 8 23:55:30.734920 kubelet[2570]: I0908 23:55:30.734902 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8"} err="failed to get container status \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd8d0eee65e8544b6a9c0f5b47b4c126598f98f66ab4da61701968c59221eef8\": not found" Sep 8 23:55:30.734920 kubelet[2570]: I0908 23:55:30.734917 2570 scope.go:117] "RemoveContainer" containerID="c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce" Sep 8 23:55:30.735141 containerd[1483]: time="2025-09-08T23:55:30.735117363Z" level=error msg="ContainerStatus for \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\": not found" Sep 8 23:55:30.735311 kubelet[2570]: E0908 23:55:30.735292 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\": not found" containerID="c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce" Sep 8 23:55:30.735357 kubelet[2570]: I0908 23:55:30.735317 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce"} err="failed to get container status \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8cd5adcd68553322e76d63c37430e208cb106ceb0db8eca5a2e9804e0cfa1ce\": not found" Sep 8 23:55:30.735357 kubelet[2570]: I0908 23:55:30.735334 2570 scope.go:117] 
"RemoveContainer" containerID="34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84" Sep 8 23:55:30.735589 containerd[1483]: time="2025-09-08T23:55:30.735506520Z" level=error msg="ContainerStatus for \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\": not found" Sep 8 23:55:30.735659 kubelet[2570]: E0908 23:55:30.735636 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\": not found" containerID="34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84" Sep 8 23:55:30.735701 kubelet[2570]: I0908 23:55:30.735682 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84"} err="failed to get container status \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\": rpc error: code = NotFound desc = an error occurred when try to find container \"34fface6e4ee17e3d21c9873afb33246531dda12c8fd69808607fa5c1bd96e84\": not found" Sep 8 23:55:31.071022 systemd[1]: var-lib-kubelet-pods-eee7c45c\x2d8983\x2d40d3\x2da9ec\x2da86028b8e647-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr94g.mount: Deactivated successfully. Sep 8 23:55:31.071139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a-rootfs.mount: Deactivated successfully. Sep 8 23:55:31.071205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62683eeeff4145ec1dc31a3476e18889fa6e88ee6f94264fa8b5b53feb98018a-shm.mount: Deactivated successfully. Sep 8 23:55:31.071258 systemd[1]: var-lib-kubelet-pods-5d300b42\x2d6e41\x2d4874\x2d841a\x2d8033a2de6915-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d78tnl.mount: Deactivated successfully. Sep 8 23:55:31.071316 systemd[1]: var-lib-kubelet-pods-5d300b42\x2d6e41\x2d4874\x2d841a\x2d8033a2de6915-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:55:31.071373 systemd[1]: var-lib-kubelet-pods-5d300b42\x2d6e41\x2d4874\x2d841a\x2d8033a2de6915-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:55:31.998385 sshd[4236]: Connection closed by 10.0.0.1 port 44882 Sep 8 23:55:31.998301 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:32.009914 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:44882.service: Deactivated successfully. Sep 8 23:55:32.011760 systemd[1]: session-23.scope: Deactivated successfully. Sep 8 23:55:32.012097 systemd[1]: session-23.scope: Consumed 1.039s CPU time, 26.9M memory peak. Sep 8 23:55:32.013358 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Sep 8 23:55:32.014905 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:48342.service - OpenSSH per-connection server daemon (10.0.0.1:48342). Sep 8 23:55:32.015673 systemd-logind[1468]: Removed session 23. 
Sep 8 23:55:32.068617 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 48342 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:32.069957 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:32.074643 systemd-logind[1468]: New session 24 of user core. Sep 8 23:55:32.081705 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 8 23:55:32.392548 kubelet[2570]: I0908 23:55:32.391778 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d300b42-6e41-4874-841a-8033a2de6915" path="/var/lib/kubelet/pods/5d300b42-6e41-4874-841a-8033a2de6915/volumes" Sep 8 23:55:32.392548 kubelet[2570]: I0908 23:55:32.392326 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee7c45c-8983-40d3-a9ec-a86028b8e647" path="/var/lib/kubelet/pods/eee7c45c-8983-40d3-a9ec-a86028b8e647/volumes" Sep 8 23:55:33.053753 sshd[4406]: Connection closed by 10.0.0.1 port 48342 Sep 8 23:55:33.052897 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:33.065921 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:48342.service: Deactivated successfully. Sep 8 23:55:33.068206 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:55:33.071043 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:55:33.081589 kubelet[2570]: I0908 23:55:33.077775 2570 memory_manager.go:355] "RemoveStaleState removing state" podUID="5d300b42-6e41-4874-841a-8033a2de6915" containerName="cilium-agent" Sep 8 23:55:33.081589 kubelet[2570]: I0908 23:55:33.077803 2570 memory_manager.go:355] "RemoveStaleState removing state" podUID="eee7c45c-8983-40d3-a9ec-a86028b8e647" containerName="cilium-operator" Sep 8 23:55:33.077909 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:48346.service - OpenSSH per-connection server daemon (10.0.0.1:48346). Sep 8 23:55:33.084315 systemd-logind[1468]: Removed session 24. Sep 8 23:55:33.097508 systemd[1]: Created slice kubepods-burstable-pode75360ab_a197_41ab_baae_86772927ffde.slice - libcontainer container kubepods-burstable-pode75360ab_a197_41ab_baae_86772927ffde.slice. Sep 8 23:55:33.128284 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 48346 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:33.130110 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:33.134444 systemd-logind[1468]: New session 25 of user core. Sep 8 23:55:33.149748 systemd[1]: Started session-25.scope - Session 25 of User core. 
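[Editor's note] The mount units deactivated earlier (var-lib-kubelet-pods-…\x2d…\x7eprojected-…​.mount) use systemd's unit-name escaping: "-" separates path components and "\xNN" escapes the byte 0xNN, which is why every hyphen inside the pod UID appears as \x2d and the "~" in kubernetes.io~projected as \x7e. `systemd-escape --unescape --path` decodes these; a small Go equivalent, written from the escaping rules rather than systemd's source:

```go
// Decode a systemd-escaped mount unit name back into a filesystem path.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/') // "-" is the path separator in unit names
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // \x2d -> '-', \x7e -> '~', etc.
				i += 3
				continue
			}
			b.WriteByte(name[i])
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnit(
		`var-lib-kubelet-pods-5d300b42\x2d6e41\x2d4874\x2d841a\x2d8033a2de6915-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount`))
	// Output: /var/lib/kubelet/pods/5d300b42-6e41-4874-841a-8033a2de6915/volumes/kubernetes.io~projected/hubble-tls
}
```

The decoded path matches the "Cleaned up orphaned pod volumes dir" entries above, which report the same /var/lib/kubelet/pods/<uid>/volumes directories.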
Sep 8 23:55:33.191052 kubelet[2570]: I0908 23:55:33.190998 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-etc-cni-netd\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191052 kubelet[2570]: I0908 23:55:33.191044 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-lib-modules\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191224 kubelet[2570]: I0908 23:55:33.191068 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjhc\" (UniqueName: \"kubernetes.io/projected/e75360ab-a197-41ab-baae-86772927ffde-kube-api-access-kpjhc\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191224 kubelet[2570]: I0908 23:55:33.191087 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-cilium-cgroup\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191224 kubelet[2570]: I0908 23:55:33.191108 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-cilium-run\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191224 kubelet[2570]: I0908 23:55:33.191131 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-bpf-maps\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191224 kubelet[2570]: I0908 23:55:33.191151 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-host-proc-sys-kernel\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191224 kubelet[2570]: I0908 23:55:33.191167 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-hostproc\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191348 kubelet[2570]: I0908 23:55:33.191184 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-cni-path\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191348 kubelet[2570]: I0908 23:55:33.191200 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/e75360ab-a197-41ab-baae-86772927ffde-clustermesh-secrets\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191348 kubelet[2570]: I0908 23:55:33.191216 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e75360ab-a197-41ab-baae-86772927ffde-cilium-config-path\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191348 kubelet[2570]: I0908 23:55:33.191230 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e75360ab-a197-41ab-baae-86772927ffde-hubble-tls\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191348 kubelet[2570]: I0908 23:55:33.191247 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-xtables-lock\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191348 kubelet[2570]: I0908 23:55:33.191263 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e75360ab-a197-41ab-baae-86772927ffde-cilium-ipsec-secrets\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.191460 kubelet[2570]: I0908 23:55:33.191279 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e75360ab-a197-41ab-baae-86772927ffde-host-proc-sys-net\") pod \"cilium-t8kv2\" (UID: \"e75360ab-a197-41ab-baae-86772927ffde\") " pod="kube-system/cilium-t8kv2" Sep 8 23:55:33.201120 sshd[4421]: Connection closed by 10.0.0.1 port 48346 Sep 8 23:55:33.200976 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:33.218091 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:48346.service: Deactivated successfully. Sep 8 23:55:33.220936 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:55:33.223146 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:55:33.230894 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:48358.service - OpenSSH per-connection server daemon (10.0.0.1:48358). Sep 8 23:55:33.232237 systemd-logind[1468]: Removed session 25. Sep 8 23:55:33.269029 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 48358 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:33.270321 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:33.274409 systemd-logind[1468]: New session 26 of user core. Sep 8 23:55:33.284754 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 8 23:55:33.402616 containerd[1483]: time="2025-09-08T23:55:33.402550207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8kv2,Uid:e75360ab-a197-41ab-baae-86772927ffde,Namespace:kube-system,Attempt:0,}" Sep 8 23:55:33.419401 containerd[1483]: time="2025-09-08T23:55:33.418878312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:55:33.419401 containerd[1483]: time="2025-09-08T23:55:33.419241069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:55:33.419401 containerd[1483]: time="2025-09-08T23:55:33.419259349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:33.419401 containerd[1483]: time="2025-09-08T23:55:33.419346988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:33.438758 systemd[1]: Started cri-containerd-d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12.scope - libcontainer container d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12. Sep 8 23:55:33.457925 kubelet[2570]: E0908 23:55:33.457873 2570 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:55:33.459993 containerd[1483]: time="2025-09-08T23:55:33.459946851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8kv2,Uid:e75360ab-a197-41ab-baae-86772927ffde,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\"" Sep 8 23:55:33.462855 containerd[1483]: time="2025-09-08T23:55:33.462824708Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:55:33.477581 containerd[1483]: time="2025-09-08T23:55:33.477525426Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc\"" Sep 8 23:55:33.479609 containerd[1483]: time="2025-09-08T23:55:33.478830935Z" level=info msg="StartContainer for \"283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc\"" Sep 8 23:55:33.504762 systemd[1]: Started cri-containerd-283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc.scope - libcontainer container 283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc. Sep 8 23:55:33.525880 containerd[1483]: time="2025-09-08T23:55:33.525839745Z" level=info msg="StartContainer for \"283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc\" returns successfully" Sep 8 23:55:33.535429 systemd[1]: cri-containerd-283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc.scope: Deactivated successfully. 
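[Editor's note] The containerd lines above are the visible half of the standard CRI call sequence for the new cilium-t8kv2 pod: RunPodSandbox, then CreateContainer within the returned sandbox, then StartContainer. A minimal sketch of the same three RPCs against containerd's CRI socket using the cri-api client — the socket path, image name, and command are illustrative assumptions, not taken from the log, and in normal operation kubelet owns this endpoint:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Mirrors: RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8kv2,...}
	sbConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-t8kv2",
			Uid:       "e75360ab-a197-41ab-baae-86772927ffde",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors: CreateContainer within sandbox for &ContainerMetadata{Name:mount-cgroup,...}
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sbConfig,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.16"}, // assumed; not in log
			Command:  []string{"sh", "-c", "echo init"},                           // assumed; not in log
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s in sandbox %s", c.ContainerId, sb.PodSandboxId)
}
```

The "Started cri-containerd-<id>.scope" / "Deactivated successfully" / "shim disconnected" triplets that follow each StartContainer are expected here: mount-cgroup and the other Cilium init containers run to completion and exit, so their scopes and shims are torn down immediately.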
Sep 8 23:55:33.565246 containerd[1483]: time="2025-09-08T23:55:33.565180699Z" level=info msg="shim disconnected" id=283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc namespace=k8s.io Sep 8 23:55:33.565246 containerd[1483]: time="2025-09-08T23:55:33.565247658Z" level=warning msg="cleaning up after shim disconnected" id=283f2ca196fa6ae25a8f4c0e09d2a1d35cf299842412b212e7376e1692bd69dc namespace=k8s.io Sep 8 23:55:33.565246 containerd[1483]: time="2025-09-08T23:55:33.565257538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:33.696402 containerd[1483]: time="2025-09-08T23:55:33.696288652Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:55:33.715299 containerd[1483]: time="2025-09-08T23:55:33.715250535Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19\"" Sep 8 23:55:33.715781 containerd[1483]: time="2025-09-08T23:55:33.715752771Z" level=info msg="StartContainer for \"b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19\"" Sep 8 23:55:33.745739 systemd[1]: Started cri-containerd-b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19.scope - libcontainer container b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19. Sep 8 23:55:33.764844 containerd[1483]: time="2025-09-08T23:55:33.764805764Z" level=info msg="StartContainer for \"b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19\" returns successfully" Sep 8 23:55:33.771347 systemd[1]: cri-containerd-b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19.scope: Deactivated successfully. Sep 8 23:55:33.790049 containerd[1483]: time="2025-09-08T23:55:33.789996235Z" level=info msg="shim disconnected" id=b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19 namespace=k8s.io Sep 8 23:55:33.790049 containerd[1483]: time="2025-09-08T23:55:33.790046795Z" level=warning msg="cleaning up after shim disconnected" id=b40fcdc29b60353645286d815dc06f02b26e3bf3186c58c029fa8aacbc00fe19 namespace=k8s.io Sep 8 23:55:33.790049 containerd[1483]: time="2025-09-08T23:55:33.790055275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:34.696976 containerd[1483]: time="2025-09-08T23:55:34.696936002Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:55:34.715946 containerd[1483]: time="2025-09-08T23:55:34.715790929Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522\"" Sep 8 23:55:34.716425 containerd[1483]: time="2025-09-08T23:55:34.716396844Z" level=info msg="StartContainer for \"4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522\"" Sep 8 23:55:34.743747 systemd[1]: Started cri-containerd-4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522.scope - libcontainer container 4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522. 
Sep 8 23:55:34.768809 systemd[1]: cri-containerd-4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522.scope: Deactivated successfully. Sep 8 23:55:34.770625 containerd[1483]: time="2025-09-08T23:55:34.770015849Z" level=info msg="StartContainer for \"4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522\" returns successfully" Sep 8 23:55:34.791191 containerd[1483]: time="2025-09-08T23:55:34.791127158Z" level=info msg="shim disconnected" id=4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522 namespace=k8s.io Sep 8 23:55:34.791191 containerd[1483]: time="2025-09-08T23:55:34.791182077Z" level=warning msg="cleaning up after shim disconnected" id=4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522 namespace=k8s.io Sep 8 23:55:34.791191 containerd[1483]: time="2025-09-08T23:55:34.791193197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:35.297555 systemd[1]: run-containerd-runc-k8s.io-4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522-runc.kpgaFD.mount: Deactivated successfully. Sep 8 23:55:35.297689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4921ccb3e29150d19e2bbccb1ac7c8002c6576f49ef31645806e370001270522-rootfs.mount: Deactivated successfully. Sep 8 23:55:35.701659 containerd[1483]: time="2025-09-08T23:55:35.701613017Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:55:35.774913 containerd[1483]: time="2025-09-08T23:55:35.774850436Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef\"" Sep 8 23:55:35.775424 containerd[1483]: time="2025-09-08T23:55:35.775381392Z" level=info msg="StartContainer for \"ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef\"" Sep 8 23:55:35.800770 systemd[1]: Started cri-containerd-ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef.scope - libcontainer container ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef. Sep 8 23:55:35.823822 systemd[1]: cri-containerd-ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef.scope: Deactivated successfully. Sep 8 23:55:35.827594 containerd[1483]: time="2025-09-08T23:55:35.827542098Z" level=info msg="StartContainer for \"ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef\" returns successfully" Sep 8 23:55:35.847372 containerd[1483]: time="2025-09-08T23:55:35.847308301Z" level=info msg="shim disconnected" id=ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef namespace=k8s.io Sep 8 23:55:35.847372 containerd[1483]: time="2025-09-08T23:55:35.847362620Z" level=warning msg="cleaning up after shim disconnected" id=ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef namespace=k8s.io Sep 8 23:55:35.847372 containerd[1483]: time="2025-09-08T23:55:35.847371260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:55:36.300498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea8e8f9bac04069d62d71ea9caef4101395124ace810ac89cc9aa5b52434c5ef-rootfs.mount: Deactivated successfully. 
Sep 8 23:55:36.712321 containerd[1483]: time="2025-09-08T23:55:36.712113120Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:55:36.730227 containerd[1483]: time="2025-09-08T23:55:36.730166420Z" level=info msg="CreateContainer within sandbox \"d2382aca7923ba49d1a0e31209d16b4b8534e65308c72ba9eb7f7741bc714f12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a96c83a36c962d42bdd318cdec01d1c561013e0ebe9b96277ae611d12b6f29ee\"" Sep 8 23:55:36.730806 containerd[1483]: time="2025-09-08T23:55:36.730781175Z" level=info msg="StartContainer for \"a96c83a36c962d42bdd318cdec01d1c561013e0ebe9b96277ae611d12b6f29ee\"" Sep 8 23:55:36.774779 systemd[1]: Started cri-containerd-a96c83a36c962d42bdd318cdec01d1c561013e0ebe9b96277ae611d12b6f29ee.scope - libcontainer container a96c83a36c962d42bdd318cdec01d1c561013e0ebe9b96277ae611d12b6f29ee. Sep 8 23:55:36.802848 containerd[1483]: time="2025-09-08T23:55:36.802791456Z" level=info msg="StartContainer for \"a96c83a36c962d42bdd318cdec01d1c561013e0ebe9b96277ae611d12b6f29ee\" returns successfully" Sep 8 23:55:37.079593 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 8 23:55:37.759088 kubelet[2570]: I0908 23:55:37.758975 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t8kv2" podStartSLOduration=4.758944997 podStartE2EDuration="4.758944997s" podCreationTimestamp="2025-09-08 23:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:55:37.758797078 +0000 UTC m=+79.448515558" watchObservedRunningTime="2025-09-08 23:55:37.758944997 +0000 UTC m=+79.448663477" Sep 8 23:55:39.933922 systemd-networkd[1412]: lxc_health: Link UP Sep 8 23:55:39.934174 systemd-networkd[1412]: lxc_health: Gained carrier Sep 8 23:55:41.018716 systemd-networkd[1412]: lxc_health: Gained IPv6LL Sep 8 23:55:41.720032 systemd[1]: run-containerd-runc-k8s.io-a96c83a36c962d42bdd318cdec01d1c561013e0ebe9b96277ae611d12b6f29ee-runc.NQB4Yo.mount: Deactivated successfully. Sep 8 23:55:46.041999 sshd[4430]: Connection closed by 10.0.0.1 port 48358 Sep 8 23:55:46.042847 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:46.047034 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:48358.service: Deactivated successfully. Sep 8 23:55:46.049196 systemd[1]: session-26.scope: Deactivated successfully. Sep 8 23:55:46.051144 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Sep 8 23:55:46.052269 systemd-logind[1468]: Removed session 26.
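[Editor's note] The pod_startup_latency_tracker line above is internally consistent: firstStartedPulling and lastFinishedPulling are the zero time ("0001-01-01 00:00:00"), meaning no image pull was needed, so podStartSLOduration reduces to watchObservedRunningTime minus podCreationTimestamp, i.e. 23:55:37.758944997 − 23:55:33 = 4.758944997s. A small check of that arithmetic, assuming only the timestamp layout used in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-08 23:55:33 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-09-08 23:55:37.758944997 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 4.758944997
}
```

The m=+79.448663477 suffix is klog's monotonic-clock reading (seconds since kubelet start), which is why it differs from the wall-clock timestamps.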