Aug 13 07:25:48.868686 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 13 07:25:48.868721 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Aug 12 21:42:02 -00 2025 Aug 13 07:25:48.868732 kernel: KASLR enabled Aug 13 07:25:48.868737 kernel: efi: EFI v2.7 by EDK II Aug 13 07:25:48.868743 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Aug 13 07:25:48.868748 kernel: random: crng init done Aug 13 07:25:48.868755 kernel: secureboot: Secure boot disabled Aug 13 07:25:48.868761 kernel: ACPI: Early table checksum verification disabled Aug 13 07:25:48.868767 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Aug 13 07:25:48.868774 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Aug 13 07:25:48.868779 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868785 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868791 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868796 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868803 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868811 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868817 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868823 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868829 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 07:25:48.868835 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Aug 13 07:25:48.868841 kernel: NUMA: Failed to initialise from firmware Aug 13 07:25:48.868847 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Aug 13 07:25:48.868853 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Aug 13 07:25:48.868859 kernel: Zone ranges: Aug 13 07:25:48.868865 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Aug 13 07:25:48.868872 kernel: DMA32 empty Aug 13 07:25:48.868878 kernel: Normal empty Aug 13 07:25:48.868884 kernel: Movable zone start for each node Aug 13 07:25:48.868890 kernel: Early memory node ranges Aug 13 07:25:48.868896 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Aug 13 07:25:48.868902 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Aug 13 07:25:48.868908 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Aug 13 07:25:48.868914 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Aug 13 07:25:48.868920 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Aug 13 07:25:48.868926 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Aug 13 07:25:48.868932 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Aug 13 07:25:48.868938 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Aug 13 07:25:48.868945 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Aug 13 07:25:48.868951 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Aug 13 07:25:48.868958 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Aug 13 07:25:48.868967 kernel: psci: 
probing for conduit method from ACPI. Aug 13 07:25:48.868973 kernel: psci: PSCIv1.1 detected in firmware. Aug 13 07:25:48.868979 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 07:25:48.868987 kernel: psci: Trusted OS migration not required Aug 13 07:25:48.868993 kernel: psci: SMC Calling Convention v1.1 Aug 13 07:25:48.869000 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Aug 13 07:25:48.869006 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 07:25:48.869013 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 07:25:48.869019 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Aug 13 07:25:48.869026 kernel: Detected PIPT I-cache on CPU0 Aug 13 07:25:48.869032 kernel: CPU features: detected: GIC system register CPU interface Aug 13 07:25:48.869038 kernel: CPU features: detected: Hardware dirty bit management Aug 13 07:25:48.869045 kernel: CPU features: detected: Spectre-v4 Aug 13 07:25:48.869052 kernel: CPU features: detected: Spectre-BHB Aug 13 07:25:48.869059 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 13 07:25:48.869065 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 13 07:25:48.869072 kernel: CPU features: detected: ARM erratum 1418040 Aug 13 07:25:48.869078 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 13 07:25:48.869084 kernel: alternatives: applying boot alternatives Aug 13 07:25:48.869092 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c44ba8b4c0c81c1bcadc13a1606b9de202ee4e4226c47e1c865eaa5fc436b169 Aug 13 07:25:48.869098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 07:25:48.869105 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 07:25:48.869111 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 07:25:48.869118 kernel: Fallback order for Node 0: 0 Aug 13 07:25:48.869126 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Aug 13 07:25:48.869132 kernel: Policy zone: DMA Aug 13 07:25:48.869138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 07:25:48.869145 kernel: software IO TLB: area num 4. Aug 13 07:25:48.869151 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Aug 13 07:25:48.869158 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved) Aug 13 07:25:48.869165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 13 07:25:48.869171 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 07:25:48.869178 kernel: rcu: RCU event tracing is enabled. Aug 13 07:25:48.869185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 13 07:25:48.869191 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 07:25:48.869198 kernel: Tracing variant of Tasks RCU enabled. Aug 13 07:25:48.869205 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 07:25:48.869212 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 13 07:25:48.869218 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 07:25:48.869225 kernel: GICv3: 256 SPIs implemented Aug 13 07:25:48.869231 kernel: GICv3: 0 Extended SPIs implemented Aug 13 07:25:48.869237 kernel: Root IRQ handler: gic_handle_irq Aug 13 07:25:48.869244 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 13 07:25:48.869250 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Aug 13 07:25:48.869256 kernel: ITS [mem 0x08080000-0x0809ffff] Aug 13 07:25:48.869263 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Aug 13 07:25:48.869269 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Aug 13 07:25:48.869277 kernel: GICv3: using LPI property table @0x00000000400f0000 Aug 13 07:25:48.869283 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Aug 13 07:25:48.869290 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 07:25:48.869296 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 07:25:48.869303 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 13 07:25:48.869309 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 13 07:25:48.869316 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 13 07:25:48.869322 kernel: arm-pv: using stolen time PV Aug 13 07:25:48.869329 kernel: Console: colour dummy device 80x25 Aug 13 07:25:48.869336 kernel: ACPI: Core revision 20230628 Aug 13 07:25:48.869342 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 13 07:25:48.869350 kernel: pid_max: default: 32768 minimum: 301 Aug 13 07:25:48.869357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 07:25:48.869363 kernel: landlock: Up and running. Aug 13 07:25:48.869370 kernel: SELinux: Initializing. Aug 13 07:25:48.869376 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:25:48.869383 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 07:25:48.869389 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:25:48.869396 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 13 07:25:48.869403 kernel: rcu: Hierarchical SRCU implementation. Aug 13 07:25:48.869411 kernel: rcu: Max phase no-delay instances is 400. Aug 13 07:25:48.869417 kernel: Platform MSI: ITS@0x8080000 domain created Aug 13 07:25:48.869424 kernel: PCI/MSI: ITS@0x8080000 domain created Aug 13 07:25:48.869430 kernel: Remapping and enabling EFI services. Aug 13 07:25:48.869437 kernel: smp: Bringing up secondary CPUs ... 
Aug 13 07:25:48.869444 kernel: Detected PIPT I-cache on CPU1 Aug 13 07:25:48.869450 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Aug 13 07:25:48.869457 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Aug 13 07:25:48.869464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 07:25:48.869471 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 13 07:25:48.869478 kernel: Detected PIPT I-cache on CPU2 Aug 13 07:25:48.869489 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Aug 13 07:25:48.869497 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Aug 13 07:25:48.869505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 07:25:48.869512 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Aug 13 07:25:48.869518 kernel: Detected PIPT I-cache on CPU3 Aug 13 07:25:48.869525 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Aug 13 07:25:48.869532 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Aug 13 07:25:48.869541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 07:25:48.869548 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Aug 13 07:25:48.869555 kernel: smp: Brought up 1 node, 4 CPUs Aug 13 07:25:48.869562 kernel: SMP: Total of 4 processors activated. Aug 13 07:25:48.869569 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 07:25:48.869576 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 13 07:25:48.869583 kernel: CPU features: detected: Common not Private translations Aug 13 07:25:48.869590 kernel: CPU features: detected: CRC32 instructions Aug 13 07:25:48.869598 kernel: CPU features: detected: Enhanced Virtualization Traps Aug 13 07:25:48.869605 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 13 07:25:48.869612 kernel: CPU features: detected: LSE atomic instructions Aug 13 07:25:48.869619 kernel: CPU features: detected: Privileged Access Never Aug 13 07:25:48.869626 kernel: CPU features: detected: RAS Extension Support Aug 13 07:25:48.869639 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Aug 13 07:25:48.869647 kernel: CPU: All CPU(s) started at EL1 Aug 13 07:25:48.869654 kernel: alternatives: applying system-wide alternatives Aug 13 07:25:48.869661 kernel: devtmpfs: initialized Aug 13 07:25:48.869668 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 07:25:48.869676 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 13 07:25:48.869683 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 07:25:48.869696 kernel: SMBIOS 3.0.0 present. 
Aug 13 07:25:48.869705 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Aug 13 07:25:48.869712 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 07:25:48.869719 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 07:25:48.869726 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 07:25:48.869733 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 07:25:48.869742 kernel: audit: initializing netlink subsys (disabled) Aug 13 07:25:48.869749 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Aug 13 07:25:48.869756 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 07:25:48.869763 kernel: cpuidle: using governor menu Aug 13 07:25:48.869770 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 13 07:25:48.869777 kernel: ASID allocator initialised with 32768 entries Aug 13 07:25:48.869783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 07:25:48.869790 kernel: Serial: AMBA PL011 UART driver Aug 13 07:25:48.869797 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 13 07:25:48.869806 kernel: Modules: 0 pages in range for non-PLT usage Aug 13 07:25:48.869814 kernel: Modules: 509248 pages in range for PLT usage Aug 13 07:25:48.869821 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 07:25:48.869827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 07:25:48.869834 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 07:25:48.869845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 07:25:48.869854 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 07:25:48.869863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 07:25:48.869872 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 07:25:48.869880 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 07:25:48.869887 kernel: ACPI: Added _OSI(Module Device) Aug 13 07:25:48.869895 kernel: ACPI: Added _OSI(Processor Device) Aug 13 07:25:48.869902 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 07:25:48.869909 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 07:25:48.869916 kernel: ACPI: Interpreter enabled Aug 13 07:25:48.869923 kernel: ACPI: Using GIC for interrupt routing Aug 13 07:25:48.869930 kernel: ACPI: MCFG table detected, 1 entries Aug 13 07:25:48.869937 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Aug 13 07:25:48.869944 kernel: printk: console [ttyAMA0] enabled Aug 13 07:25:48.869953 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 07:25:48.870086 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 07:25:48.870158 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 07:25:48.870221 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 07:25:48.870826 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Aug 13 07:25:48.870901 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Aug 13 07:25:48.870911 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Aug 13 07:25:48.870924 kernel: PCI host bridge to bus 0000:00 Aug 13 07:25:48.870992 kernel: 
pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Aug 13 07:25:48.871063 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 13 07:25:48.871119 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Aug 13 07:25:48.871175 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 07:25:48.871254 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Aug 13 07:25:48.871328 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Aug 13 07:25:48.871395 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Aug 13 07:25:48.871460 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Aug 13 07:25:48.871523 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Aug 13 07:25:48.871586 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Aug 13 07:25:48.871666 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Aug 13 07:25:48.871746 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Aug 13 07:25:48.871809 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Aug 13 07:25:48.871866 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 13 07:25:48.871923 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Aug 13 07:25:48.871932 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 13 07:25:48.871940 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 13 07:25:48.871947 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 13 07:25:48.871954 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 13 07:25:48.871961 kernel: iommu: Default domain type: Translated Aug 13 07:25:48.871970 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 07:25:48.871977 kernel: efivars: Registered efivars operations Aug 13 07:25:48.871984 kernel: vgaarb: loaded Aug 13 07:25:48.871990 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 07:25:48.871997 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 07:25:48.872004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 07:25:48.872012 kernel: pnp: PnP ACPI init Aug 13 07:25:48.872085 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Aug 13 07:25:48.872095 kernel: pnp: PnP ACPI: found 1 devices Aug 13 07:25:48.872104 kernel: NET: Registered PF_INET protocol family Aug 13 07:25:48.872111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 07:25:48.872118 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 07:25:48.872125 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 07:25:48.872132 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 07:25:48.872139 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 07:25:48.872146 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 07:25:48.872153 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:25:48.872162 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 07:25:48.872169 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 07:25:48.872176 kernel: PCI: CLS 0 bytes, default 64 Aug 13 07:25:48.872183 kernel: kvm [1]: HYP mode not available Aug 13 07:25:48.872190 kernel: Initialise system trusted keyrings Aug 
13 07:25:48.872197 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 07:25:48.872204 kernel: Key type asymmetric registered Aug 13 07:25:48.872211 kernel: Asymmetric key parser 'x509' registered Aug 13 07:25:48.872218 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 07:25:48.872225 kernel: io scheduler mq-deadline registered Aug 13 07:25:48.872234 kernel: io scheduler kyber registered Aug 13 07:25:48.872241 kernel: io scheduler bfq registered Aug 13 07:25:48.872248 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 13 07:25:48.872255 kernel: ACPI: button: Power Button [PWRB] Aug 13 07:25:48.872263 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 13 07:25:48.872329 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Aug 13 07:25:48.872339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 07:25:48.872346 kernel: thunder_xcv, ver 1.0 Aug 13 07:25:48.872352 kernel: thunder_bgx, ver 1.0 Aug 13 07:25:48.872361 kernel: nicpf, ver 1.0 Aug 13 07:25:48.872368 kernel: nicvf, ver 1.0 Aug 13 07:25:48.872436 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 07:25:48.872496 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T07:25:48 UTC (1755069948) Aug 13 07:25:48.872506 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 07:25:48.872513 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Aug 13 07:25:48.872520 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 07:25:48.872527 kernel: watchdog: Hard watchdog permanently disabled Aug 13 07:25:48.872536 kernel: NET: Registered PF_INET6 protocol family Aug 13 07:25:48.872543 kernel: Segment Routing with IPv6 Aug 13 07:25:48.872550 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 07:25:48.872557 kernel: NET: Registered PF_PACKET protocol family Aug 13 07:25:48.872563 kernel: Key type dns_resolver registered Aug 13 07:25:48.872570 kernel: registered taskstats version 1 Aug 13 07:25:48.872577 kernel: Loading compiled-in X.509 certificates Aug 13 07:25:48.872584 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: b805f03ae64b71ea1aa3cf76d07ec816116f6d0c' Aug 13 07:25:48.872591 kernel: Key type .fscrypt registered Aug 13 07:25:48.872599 kernel: Key type fscrypt-provisioning registered Aug 13 07:25:48.872606 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 07:25:48.872613 kernel: ima: Allocated hash algorithm: sha1 Aug 13 07:25:48.872620 kernel: ima: No architecture policies found Aug 13 07:25:48.872627 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 07:25:48.872643 kernel: clk: Disabling unused clocks Aug 13 07:25:48.872650 kernel: Freeing unused kernel memory: 38400K Aug 13 07:25:48.872657 kernel: Run /init as init process Aug 13 07:25:48.872664 kernel: with arguments: Aug 13 07:25:48.872674 kernel: /init Aug 13 07:25:48.872680 kernel: with environment: Aug 13 07:25:48.872687 kernel: HOME=/ Aug 13 07:25:48.872712 kernel: TERM=linux Aug 13 07:25:48.872720 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 07:25:48.872728 systemd[1]: Successfully made /usr/ read-only. 
Aug 13 07:25:48.872737 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 07:25:48.872748 systemd[1]: Detected virtualization kvm. Aug 13 07:25:48.872755 systemd[1]: Detected architecture arm64. Aug 13 07:25:48.872763 systemd[1]: Running in initrd. Aug 13 07:25:48.872770 systemd[1]: No hostname configured, using default hostname. Aug 13 07:25:48.872778 systemd[1]: Hostname set to . Aug 13 07:25:48.872785 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:25:48.872792 systemd[1]: Queued start job for default target initrd.target. Aug 13 07:25:48.872800 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:25:48.872808 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:25:48.872818 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 07:25:48.872826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:25:48.872833 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 07:25:48.872842 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 07:25:48.872850 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 07:25:48.872858 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 07:25:48.872867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:25:48.872875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:25:48.872882 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:25:48.872890 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:25:48.872897 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:25:48.872905 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:25:48.872912 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:25:48.872920 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:25:48.872927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:25:48.872937 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 07:25:48.872945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:25:48.872953 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:25:48.872960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:25:48.872968 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:25:48.872976 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 07:25:48.872983 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:25:48.872991 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 07:25:48.873000 systemd[1]: Starting systemd-fsck-usr.service... 
Aug 13 07:25:48.873008 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:25:48.873015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:25:48.873023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:25:48.873031 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 07:25:48.873039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:25:48.873049 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 07:25:48.873057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:25:48.873065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:25:48.873092 systemd-journald[238]: Collecting audit messages is disabled. Aug 13 07:25:48.873112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:25:48.873121 systemd-journald[238]: Journal started Aug 13 07:25:48.873139 systemd-journald[238]: Runtime Journal (/run/log/journal/5544ce6fa5da4cefbe2469fda544b557) is 5.9M, max 47.3M, 41.4M free. Aug 13 07:25:48.860054 systemd-modules-load[239]: Inserted module 'overlay' Aug 13 07:25:48.876907 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:25:48.878811 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:25:48.884021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 07:25:48.884039 kernel: Bridge firewalling registered Aug 13 07:25:48.882812 systemd-modules-load[239]: Inserted module 'br_netfilter' Aug 13 07:25:48.882991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:25:48.885228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:25:48.895833 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 07:25:48.897298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:25:48.899854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:25:48.902269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:25:48.909073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:25:48.910414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:25:48.913360 dracut-cmdline[264]: dracut-dracut-053 Aug 13 07:25:48.913360 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c44ba8b4c0c81c1bcadc13a1606b9de202ee4e4226c47e1c865eaa5fc436b169 Aug 13 07:25:48.917375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:25:48.926838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:25:48.956131 systemd-resolved[298]: Positive Trust Anchors: Aug 13 07:25:48.956150 systemd-resolved[298]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:25:48.956181 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:25:48.960716 systemd-resolved[298]: Defaulting to hostname 'linux'. Aug 13 07:25:48.961622 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:25:48.965195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:25:48.979712 kernel: SCSI subsystem initialized Aug 13 07:25:48.984707 kernel: Loading iSCSI transport class v2.0-870. Aug 13 07:25:48.993722 kernel: iscsi: registered transport (tcp) Aug 13 07:25:49.005761 kernel: iscsi: registered transport (qla4xxx) Aug 13 07:25:49.005781 kernel: QLogic iSCSI HBA Driver Aug 13 07:25:49.045662 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 07:25:49.054834 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 07:25:49.071950 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 07:25:49.071989 kernel: device-mapper: uevent: version 1.0.3 Aug 13 07:25:49.072766 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 07:25:49.118723 kernel: raid6: neonx8 gen() 15730 MB/s Aug 13 07:25:49.135719 kernel: raid6: neonx4 gen() 15770 MB/s Aug 13 07:25:49.152708 kernel: raid6: neonx2 gen() 13179 MB/s Aug 13 07:25:49.169707 kernel: raid6: neonx1 gen() 10513 MB/s Aug 13 07:25:49.186714 kernel: raid6: int64x8 gen() 6791 MB/s Aug 13 07:25:49.203709 kernel: raid6: int64x4 gen() 7340 MB/s Aug 13 07:25:49.220707 kernel: raid6: int64x2 gen() 6105 MB/s Aug 13 07:25:49.237705 kernel: raid6: int64x1 gen() 5050 MB/s Aug 13 07:25:49.237720 kernel: raid6: using algorithm neonx4 gen() 15770 MB/s Aug 13 07:25:49.254711 kernel: raid6: .... xor() 12472 MB/s, rmw enabled Aug 13 07:25:49.254726 kernel: raid6: using neon recovery algorithm Aug 13 07:25:49.259991 kernel: xor: measuring software checksum speed Aug 13 07:25:49.260005 kernel: 8regs : 21584 MB/sec Aug 13 07:25:49.260014 kernel: 32regs : 21704 MB/sec Aug 13 07:25:49.260894 kernel: arm64_neon : 27823 MB/sec Aug 13 07:25:49.260909 kernel: xor: using function: arm64_neon (27823 MB/sec) Aug 13 07:25:49.314725 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 07:25:49.324434 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:25:49.340840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:25:49.355776 systemd-udevd[464]: Using default interface naming scheme 'v255'. Aug 13 07:25:49.359390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:25:49.365827 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 13 07:25:49.376343 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation Aug 13 07:25:49.404649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:25:49.416812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:25:49.453908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:25:49.461833 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 07:25:49.472724 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 07:25:49.474122 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:25:49.475850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:25:49.478601 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:25:49.486849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 07:25:49.497725 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:25:49.513726 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 13 07:25:49.513891 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 13 07:25:49.515818 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 07:25:49.515847 kernel: GPT:9289727 != 19775487 Aug 13 07:25:49.516777 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 07:25:49.516891 kernel: GPT:9289727 != 19775487 Aug 13 07:25:49.517909 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 07:25:49.517946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:25:49.524837 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:25:49.524945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:25:49.527703 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:25:49.529421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:25:49.529599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:25:49.532645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:25:49.541905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:25:49.554715 kernel: BTRFS: device fsid 66ef7c2c-768e-46b2-8baa-a2b24df44a90 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (522) Aug 13 07:25:49.554753 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519) Aug 13 07:25:49.555885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 07:25:49.559720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:25:49.581173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 07:25:49.587192 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 07:25:49.588296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 07:25:49.596362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:25:49.615874 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Aug 13 07:25:49.617717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 07:25:49.623273 disk-uuid[552]: Primary Header is updated. Aug 13 07:25:49.623273 disk-uuid[552]: Secondary Entries is updated. Aug 13 07:25:49.623273 disk-uuid[552]: Secondary Header is updated. Aug 13 07:25:49.628731 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:25:49.640926 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:25:50.638725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 07:25:50.639210 disk-uuid[553]: The operation has completed successfully. Aug 13 07:25:50.662068 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 07:25:50.662168 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 07:25:50.706901 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 07:25:50.711043 sh[575]: Success Aug 13 07:25:50.730720 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 07:25:50.769076 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 07:25:50.776033 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 07:25:50.777527 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 07:25:50.789212 kernel: BTRFS info (device dm-0): first mount of filesystem 66ef7c2c-768e-46b2-8baa-a2b24df44a90 Aug 13 07:25:50.789265 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:25:50.789277 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 07:25:50.789288 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 07:25:50.789778 kernel: BTRFS info (device dm-0): using free space tree Aug 13 07:25:50.793462 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 07:25:50.794810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 07:25:50.800863 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 07:25:50.802965 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 07:25:50.815155 kernel: BTRFS info (device vda6): first mount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:25:50.815195 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:25:50.815206 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:25:50.818736 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:25:50.821738 kernel: BTRFS info (device vda6): last unmount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:25:50.824666 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 07:25:50.835991 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 07:25:50.896909 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:25:50.912833 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 13 07:25:50.921750 ignition[665]: Ignition 2.20.0 Aug 13 07:25:50.921759 ignition[665]: Stage: fetch-offline Aug 13 07:25:50.921792 ignition[665]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:25:50.921801 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:25:50.921953 ignition[665]: parsed url from cmdline: "" Aug 13 07:25:50.921956 ignition[665]: no config URL provided Aug 13 07:25:50.921961 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:25:50.921967 ignition[665]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:25:50.921989 ignition[665]: op(1): [started] loading QEMU firmware config module Aug 13 07:25:50.921993 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 13 07:25:50.928954 ignition[665]: op(1): [finished] loading QEMU firmware config module Aug 13 07:25:50.946619 systemd-networkd[763]: lo: Link UP Aug 13 07:25:50.946640 systemd-networkd[763]: lo: Gained carrier Aug 13 07:25:50.947458 systemd-networkd[763]: Enumeration completed Aug 13 07:25:50.947564 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:25:50.947868 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:25:50.947872 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:25:50.948549 systemd-networkd[763]: eth0: Link UP Aug 13 07:25:50.948552 systemd-networkd[763]: eth0: Gained carrier Aug 13 07:25:50.948558 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:25:50.950004 systemd[1]: Reached target network.target - Network. Aug 13 07:25:50.975699 ignition[665]: parsing config with SHA512: f515974c4999104c7a7b678bd38e0dfd2f7590544afb5357ac0c82da61e362888a3636086e70ea9cd024053feb4e5f66abb8700c614a371071cebcac9e6b9cef Aug 13 07:25:50.978744 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:25:50.980588 unknown[665]: fetched base config from "system" Aug 13 07:25:50.980596 unknown[665]: fetched user config from "qemu" Aug 13 07:25:50.982745 ignition[665]: fetch-offline: fetch-offline passed Aug 13 07:25:50.982829 ignition[665]: Ignition finished successfully Aug 13 07:25:50.984334 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:25:50.985597 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 07:25:50.994860 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:25:51.007377 ignition[770]: Ignition 2.20.0 Aug 13 07:25:51.007386 ignition[770]: Stage: kargs Aug 13 07:25:51.007530 ignition[770]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:25:51.007539 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:25:51.008373 ignition[770]: kargs: kargs passed Aug 13 07:25:51.008414 ignition[770]: Ignition finished successfully Aug 13 07:25:51.011748 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:25:51.013412 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 13 07:25:51.025576 ignition[779]: Ignition 2.20.0 Aug 13 07:25:51.025585 ignition[779]: Stage: disks Aug 13 07:25:51.025760 ignition[779]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:25:51.028336 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 07:25:51.025770 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:25:51.029251 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:25:51.026614 ignition[779]: disks: disks passed Aug 13 07:25:51.030605 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:25:51.026663 ignition[779]: Ignition finished successfully Aug 13 07:25:51.032299 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:25:51.033897 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:25:51.035036 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:25:51.046850 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:25:51.055973 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 07:25:51.059204 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:25:51.062520 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:25:51.109707 kernel: EXT4-fs (vda9): mounted filesystem 4e885a6c-f4f3-43a5-b152-e0e8bd6b099d r/w with ordered data mode. Quota mode: none. Aug 13 07:25:51.109939 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:25:51.111163 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:25:51.122806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:25:51.124498 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:25:51.125617 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:25:51.125665 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:25:51.125752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:25:51.134084 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Aug 13 07:25:51.129784 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:25:51.137713 kernel: BTRFS info (device vda6): first mount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:25:51.137733 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:25:51.137743 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:25:51.133915 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:25:51.140216 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:25:51.140903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:25:51.180756 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:25:51.184916 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:25:51.188662 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:25:51.192431 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:25:51.274915 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Aug 13 07:25:51.286825 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:25:51.289160 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 07:25:51.293715 kernel: BTRFS info (device vda6): last unmount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:25:51.309133 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:25:51.311133 ignition[913]: INFO : Ignition 2.20.0 Aug 13 07:25:51.311133 ignition[913]: INFO : Stage: mount Aug 13 07:25:51.312611 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:25:51.312611 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:25:51.312611 ignition[913]: INFO : mount: mount passed Aug 13 07:25:51.312611 ignition[913]: INFO : Ignition finished successfully Aug 13 07:25:51.313334 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:25:51.321863 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:25:51.916939 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:25:51.926899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:25:51.932905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Aug 13 07:25:51.932939 kernel: BTRFS info (device vda6): first mount of filesystem 5832a3b0-f866-4304-b935-a4d38424b8f9 Aug 13 07:25:51.932949 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 07:25:51.934011 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:25:51.935708 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:25:51.936914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:25:51.950911 ignition[944]: INFO : Ignition 2.20.0 Aug 13 07:25:51.950911 ignition[944]: INFO : Stage: files Aug 13 07:25:51.952532 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:25:51.952532 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:25:51.952532 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:25:51.955923 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:25:51.955923 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:25:51.958705 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:25:51.959980 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:25:51.959980 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:25:51.959215 unknown[944]: wrote ssh authorized keys file for user: core Aug 13 07:25:51.963552 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 07:25:51.963552 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 07:25:52.022578 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 07:25:52.075063 systemd-networkd[763]: eth0: Gained IPv6LL Aug 13 07:25:52.475875 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 07:25:52.477896 ignition[944]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:25:52.477896 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 13 07:25:52.747144 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:25:52.825040 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 07:25:52.826979 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 07:25:53.097111 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:25:53.379627 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 07:25:53.379627 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 07:25:53.383174 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 07:25:53.403985 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:25:53.407547 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 07:25:53.409064 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 07:25:53.409064 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:25:53.409064 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:25:53.409064 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:25:53.409064 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:25:53.409064 ignition[944]: INFO : files: files passed Aug 13 07:25:53.409064 ignition[944]: INFO : Ignition finished successfully Aug 13 07:25:53.410906 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:25:53.421876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:25:53.425210 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:25:53.427103 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:25:53.427186 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 07:25:53.431341 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Aug 13 07:25:53.434677 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:25:53.434677 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:25:53.437914 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:25:53.439389 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:25:53.440986 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:25:53.446908 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:25:53.467313 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:25:53.467416 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Aug 13 07:25:53.469342 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:25:53.470800 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:25:53.472320 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:25:53.473086 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:25:53.487307 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:25:53.494858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:25:53.502179 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:25:53.503455 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:25:53.505530 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:25:53.507307 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:25:53.507421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:25:53.509825 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:25:53.511748 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:25:53.513328 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:25:53.514949 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:25:53.516839 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:25:53.518717 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:25:53.520559 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:25:53.522521 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:25:53.524543 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:25:53.526377 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:25:53.527848 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:25:53.527973 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:25:53.530220 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:25:53.532096 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:25:53.533933 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:25:53.534781 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:25:53.536025 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:25:53.536134 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:25:53.538879 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:25:53.539000 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:25:53.540857 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:25:53.542412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:25:53.545751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:25:53.547258 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:25:53.549408 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:25:53.550964 systemd[1]: iscsid.socket: Deactivated successfully. 
Aug 13 07:25:53.551047 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:25:53.552604 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:25:53.552724 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:25:53.554272 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:25:53.554381 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:25:53.556198 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:25:53.556301 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:25:53.567858 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:25:53.568789 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:25:53.568954 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:25:53.573891 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:25:53.574727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:25:53.574848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:25:53.581763 ignition[999]: INFO : Ignition 2.20.0 Aug 13 07:25:53.581763 ignition[999]: INFO : Stage: umount Aug 13 07:25:53.581763 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:25:53.581763 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 07:25:53.577664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:25:53.587977 ignition[999]: INFO : umount: umount passed Aug 13 07:25:53.587977 ignition[999]: INFO : Ignition finished successfully Aug 13 07:25:53.577796 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:25:53.583786 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:25:53.583870 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:25:53.586493 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:25:53.587007 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:25:53.587100 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:25:53.589427 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:25:53.589509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:25:53.591723 systemd[1]: Stopped target network.target - Network. Aug 13 07:25:53.592846 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:25:53.592920 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:25:53.594534 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:25:53.594583 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:25:53.596441 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:25:53.596491 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:25:53.598229 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:25:53.598272 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:25:53.599847 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:25:53.599894 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:25:53.601686 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Aug 13 07:25:53.603255 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:25:53.611055 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:25:53.611173 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:25:53.614991 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 07:25:53.615191 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:25:53.615280 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:25:53.619373 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 07:25:53.619995 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:25:53.620046 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:25:53.633805 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:25:53.634849 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:25:53.634911 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:25:53.636928 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:25:53.636974 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:25:53.639956 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:25:53.640001 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:25:53.641016 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:25:53.641062 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:25:53.643781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:25:53.647273 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 07:25:53.647334 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 07:25:53.653825 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:25:53.653964 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:25:53.655887 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:25:53.655999 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:25:53.658221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:25:53.658283 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:25:53.659433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:25:53.659464 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:25:53.661488 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:25:53.661532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:25:53.664227 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:25:53.664282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:25:53.666908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:25:53.666953 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 07:25:53.682890 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Aug 13 07:25:53.683855 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:25:53.683915 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:25:53.686762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:25:53.686805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:25:53.690245 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 07:25:53.690296 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 07:25:53.690578 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:25:53.690704 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:25:53.692115 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:25:53.694488 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:25:53.704957 systemd[1]: Switching root. Aug 13 07:25:53.734517 systemd-journald[238]: Journal stopped Aug 13 07:25:54.439128 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Aug 13 07:25:54.439198 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:25:54.439210 kernel: SELinux: policy capability open_perms=1 Aug 13 07:25:54.439220 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:25:54.439233 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:25:54.439242 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:25:54.439256 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:25:54.439265 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:25:54.439296 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:25:54.439311 systemd[1]: Successfully loaded SELinux policy in 34.275ms. Aug 13 07:25:54.439328 kernel: audit: type=1403 audit(1755069953.897:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:25:54.439338 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.636ms. Aug 13 07:25:54.439349 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 07:25:54.439361 systemd[1]: Detected virtualization kvm. Aug 13 07:25:54.439384 systemd[1]: Detected architecture arm64. Aug 13 07:25:54.439396 systemd[1]: Detected first boot. Aug 13 07:25:54.439406 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:25:54.439416 zram_generator::config[1046]: No configuration found. Aug 13 07:25:54.439426 kernel: NET: Registered PF_VSOCK protocol family Aug 13 07:25:54.439436 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:25:54.439446 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 07:25:54.439459 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 07:25:54.439469 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 07:25:54.439481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 07:25:54.439492 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Aug 13 07:25:54.439502 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:25:54.439533 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:25:54.439545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:25:54.439555 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:25:54.439567 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:25:54.439578 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:25:54.439588 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:25:54.439598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:25:54.439608 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:25:54.439626 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:25:54.439638 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:25:54.439648 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:25:54.439658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:25:54.439671 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 07:25:54.439681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:25:54.439776 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 07:25:54.439790 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 07:25:54.439800 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 07:25:54.439811 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:25:54.439821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:25:54.439831 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:25:54.439843 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:25:54.439853 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:25:54.439864 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:25:54.439873 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:25:54.439883 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 07:25:54.439893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:25:54.439904 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:25:54.439914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:25:54.439924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:25:54.439935 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:25:54.439945 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:25:54.439955 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:25:54.439966 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Aug 13 07:25:54.439976 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:25:54.439986 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:25:54.439996 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:25:54.440006 systemd[1]: Reached target machines.target - Containers. Aug 13 07:25:54.440016 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:25:54.440028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:25:54.440038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:25:54.440048 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:25:54.440058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:25:54.440070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:25:54.440080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:25:54.440090 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:25:54.440100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:25:54.440116 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:25:54.440126 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 07:25:54.440137 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 07:25:54.440147 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 07:25:54.440157 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 07:25:54.440167 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 07:25:54.440177 kernel: fuse: init (API version 7.39) Aug 13 07:25:54.440187 kernel: loop: module loaded Aug 13 07:25:54.440196 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:25:54.440207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:25:54.440217 kernel: ACPI: bus type drm_connector registered Aug 13 07:25:54.440233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:25:54.440243 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:25:54.440253 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 07:25:54.440263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:25:54.440273 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 07:25:54.440304 systemd-journald[1114]: Collecting audit messages is disabled. Aug 13 07:25:54.440333 systemd[1]: Stopped verity-setup.service. Aug 13 07:25:54.440344 systemd-journald[1114]: Journal started Aug 13 07:25:54.440367 systemd-journald[1114]: Runtime Journal (/run/log/journal/5544ce6fa5da4cefbe2469fda544b557) is 5.9M, max 47.3M, 41.4M free. 
Aug 13 07:25:54.264732 systemd[1]: Queued start job for default target multi-user.target. Aug 13 07:25:54.276488 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:25:54.276887 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 07:25:54.443302 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:25:54.449264 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:25:54.450426 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:25:54.451632 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:25:54.452660 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:25:54.453837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:25:54.454975 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 07:25:54.456131 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:25:54.457538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:25:54.459084 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:25:54.459260 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:25:54.460601 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:25:54.460815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:25:54.462198 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:25:54.462354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:25:54.463678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:25:54.463853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:25:54.465346 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:25:54.465522 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:25:54.466869 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:25:54.467036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:25:54.468343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:25:54.469859 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:25:54.471332 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:25:54.472979 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 07:25:54.485419 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:25:54.498807 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:25:54.500886 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:25:54.501985 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:25:54.502025 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:25:54.503917 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 07:25:54.506092 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:25:54.508184 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 07:25:54.509365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:25:54.510753 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:25:54.514526 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:25:54.515684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:25:54.520464 systemd-journald[1114]: Time spent on flushing to /var/log/journal/5544ce6fa5da4cefbe2469fda544b557 is 16.240ms for 866 entries. Aug 13 07:25:54.520464 systemd-journald[1114]: System Journal (/var/log/journal/5544ce6fa5da4cefbe2469fda544b557) is 8M, max 195.6M, 187.6M free. Aug 13 07:25:54.542615 systemd-journald[1114]: Received client request to flush runtime journal. Aug 13 07:25:54.542673 kernel: loop0: detected capacity change from 0 to 123192 Aug 13 07:25:54.518871 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:25:54.521466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:25:54.523068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:25:54.529005 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:25:54.538828 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:25:54.541433 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:25:54.544086 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:25:54.545447 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:25:54.547189 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 07:25:54.550151 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:25:54.554058 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:25:54.558717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:25:54.562711 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:25:54.568749 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:25:54.577877 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 07:25:54.581799 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 07:25:54.582752 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:25:54.587310 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:25:54.605871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:25:54.612152 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 07:25:54.619372 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 07:25:54.626572 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Aug 13 07:25:54.626589 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. 
Aug 13 07:25:54.630683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:25:54.633862 kernel: loop2: detected capacity change from 0 to 113512 Aug 13 07:25:54.681718 kernel: loop3: detected capacity change from 0 to 123192 Aug 13 07:25:54.688735 kernel: loop4: detected capacity change from 0 to 203944 Aug 13 07:25:54.696708 kernel: loop5: detected capacity change from 0 to 113512 Aug 13 07:25:54.701578 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 07:25:54.702038 (sd-merge)[1189]: Merged extensions into '/usr'. Aug 13 07:25:54.709376 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:25:54.709392 systemd[1]: Reloading... Aug 13 07:25:54.778389 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:25:54.779720 zram_generator::config[1217]: No configuration found. Aug 13 07:25:54.857335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:25:54.907060 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:25:54.907367 systemd[1]: Reloading finished in 197 ms. Aug 13 07:25:54.933733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:25:54.935170 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:25:54.951895 systemd[1]: Starting ensure-sysext.service... Aug 13 07:25:54.953582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:25:54.965137 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:25:54.965153 systemd[1]: Reloading... Aug 13 07:25:54.970453 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:25:54.970739 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:25:54.971430 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:25:54.971725 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Aug 13 07:25:54.971786 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Aug 13 07:25:54.974522 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:25:54.974534 systemd-tmpfiles[1252]: Skipping /boot Aug 13 07:25:54.983024 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:25:54.983041 systemd-tmpfiles[1252]: Skipping /boot Aug 13 07:25:55.014727 zram_generator::config[1287]: No configuration found. Aug 13 07:25:55.087635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:25:55.136955 systemd[1]: Reloading finished in 171 ms. Aug 13 07:25:55.150150 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:25:55.165792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
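The systemd-sysext merge above (containerd-flatcar, docker-flatcar and the kubernetes image fetched by Ignition) works because each .raw image ships an extension-release file matching the host's os-release. A minimal layout sketch, with assumed field values rather than the contents of the actual published image:

    kubernetes.raw                                  (read-only image, e.g. squashfs)
    └─ usr/
       ├─ bin/...                                   payload overlaid onto /usr (kubelet etc., assumed)
       └─ lib/extension-release.d/extension-release.kubernetes
             ID=flatcar            # must equal the host os-release ID, or "_any"
             SYSEXT_LEVEL=1.0      # or a VERSION_ID= pinned to the OS version (assumed values)

systemd-sysext skips images whose release file does not match the host, which makes the "Merged extensions" line a useful health signal after an update.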
Aug 13 07:25:55.173071 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 07:25:55.175537 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:25:55.177712 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:25:55.181641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:25:55.196986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:25:55.200053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:25:55.204300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:25:55.206746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:25:55.212657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:25:55.221064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:25:55.222935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:25:55.223146 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Aug 13 07:25:55.224064 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 07:25:55.237065 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:25:55.239386 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:25:55.241130 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:25:55.242578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:25:55.245328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:25:55.245478 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:25:55.247088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:25:55.247249 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:25:55.248981 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:25:55.249132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:25:55.251108 augenrules[1357]: No rules Aug 13 07:25:55.252491 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:25:55.252892 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 07:25:55.263634 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:25:55.270431 systemd[1]: Finished ensure-sysext.service. Aug 13 07:25:55.284141 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 07:25:55.285161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:25:55.290893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:25:55.295207 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:25:55.297597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Aug 13 07:25:55.300788 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:25:55.301866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:25:55.301913 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 07:25:55.304192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:25:55.306872 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:25:55.308571 augenrules[1378]: /sbin/augenrules: No change Aug 13 07:25:55.310872 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:25:55.312962 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:25:55.323381 augenrules[1408]: No rules Aug 13 07:25:55.329754 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:25:55.331453 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:25:55.331802 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 07:25:55.333185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:25:55.333404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:25:55.334840 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:25:55.334996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:25:55.336288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:25:55.336461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:25:55.338226 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:25:55.338424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:25:55.340048 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:25:55.344739 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1349) Aug 13 07:25:55.357938 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 13 07:25:55.363213 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:25:55.363279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:25:55.367442 systemd-resolved[1320]: Positive Trust Anchors: Aug 13 07:25:55.367456 systemd-resolved[1320]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:25:55.367488 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:25:55.376811 systemd-resolved[1320]: Defaulting to hostname 'linux'. Aug 13 07:25:55.377500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:25:55.385878 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:25:55.387140 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:25:55.388336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:25:55.403224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:25:55.428239 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:25:55.429952 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:25:55.439537 systemd-networkd[1398]: lo: Link UP Aug 13 07:25:55.439559 systemd-networkd[1398]: lo: Gained carrier Aug 13 07:25:55.440622 systemd-networkd[1398]: Enumeration completed Aug 13 07:25:55.449789 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:25:55.450712 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:25:55.450719 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:25:55.451012 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 07:25:55.451451 systemd-networkd[1398]: eth0: Link UP Aug 13 07:25:55.451515 systemd-networkd[1398]: eth0: Gained carrier Aug 13 07:25:55.451565 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:25:55.456547 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:25:55.461130 systemd[1]: Reached target network.target - Network. Aug 13 07:25:55.463436 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:25:55.464749 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 07:25:55.465452 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Aug 13 07:25:55.466035 systemd-timesyncd[1399]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 07:25:55.466083 systemd-timesyncd[1399]: Initial clock synchronization to Wed 2025-08-13 07:25:55.688867 UTC. Aug 13 07:25:55.466512 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 07:25:55.468764 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
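eth0 above is matched by the stock lowest-priority unit zz-default.network rather than by any machine-specific configuration, and obtains 10.0.0.137/16 over DHCP through it. A sketch of what such a catch-all unit contains (assumed; the file Flatcar actually ships may carry additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes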
Aug 13 07:25:55.486493 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 07:25:55.490641 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:25:55.498506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:25:55.515108 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:25:55.516292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:25:55.517179 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:25:55.518037 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:25:55.518982 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:25:55.520029 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:25:55.521048 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:25:55.521992 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:25:55.522857 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:25:55.522889 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:25:55.523509 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:25:55.524937 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:25:55.527065 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:25:55.529974 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 07:25:55.531031 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 07:25:55.531951 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 07:25:55.534767 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:25:55.536069 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 07:25:55.538015 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:25:55.539294 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:25:55.540157 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:25:55.540871 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:25:55.541533 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:25:55.541565 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:25:55.542459 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:25:55.544163 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:25:55.546826 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:25:55.546846 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 07:25:55.549900 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
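The docker.socket warnings during the earlier reloads (ListenStream= references a path below legacy directory /var/run/) are harmless because systemd rewrites the path at load time, exactly as the message says; the permanent fix it asks for is a one-line change in the unit's socket stanza (sketch of that stanza only, not the full shipped unit):

    [Socket]
    ListenStream=/run/docker.sock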
Aug 13 07:25:55.551147 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:25:55.552957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:25:55.558570 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:25:55.561733 jq[1449]: false Aug 13 07:25:55.563851 extend-filesystems[1450]: Found loop3 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found loop4 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found loop5 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda1 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda2 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda3 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found usr Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda4 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda6 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda7 Aug 13 07:25:55.566797 extend-filesystems[1450]: Found vda9 Aug 13 07:25:55.566797 extend-filesystems[1450]: Checking size of /dev/vda9 Aug 13 07:25:55.563904 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:25:55.574687 dbus-daemon[1448]: [system] SELinux support is enabled Aug 13 07:25:55.566102 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:25:55.570353 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:25:55.572849 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:25:55.573290 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:25:55.575348 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:25:55.578151 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:25:55.580922 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:25:55.586728 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:25:55.589753 jq[1462]: true Aug 13 07:25:55.590195 extend-filesystems[1450]: Resized partition /dev/vda9 Aug 13 07:25:55.602537 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:25:55.602752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:25:55.603725 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:25:55.605425 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:25:55.605601 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 07:25:55.607735 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:25:55.607919 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 13 07:25:55.613751 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1351) Aug 13 07:25:55.618709 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 07:25:55.627534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:25:55.627581 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:25:55.630260 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:25:55.634791 update_engine[1459]: I20250813 07:25:55.631004 1459 main.cc:92] Flatcar Update Engine starting Aug 13 07:25:55.630290 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:25:55.637220 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:25:55.638331 tar[1473]: linux-arm64/helm Aug 13 07:25:55.640435 update_engine[1459]: I20250813 07:25:55.639117 1459 update_check_scheduler.cc:74] Next update check in 6m6s Aug 13 07:25:55.639451 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:25:55.641228 jq[1474]: true Aug 13 07:25:55.649894 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:25:55.666801 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 07:25:55.682625 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 07:25:55.683414 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:25:55.683414 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 07:25:55.683414 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 07:25:55.708996 extend-filesystems[1450]: Resized filesystem in /dev/vda9 Aug 13 07:25:55.685260 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:25:55.687402 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:25:55.689573 systemd-logind[1457]: New seat seat0. Aug 13 07:25:55.704636 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:25:55.715805 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:25:55.717781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 07:25:55.719601 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 07:25:55.723056 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:25:55.810438 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:25:55.830020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:25:55.837023 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:25:55.841648 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:25:55.841948 systemd[1]: Finished issuegen.service - Generate /run/issue. 
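For scale, the on-line grow recorded above works out to: 553,472 blocks of 4 KiB, roughly 2.1 GiB, extended in place to 1,864,699 blocks, roughly 7.1 GiB, so resize2fs lets the root filesystem on /dev/vda9 fill its enlarged partition without a remount.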
Aug 13 07:25:55.847212 containerd[1475]: time="2025-08-13T07:25:55.846455480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 07:25:55.850938 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:25:55.860819 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:25:55.870046 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:25:55.873378 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 07:25:55.875045 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:25:55.876129 containerd[1475]: time="2025-08-13T07:25:55.875623920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:25:55.876829 containerd[1475]: time="2025-08-13T07:25:55.876732840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:25:55.876829 containerd[1475]: time="2025-08-13T07:25:55.876763520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 07:25:55.876829 containerd[1475]: time="2025-08-13T07:25:55.876779080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:25:55.876973 containerd[1475]: time="2025-08-13T07:25:55.876924560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:25:55.876973 containerd[1475]: time="2025-08-13T07:25:55.876947600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877020 containerd[1475]: time="2025-08-13T07:25:55.877001320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877020 containerd[1475]: time="2025-08-13T07:25:55.877014040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877225 containerd[1475]: time="2025-08-13T07:25:55.877196320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877225 containerd[1475]: time="2025-08-13T07:25:55.877222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877273 containerd[1475]: time="2025-08-13T07:25:55.877236320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877273 containerd[1475]: time="2025-08-13T07:25:55.877245440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877332 containerd[1475]: time="2025-08-13T07:25:55.877316760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877512 containerd[1475]: time="2025-08-13T07:25:55.877494800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877656 containerd[1475]: time="2025-08-13T07:25:55.877620760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:25:55.877656 containerd[1475]: time="2025-08-13T07:25:55.877639680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:25:55.877792 containerd[1475]: time="2025-08-13T07:25:55.877772160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:25:55.877838 containerd[1475]: time="2025-08-13T07:25:55.877825680Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:25:55.880684 containerd[1475]: time="2025-08-13T07:25:55.880656560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:25:55.880762 containerd[1475]: time="2025-08-13T07:25:55.880718840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 07:25:55.880762 containerd[1475]: time="2025-08-13T07:25:55.880735960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:25:55.880762 containerd[1475]: time="2025-08-13T07:25:55.880751200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:25:55.880930 containerd[1475]: time="2025-08-13T07:25:55.880764480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:25:55.880930 containerd[1475]: time="2025-08-13T07:25:55.880912280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:25:55.881143 containerd[1475]: time="2025-08-13T07:25:55.881126400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:25:55.881241 containerd[1475]: time="2025-08-13T07:25:55.881222440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:25:55.881272 containerd[1475]: time="2025-08-13T07:25:55.881243320Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:25:55.881272 containerd[1475]: time="2025-08-13T07:25:55.881259160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:25:55.881311 containerd[1475]: time="2025-08-13T07:25:55.881272600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881311 containerd[1475]: time="2025-08-13T07:25:55.881285720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881311 containerd[1475]: time="2025-08-13T07:25:55.881297800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Aug 13 07:25:55.881368 containerd[1475]: time="2025-08-13T07:25:55.881310920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881368 containerd[1475]: time="2025-08-13T07:25:55.881324920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881368 containerd[1475]: time="2025-08-13T07:25:55.881337160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881368 containerd[1475]: time="2025-08-13T07:25:55.881348440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881368 containerd[1475]: time="2025-08-13T07:25:55.881360160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881380320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881393600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881405120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881417560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881429680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881442120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881452 containerd[1475]: time="2025-08-13T07:25:55.881453240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881471400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881486400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881500640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881512720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881524200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881535840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881550480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Aug 13 07:25:55.881571 containerd[1475]: time="2025-08-13T07:25:55.881570280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881724 containerd[1475]: time="2025-08-13T07:25:55.881584840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.881724 containerd[1475]: time="2025-08-13T07:25:55.881595560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881804880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881826960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881836800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881849040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881858400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881870240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881880160Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:25:55.882067 containerd[1475]: time="2025-08-13T07:25:55.881890960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 07:25:55.882330 containerd[1475]: time="2025-08-13T07:25:55.882220440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:25:55.882330 containerd[1475]: time="2025-08-13T07:25:55.882265920Z" level=info msg="Connect containerd service" Aug 13 07:25:55.882330 containerd[1475]: time="2025-08-13T07:25:55.882294480Z" level=info msg="using legacy CRI server" Aug 13 07:25:55.882330 containerd[1475]: time="2025-08-13T07:25:55.882301040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:25:55.882593 containerd[1475]: time="2025-08-13T07:25:55.882515280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:25:55.883552 containerd[1475]: time="2025-08-13T07:25:55.883488640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:25:55.883680 
containerd[1475]: time="2025-08-13T07:25:55.883658160Z" level=info msg="Start subscribing containerd event" Aug 13 07:25:55.883955 containerd[1475]: time="2025-08-13T07:25:55.883920200Z" level=info msg="Start recovering state" Aug 13 07:25:55.884266 containerd[1475]: time="2025-08-13T07:25:55.884088520Z" level=info msg="Start event monitor" Aug 13 07:25:55.884366 containerd[1475]: time="2025-08-13T07:25:55.884240640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:25:55.885664 containerd[1475]: time="2025-08-13T07:25:55.884332560Z" level=info msg="Start snapshots syncer" Aug 13 07:25:55.885664 containerd[1475]: time="2025-08-13T07:25:55.884407880Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:25:55.885664 containerd[1475]: time="2025-08-13T07:25:55.884416080Z" level=info msg="Start streaming server" Aug 13 07:25:55.885664 containerd[1475]: time="2025-08-13T07:25:55.884422680Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:25:55.885664 containerd[1475]: time="2025-08-13T07:25:55.884531640Z" level=info msg="containerd successfully booted in 0.040221s" Aug 13 07:25:55.884597 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:25:56.018017 tar[1473]: linux-arm64/LICENSE Aug 13 07:25:56.018017 tar[1473]: linux-arm64/README.md Aug 13 07:25:56.032016 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:25:57.003053 systemd-networkd[1398]: eth0: Gained IPv6LL Aug 13 07:25:57.005378 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:25:57.007029 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:25:57.025061 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 07:25:57.027470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:25:57.029541 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:25:57.045315 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 07:25:57.045527 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 07:25:57.048435 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:25:57.050353 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:25:57.568689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:25:57.570207 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:25:57.572026 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:25:57.575854 systemd[1]: Startup finished in 520ms (kernel) + 5.198s (initrd) + 3.716s (userspace) = 9.436s. Aug 13 07:25:58.002810 kubelet[1561]: E0813 07:25:58.002684 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:25:58.005421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:25:58.005573 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
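The kubelet failure just above is a common first-boot pattern: the unit starts before anything has written /var/lib/kubelet/config.yaml (that file is typically generated later, e.g. by kubeadm during cluster bootstrap), so the process exits and systemd retries it further down in this log. As a rough sketch of the kind of file it is looking for, assuming the k8s.io/kubelet/config/v1beta1, k8s.io/apimachinery, and sigs.k8s.io/yaml Go modules (none of the values below are taken from this host):

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Illustrative only: on this host the real /var/lib/kubelet/config.yaml
	// is written by the provisioning step, not hand-crafted like this.
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		CgroupDriver:  "systemd", // consistent with SystemdCgroup:true in the CRI runc options above
		StaticPodPath: "/etc/kubernetes/manifests",
		ClusterDomain: "cluster.local",
	}

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // the shape of the file the kubelet could not find
}
```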
Aug 13 07:25:58.005899 systemd[1]: kubelet.service: Consumed 840ms CPU time, 258.2M memory peak. Aug 13 07:26:01.777192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:26:01.778291 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:50902.service - OpenSSH per-connection server daemon (10.0.0.1:50902). Aug 13 07:26:01.833133 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 50902 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:01.834840 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:01.848010 systemd-logind[1457]: New session 1 of user core. Aug 13 07:26:01.848921 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:26:01.859918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 07:26:01.868039 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:26:01.869951 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:26:01.875791 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:26:01.877623 systemd-logind[1457]: New session c1 of user core. Aug 13 07:26:01.970782 systemd[1578]: Queued start job for default target default.target. Aug 13 07:26:01.980607 systemd[1578]: Created slice app.slice - User Application Slice. Aug 13 07:26:01.980639 systemd[1578]: Reached target paths.target - Paths. Aug 13 07:26:01.980675 systemd[1578]: Reached target timers.target - Timers. Aug 13 07:26:01.981868 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:26:01.990532 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:26:01.990592 systemd[1578]: Reached target sockets.target - Sockets. Aug 13 07:26:01.990629 systemd[1578]: Reached target basic.target - Basic System. Aug 13 07:26:01.990661 systemd[1578]: Reached target default.target - Main User Target. Aug 13 07:26:01.990687 systemd[1578]: Startup finished in 108ms. Aug 13 07:26:01.990865 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:26:01.992304 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:26:02.051401 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:50918.service - OpenSSH per-connection server daemon (10.0.0.1:50918). Aug 13 07:26:02.092751 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 50918 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:02.093973 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:02.098745 systemd-logind[1457]: New session 2 of user core. Aug 13 07:26:02.112866 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:26:02.164239 sshd[1591]: Connection closed by 10.0.0.1 port 50918 Aug 13 07:26:02.164721 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:02.179899 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:50918.service: Deactivated successfully. Aug 13 07:26:02.181331 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:26:02.183062 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:26:02.184749 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:50930.service - OpenSSH per-connection server daemon (10.0.0.1:50930). Aug 13 07:26:02.185505 systemd-logind[1457]: Removed session 2. 
Aug 13 07:26:02.226588 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 50930 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:02.227725 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:02.231609 systemd-logind[1457]: New session 3 of user core. Aug 13 07:26:02.238855 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:26:02.286632 sshd[1599]: Connection closed by 10.0.0.1 port 50930 Aug 13 07:26:02.286986 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:02.295511 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:50930.service: Deactivated successfully. Aug 13 07:26:02.296907 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 07:26:02.298800 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Aug 13 07:26:02.299873 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:50944.service - OpenSSH per-connection server daemon (10.0.0.1:50944). Aug 13 07:26:02.302099 systemd-logind[1457]: Removed session 3. Aug 13 07:26:02.341187 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 50944 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:02.342226 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:02.346293 systemd-logind[1457]: New session 4 of user core. Aug 13 07:26:02.358842 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:26:02.409449 sshd[1607]: Connection closed by 10.0.0.1 port 50944 Aug 13 07:26:02.409747 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:02.418599 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:50944.service: Deactivated successfully. Aug 13 07:26:02.419996 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 07:26:02.422500 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Aug 13 07:26:02.422928 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:56090.service - OpenSSH per-connection server daemon (10.0.0.1:56090). Aug 13 07:26:02.424043 systemd-logind[1457]: Removed session 4. Aug 13 07:26:02.464943 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:02.466647 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:02.470377 systemd-logind[1457]: New session 5 of user core. Aug 13 07:26:02.479866 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:26:02.538185 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:26:02.538451 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:26:02.554018 sudo[1616]: pam_unix(sudo:session): session closed for user root Aug 13 07:26:02.557745 sshd[1615]: Connection closed by 10.0.0.1 port 56090 Aug 13 07:26:02.558396 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:02.577356 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:56090.service: Deactivated successfully. Aug 13 07:26:02.578954 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:26:02.579821 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:26:02.592037 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:56102.service - OpenSSH per-connection server daemon (10.0.0.1:56102). 
Aug 13 07:26:02.593282 systemd-logind[1457]: Removed session 5. Aug 13 07:26:02.631897 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 56102 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:02.633161 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:02.637268 systemd-logind[1457]: New session 6 of user core. Aug 13 07:26:02.651857 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:26:02.703520 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:26:02.703877 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:26:02.706900 sudo[1626]: pam_unix(sudo:session): session closed for user root Aug 13 07:26:02.711363 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 07:26:02.711626 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:26:02.732047 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 07:26:02.754541 augenrules[1648]: No rules Aug 13 07:26:02.755688 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:26:02.755951 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 07:26:02.757164 sudo[1625]: pam_unix(sudo:session): session closed for user root Aug 13 07:26:02.758309 sshd[1624]: Connection closed by 10.0.0.1 port 56102 Aug 13 07:26:02.758738 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:02.773668 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:56102.service: Deactivated successfully. Aug 13 07:26:02.775280 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:26:02.776617 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:26:02.777880 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:56104.service - OpenSSH per-connection server daemon (10.0.0.1:56104). Aug 13 07:26:02.780050 systemd-logind[1457]: Removed session 6. Aug 13 07:26:02.820225 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 56104 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:02.820486 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:02.825017 systemd-logind[1457]: New session 7 of user core. Aug 13 07:26:02.841839 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:26:02.892179 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:26:02.892441 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:26:03.252961 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:26:03.253036 (dockerd)[1680]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:26:03.510457 dockerd[1680]: time="2025-08-13T07:26:03.510336653Z" level=info msg="Starting up" Aug 13 07:26:03.666840 dockerd[1680]: time="2025-08-13T07:26:03.666788839Z" level=info msg="Loading containers: start." 
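dockerd is being started here and, a few records further down, reports its API on /run/docker.sock. A minimal sketch of talking to that API with the Docker Go SDK, assuming the github.com/docker/docker/client module (this client is not part of the boot sequence itself):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// With DOCKER_HOST unset, FromEnv falls back to the default local socket,
	// the same /run/docker.sock (via /var/run) the daemon announces below.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker", v.Version, "api", v.APIVersion)
}
```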
Aug 13 07:26:03.812754 kernel: Initializing XFRM netlink socket Aug 13 07:26:03.874115 systemd-networkd[1398]: docker0: Link UP Aug 13 07:26:03.903897 dockerd[1680]: time="2025-08-13T07:26:03.903848146Z" level=info msg="Loading containers: done." Aug 13 07:26:03.917254 dockerd[1680]: time="2025-08-13T07:26:03.917211415Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:26:03.917386 dockerd[1680]: time="2025-08-13T07:26:03.917294191Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 07:26:03.917503 dockerd[1680]: time="2025-08-13T07:26:03.917483520Z" level=info msg="Daemon has completed initialization" Aug 13 07:26:03.943938 dockerd[1680]: time="2025-08-13T07:26:03.943868603Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:26:03.944026 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:26:04.621741 containerd[1475]: time="2025-08-13T07:26:04.621671757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 07:26:05.267498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048306409.mount: Deactivated successfully. Aug 13 07:26:06.522156 containerd[1475]: time="2025-08-13T07:26:06.522096171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:06.522754 containerd[1475]: time="2025-08-13T07:26:06.522719427Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815" Aug 13 07:26:06.523384 containerd[1475]: time="2025-08-13T07:26:06.523347154Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:06.526370 containerd[1475]: time="2025-08-13T07:26:06.526325949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:06.527610 containerd[1475]: time="2025-08-13T07:26:06.527498240Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.905771137s" Aug 13 07:26:06.527610 containerd[1475]: time="2025-08-13T07:26:06.527532892Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 07:26:06.531010 containerd[1475]: time="2025-08-13T07:26:06.530920340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 07:26:07.711434 containerd[1475]: time="2025-08-13T07:26:07.711386863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:07.711916 containerd[1475]: time="2025-08-13T07:26:07.711872316Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285" Aug 13 07:26:07.712688 containerd[1475]: time="2025-08-13T07:26:07.712659010Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:07.715624 containerd[1475]: time="2025-08-13T07:26:07.715580588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:07.717755 containerd[1475]: time="2025-08-13T07:26:07.717729040Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.186774417s" Aug 13 07:26:07.717812 containerd[1475]: time="2025-08-13T07:26:07.717761285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 07:26:07.718367 containerd[1475]: time="2025-08-13T07:26:07.718202980Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 07:26:08.256083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:26:08.264923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:08.364445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:08.367670 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:26:08.406965 kubelet[1945]: E0813 07:26:08.406920 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:26:08.409566 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:26:08.409715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:26:08.410092 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.3M memory peak. 
Aug 13 07:26:09.017726 containerd[1475]: time="2025-08-13T07:26:09.017649555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:09.018607 containerd[1475]: time="2025-08-13T07:26:09.018326945Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091" Aug 13 07:26:09.019505 containerd[1475]: time="2025-08-13T07:26:09.019469930Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:09.022586 containerd[1475]: time="2025-08-13T07:26:09.022552071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:09.023779 containerd[1475]: time="2025-08-13T07:26:09.023692604Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.305455743s" Aug 13 07:26:09.023779 containerd[1475]: time="2025-08-13T07:26:09.023734449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 07:26:09.024486 containerd[1475]: time="2025-08-13T07:26:09.024414049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 07:26:10.005892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200297925.mount: Deactivated successfully. 
Aug 13 07:26:10.229469 containerd[1475]: time="2025-08-13T07:26:10.229415927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:10.229978 containerd[1475]: time="2025-08-13T07:26:10.229928680Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995" Aug 13 07:26:10.230639 containerd[1475]: time="2025-08-13T07:26:10.230605857Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:10.232532 containerd[1475]: time="2025-08-13T07:26:10.232499358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:10.233364 containerd[1475]: time="2025-08-13T07:26:10.233327782Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.208880295s" Aug 13 07:26:10.233409 containerd[1475]: time="2025-08-13T07:26:10.233363134Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 07:26:10.233846 containerd[1475]: time="2025-08-13T07:26:10.233805386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:26:10.772756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596777805.mount: Deactivated successfully. 
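The PullImage/ImageCreate records above and below are emitted by containerd's CRI plugin as it populates the k8s.io namespace. For comparison, an equivalent pull can be driven directly over the same socket with the containerd Go client; a sketch assuming the github.com/containerd/containerd module (this is not what issues the pulls in this log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same endpoint as ContainerdEndpoint:/run/containerd/containerd.sock above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```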
Aug 13 07:26:11.664661 containerd[1475]: time="2025-08-13T07:26:11.664532296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:11.665534 containerd[1475]: time="2025-08-13T07:26:11.665280659Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 13 07:26:11.666286 containerd[1475]: time="2025-08-13T07:26:11.666233870Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:11.670034 containerd[1475]: time="2025-08-13T07:26:11.669992347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:11.670781 containerd[1475]: time="2025-08-13T07:26:11.670746853Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.436907446s" Aug 13 07:26:11.670836 containerd[1475]: time="2025-08-13T07:26:11.670787204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 07:26:11.671497 containerd[1475]: time="2025-08-13T07:26:11.671464380Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:26:12.139184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704842568.mount: Deactivated successfully. 
Aug 13 07:26:12.142654 containerd[1475]: time="2025-08-13T07:26:12.142604106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:12.143355 containerd[1475]: time="2025-08-13T07:26:12.143312429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 13 07:26:12.143992 containerd[1475]: time="2025-08-13T07:26:12.143953330Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:12.146766 containerd[1475]: time="2025-08-13T07:26:12.146734690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:12.148309 containerd[1475]: time="2025-08-13T07:26:12.148271729Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 476.774951ms" Aug 13 07:26:12.148309 containerd[1475]: time="2025-08-13T07:26:12.148305279Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 07:26:12.148800 containerd[1475]: time="2025-08-13T07:26:12.148766150Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:26:12.638521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819303795.mount: Deactivated successfully. Aug 13 07:26:14.599193 containerd[1475]: time="2025-08-13T07:26:14.599150032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:14.600432 containerd[1475]: time="2025-08-13T07:26:14.600361115Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Aug 13 07:26:14.602061 containerd[1475]: time="2025-08-13T07:26:14.601696750Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:14.605404 containerd[1475]: time="2025-08-13T07:26:14.605372264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:14.606808 containerd[1475]: time="2025-08-13T07:26:14.606760833Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.457958812s" Aug 13 07:26:14.606808 containerd[1475]: time="2025-08-13T07:26:14.606795319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 07:26:18.660127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
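At this point all of the control-plane images referenced above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) have been pulled. They can be listed over the same CRI gRPC surface the kubelet uses; a sketch assuming the k8s.io/cri-api and google.golang.org/grpc modules:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same endpoint the kubelet is configured to use
	// (ContainerdEndpoint:/run/containerd/containerd.sock in the CRI config above).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.ListImages(ctx, &runtimeapi.ListImagesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.Id, img.RepoTags)
	}
}
```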
Aug 13 07:26:18.669952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:18.801281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:18.804496 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:26:18.838549 kubelet[2107]: E0813 07:26:18.838509 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:26:18.840709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:26:18.840846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:26:18.842882 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.5M memory peak. Aug 13 07:26:20.053296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:20.053438 systemd[1]: kubelet.service: Consumed 123ms CPU time, 109.5M memory peak. Aug 13 07:26:20.065004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:20.085673 systemd[1]: Reload requested from client PID 2122 ('systemctl') (unit session-7.scope)... Aug 13 07:26:20.085702 systemd[1]: Reloading... Aug 13 07:26:20.162733 zram_generator::config[2169]: No configuration found. Aug 13 07:26:20.279629 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:26:20.353041 systemd[1]: Reloading finished in 267 ms. Aug 13 07:26:20.392268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:20.395483 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:20.396186 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:26:20.396405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:20.396450 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.1M memory peak. Aug 13 07:26:20.409182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:20.506895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:20.511481 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:26:20.545903 kubelet[2214]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:26:20.545903 kubelet[2214]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:26:20.545903 kubelet[2214]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 07:26:20.546273 kubelet[2214]: I0813 07:26:20.545970 2214 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:26:21.497584 kubelet[2214]: I0813 07:26:21.497534 2214 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:26:21.497584 kubelet[2214]: I0813 07:26:21.497572 2214 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:26:21.497876 kubelet[2214]: I0813 07:26:21.497848 2214 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:26:21.539826 kubelet[2214]: E0813 07:26:21.539780 2214 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:21.540518 kubelet[2214]: I0813 07:26:21.540486 2214 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:26:21.546078 kubelet[2214]: E0813 07:26:21.546034 2214 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:26:21.546078 kubelet[2214]: I0813 07:26:21.546075 2214 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:26:21.549571 kubelet[2214]: I0813 07:26:21.549539 2214 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:26:21.550332 kubelet[2214]: I0813 07:26:21.550305 2214 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:26:21.550475 kubelet[2214]: I0813 07:26:21.550434 2214 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:26:21.550641 kubelet[2214]: I0813 07:26:21.550466 2214 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:26:21.550734 kubelet[2214]: I0813 07:26:21.550643 2214 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:26:21.550734 kubelet[2214]: I0813 07:26:21.550652 2214 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:26:21.550919 kubelet[2214]: I0813 07:26:21.550896 2214 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:26:21.554721 kubelet[2214]: I0813 07:26:21.554697 2214 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:26:21.554761 kubelet[2214]: I0813 07:26:21.554730 2214 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:26:21.554761 kubelet[2214]: I0813 07:26:21.554755 2214 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:26:21.554852 kubelet[2214]: I0813 07:26:21.554834 2214 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:26:21.555967 kubelet[2214]: W0813 07:26:21.555915 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:21.556005 kubelet[2214]: E0813 07:26:21.555980 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:21.556723 kubelet[2214]: W0813 07:26:21.556300 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:21.556723 kubelet[2214]: E0813 07:26:21.556343 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:21.559017 kubelet[2214]: I0813 07:26:21.558992 2214 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 07:26:21.560750 kubelet[2214]: I0813 07:26:21.560392 2214 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:26:21.560750 kubelet[2214]: W0813 07:26:21.560572 2214 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:26:21.561729 kubelet[2214]: I0813 07:26:21.561665 2214 server.go:1274] "Started kubelet" Aug 13 07:26:21.562738 kubelet[2214]: I0813 07:26:21.562603 2214 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:26:21.565203 kubelet[2214]: I0813 07:26:21.565148 2214 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:26:21.565560 kubelet[2214]: I0813 07:26:21.565536 2214 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:26:21.565720 kubelet[2214]: I0813 07:26:21.565689 2214 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:26:21.565948 kubelet[2214]: I0813 07:26:21.565929 2214 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:26:21.566752 kubelet[2214]: I0813 07:26:21.566661 2214 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:26:21.566752 kubelet[2214]: I0813 07:26:21.566752 2214 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:26:21.566915 kubelet[2214]: I0813 07:26:21.566863 2214 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:26:21.567258 kubelet[2214]: I0813 07:26:21.567235 2214 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:26:21.568685 kubelet[2214]: I0813 07:26:21.567601 2214 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:26:21.568685 kubelet[2214]: E0813 07:26:21.567609 2214 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:26:21.568685 kubelet[2214]: I0813 07:26:21.567688 2214 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:26:21.568685 kubelet[2214]: E0813 07:26:21.568217 2214 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Aug 13 07:26:21.568685 kubelet[2214]: W0813 07:26:21.568296 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:21.568685 kubelet[2214]: E0813 07:26:21.568335 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:21.570748 kubelet[2214]: I0813 07:26:21.569654 2214 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:26:21.570748 kubelet[2214]: E0813 07:26:21.569578 2214 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b42d976eed54e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 07:26:21.56164027 +0000 UTC m=+1.047386330,LastTimestamp:2025-08-13 07:26:21.56164027 +0000 UTC m=+1.047386330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 07:26:21.570918 kubelet[2214]: E0813 07:26:21.570835 2214 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:26:21.582685 kubelet[2214]: I0813 07:26:21.582658 2214 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:26:21.582685 kubelet[2214]: I0813 07:26:21.582678 2214 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:26:21.582819 kubelet[2214]: I0813 07:26:21.582707 2214 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:26:21.583824 kubelet[2214]: I0813 07:26:21.583787 2214 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:26:21.584749 kubelet[2214]: I0813 07:26:21.584729 2214 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:26:21.584749 kubelet[2214]: I0813 07:26:21.584759 2214 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:26:21.585016 kubelet[2214]: I0813 07:26:21.584778 2214 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:26:21.585016 kubelet[2214]: E0813 07:26:21.584821 2214 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:26:21.668627 kubelet[2214]: E0813 07:26:21.668582 2214 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:26:21.672265 kubelet[2214]: I0813 07:26:21.672236 2214 policy_none.go:49] "None policy: Start" Aug 13 07:26:21.672746 kubelet[2214]: W0813 07:26:21.672630 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:21.672746 kubelet[2214]: E0813 07:26:21.672712 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:21.673620 kubelet[2214]: I0813 07:26:21.673257 2214 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:26:21.673620 kubelet[2214]: I0813 07:26:21.673297 2214 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:26:21.679620 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 07:26:21.685295 kubelet[2214]: E0813 07:26:21.685257 2214 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:26:21.689559 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 07:26:21.692318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 07:26:21.705586 kubelet[2214]: I0813 07:26:21.705517 2214 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:26:21.706224 kubelet[2214]: I0813 07:26:21.705762 2214 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:26:21.706224 kubelet[2214]: I0813 07:26:21.705783 2214 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:26:21.706224 kubelet[2214]: I0813 07:26:21.706149 2214 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:26:21.707723 kubelet[2214]: E0813 07:26:21.707679 2214 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 07:26:21.769506 kubelet[2214]: E0813 07:26:21.769406 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Aug 13 07:26:21.808615 kubelet[2214]: I0813 07:26:21.808580 2214 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:26:21.809267 kubelet[2214]: E0813 07:26:21.809239 2214 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Aug 13 07:26:21.893103 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 13 07:26:21.917485 systemd[1]: Created slice kubepods-burstable-pod0ea919629311fabb19135713f7ffc308.slice - libcontainer container kubepods-burstable-pod0ea919629311fabb19135713f7ffc308.slice. Aug 13 07:26:21.930046 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. 
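The kubepods-burstable-pod… slices just created correspond to the static pods mirrored from the path registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"). A sketch that decodes such manifests with the Kubernetes Go types, assuming the k8s.io/api and sigs.k8s.io/yaml modules:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// The static pod path the kubelet registered above.
	dir := "/etc/kubernetes/manifests"

	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			log.Fatal(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			log.Fatal(err)
		}
		for _, c := range pod.Spec.Containers {
			fmt.Printf("%s -> %s/%s image=%s\n", e.Name(), pod.Namespace, pod.Name, c.Image)
		}
	}
}
```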
Aug 13 07:26:21.969741 kubelet[2214]: I0813 07:26:21.969571 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:21.969741 kubelet[2214]: I0813 07:26:21.969605 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:21.969741 kubelet[2214]: I0813 07:26:21.969623 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:21.969741 kubelet[2214]: I0813 07:26:21.969643 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:26:21.969741 kubelet[2214]: I0813 07:26:21.969658 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:21.969953 kubelet[2214]: I0813 07:26:21.969675 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:21.969953 kubelet[2214]: I0813 07:26:21.969703 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:21.969953 kubelet[2214]: I0813 07:26:21.969719 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:21.969953 kubelet[2214]: I0813 07:26:21.969737 2214 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " 
pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:22.010813 kubelet[2214]: I0813 07:26:22.010795 2214 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:26:22.011217 kubelet[2214]: E0813 07:26:22.011175 2214 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Aug 13 07:26:22.170580 kubelet[2214]: E0813 07:26:22.170462 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Aug 13 07:26:22.215838 kubelet[2214]: E0813 07:26:22.215797 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:22.216550 containerd[1475]: time="2025-08-13T07:26:22.216458339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:22.228669 kubelet[2214]: E0813 07:26:22.228629 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:22.229134 containerd[1475]: time="2025-08-13T07:26:22.229102511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ea919629311fabb19135713f7ffc308,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:22.232429 kubelet[2214]: E0813 07:26:22.232364 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:22.232922 containerd[1475]: time="2025-08-13T07:26:22.232885262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:22.397370 kubelet[2214]: W0813 07:26:22.397306 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:22.397370 kubelet[2214]: E0813 07:26:22.397373 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:22.412819 kubelet[2214]: I0813 07:26:22.412771 2214 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:26:22.413150 kubelet[2214]: E0813 07:26:22.413115 2214 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Aug 13 07:26:22.527143 kubelet[2214]: W0813 07:26:22.527037 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.137:6443: connect: connection refused Aug 13 07:26:22.527143 kubelet[2214]: E0813 07:26:22.527087 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:22.684265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612582945.mount: Deactivated successfully. Aug 13 07:26:22.688520 containerd[1475]: time="2025-08-13T07:26:22.688475834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:26:22.690021 containerd[1475]: time="2025-08-13T07:26:22.689963921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 13 07:26:22.690671 containerd[1475]: time="2025-08-13T07:26:22.690626934Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:26:22.692774 containerd[1475]: time="2025-08-13T07:26:22.692742323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:26:22.695821 containerd[1475]: time="2025-08-13T07:26:22.695766939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:26:22.697121 containerd[1475]: time="2025-08-13T07:26:22.697081195Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:26:22.698087 containerd[1475]: time="2025-08-13T07:26:22.698045669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:26:22.698155 containerd[1475]: time="2025-08-13T07:26:22.698116290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:26:22.699127 containerd[1475]: time="2025-08-13T07:26:22.699076160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.536552ms" Aug 13 07:26:22.702601 containerd[1475]: time="2025-08-13T07:26:22.702489952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.534309ms" Aug 13 07:26:22.703124 containerd[1475]: time="2025-08-13T07:26:22.703093314Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.916738ms" Aug 13 07:26:22.812445 containerd[1475]: time="2025-08-13T07:26:22.812243692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:22.812634 containerd[1475]: time="2025-08-13T07:26:22.812334731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:22.812634 containerd[1475]: time="2025-08-13T07:26:22.812410156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:22.812634 containerd[1475]: time="2025-08-13T07:26:22.812510482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:22.813520 containerd[1475]: time="2025-08-13T07:26:22.813423632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:22.813520 containerd[1475]: time="2025-08-13T07:26:22.813478039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:22.813645 containerd[1475]: time="2025-08-13T07:26:22.813494814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:22.813645 containerd[1475]: time="2025-08-13T07:26:22.813564914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:22.819231 containerd[1475]: time="2025-08-13T07:26:22.819065550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:22.819231 containerd[1475]: time="2025-08-13T07:26:22.819118476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:22.819231 containerd[1475]: time="2025-08-13T07:26:22.819129486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:22.819509 containerd[1475]: time="2025-08-13T07:26:22.819213158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:22.830925 systemd[1]: Started cri-containerd-2bb0695ebf5c9e381bd011c277d7e354caf33f05368e91465b9bfecd570ffd0a.scope - libcontainer container 2bb0695ebf5c9e381bd011c277d7e354caf33f05368e91465b9bfecd570ffd0a. Aug 13 07:26:22.835040 systemd[1]: Started cri-containerd-4dd217c3724ae32107bd81bf724fac336d47dffdc72c82f213771980ec9792a2.scope - libcontainer container 4dd217c3724ae32107bd81bf724fac336d47dffdc72c82f213771980ec9792a2. Aug 13 07:26:22.836058 systemd[1]: Started cri-containerd-57f96c2ab32f77105e0de8306ce7a2b9cce92fb0b979291966ef8d4ce5f574c5.scope - libcontainer container 57f96c2ab32f77105e0de8306ce7a2b9cce92fb0b979291966ef8d4ce5f574c5. 
Aug 13 07:26:22.836918 kubelet[2214]: W0813 07:26:22.836735 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:22.837152 kubelet[2214]: E0813 07:26:22.837117 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:22.843971 kubelet[2214]: W0813 07:26:22.843853 2214 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Aug 13 07:26:22.843971 kubelet[2214]: E0813 07:26:22.843928 2214 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:26:22.870318 containerd[1475]: time="2025-08-13T07:26:22.870232433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ea919629311fabb19135713f7ffc308,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb0695ebf5c9e381bd011c277d7e354caf33f05368e91465b9bfecd570ffd0a\"" Aug 13 07:26:22.871496 kubelet[2214]: E0813 07:26:22.871450 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:22.873786 containerd[1475]: time="2025-08-13T07:26:22.873752957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dd217c3724ae32107bd81bf724fac336d47dffdc72c82f213771980ec9792a2\"" Aug 13 07:26:22.874844 kubelet[2214]: E0813 07:26:22.874774 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:22.875233 containerd[1475]: time="2025-08-13T07:26:22.875055563Z" level=info msg="CreateContainer within sandbox \"2bb0695ebf5c9e381bd011c277d7e354caf33f05368e91465b9bfecd570ffd0a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:26:22.876044 containerd[1475]: time="2025-08-13T07:26:22.876004864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"57f96c2ab32f77105e0de8306ce7a2b9cce92fb0b979291966ef8d4ce5f574c5\"" Aug 13 07:26:22.876528 containerd[1475]: time="2025-08-13T07:26:22.876505577Z" level=info msg="CreateContainer within sandbox \"4dd217c3724ae32107bd81bf724fac336d47dffdc72c82f213771980ec9792a2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:26:22.876859 kubelet[2214]: E0813 07:26:22.876832 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:22.878607 containerd[1475]: time="2025-08-13T07:26:22.878582693Z" level=info msg="CreateContainer within sandbox \"57f96c2ab32f77105e0de8306ce7a2b9cce92fb0b979291966ef8d4ce5f574c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:26:22.890140 containerd[1475]: time="2025-08-13T07:26:22.890088521Z" level=info msg="CreateContainer within sandbox \"2bb0695ebf5c9e381bd011c277d7e354caf33f05368e91465b9bfecd570ffd0a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60a4f564ce9d08dbc27a9b9a3c03663e600a55275b97ac6df3afaf2014800df5\"" Aug 13 07:26:22.890783 containerd[1475]: time="2025-08-13T07:26:22.890760943Z" level=info msg="StartContainer for \"60a4f564ce9d08dbc27a9b9a3c03663e600a55275b97ac6df3afaf2014800df5\"" Aug 13 07:26:22.898125 containerd[1475]: time="2025-08-13T07:26:22.898033591Z" level=info msg="CreateContainer within sandbox \"4dd217c3724ae32107bd81bf724fac336d47dffdc72c82f213771980ec9792a2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0fdbdb339a52540e05d78166ac00adbc302453395cb0be858d77f54d2b218841\"" Aug 13 07:26:22.898723 containerd[1475]: time="2025-08-13T07:26:22.898582706Z" level=info msg="StartContainer for \"0fdbdb339a52540e05d78166ac00adbc302453395cb0be858d77f54d2b218841\"" Aug 13 07:26:22.901011 containerd[1475]: time="2025-08-13T07:26:22.900974174Z" level=info msg="CreateContainer within sandbox \"57f96c2ab32f77105e0de8306ce7a2b9cce92fb0b979291966ef8d4ce5f574c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bcdb96a9f0308e34f91e4c3533c2bc078627b2f808106a8814b1f9ad827dad3c\"" Aug 13 07:26:22.902649 containerd[1475]: time="2025-08-13T07:26:22.902562547Z" level=info msg="StartContainer for \"bcdb96a9f0308e34f91e4c3533c2bc078627b2f808106a8814b1f9ad827dad3c\"" Aug 13 07:26:22.916900 systemd[1]: Started cri-containerd-60a4f564ce9d08dbc27a9b9a3c03663e600a55275b97ac6df3afaf2014800df5.scope - libcontainer container 60a4f564ce9d08dbc27a9b9a3c03663e600a55275b97ac6df3afaf2014800df5. Aug 13 07:26:22.925862 systemd[1]: Started cri-containerd-0fdbdb339a52540e05d78166ac00adbc302453395cb0be858d77f54d2b218841.scope - libcontainer container 0fdbdb339a52540e05d78166ac00adbc302453395cb0be858d77f54d2b218841. Aug 13 07:26:22.928497 systemd[1]: Started cri-containerd-bcdb96a9f0308e34f91e4c3533c2bc078627b2f808106a8814b1f9ad827dad3c.scope - libcontainer container bcdb96a9f0308e34f91e4c3533c2bc078627b2f808106a8814b1f9ad827dad3c. 
Aug 13 07:26:22.955354 containerd[1475]: time="2025-08-13T07:26:22.955281251Z" level=info msg="StartContainer for \"60a4f564ce9d08dbc27a9b9a3c03663e600a55275b97ac6df3afaf2014800df5\" returns successfully" Aug 13 07:26:22.975059 kubelet[2214]: E0813 07:26:22.974995 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" Aug 13 07:26:22.976008 containerd[1475]: time="2025-08-13T07:26:22.975971541Z" level=info msg="StartContainer for \"bcdb96a9f0308e34f91e4c3533c2bc078627b2f808106a8814b1f9ad827dad3c\" returns successfully" Aug 13 07:26:22.976245 containerd[1475]: time="2025-08-13T07:26:22.976134402Z" level=info msg="StartContainer for \"0fdbdb339a52540e05d78166ac00adbc302453395cb0be858d77f54d2b218841\" returns successfully" Aug 13 07:26:23.215668 kubelet[2214]: I0813 07:26:23.215558 2214 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:26:23.594875 kubelet[2214]: E0813 07:26:23.594688 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:23.596572 kubelet[2214]: E0813 07:26:23.596427 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:23.598860 kubelet[2214]: E0813 07:26:23.598805 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:24.601435 kubelet[2214]: E0813 07:26:24.601405 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:24.786734 kubelet[2214]: E0813 07:26:24.786697 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:24.927271 kubelet[2214]: E0813 07:26:24.927170 2214 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 07:26:25.003761 kubelet[2214]: I0813 07:26:24.999672 2214 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:26:25.334240 kubelet[2214]: E0813 07:26:25.333684 2214 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 07:26:25.334240 kubelet[2214]: E0813 07:26:25.333872 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:25.556910 kubelet[2214]: I0813 07:26:25.556867 2214 apiserver.go:52] "Watching apiserver" Aug 13 07:26:25.567663 kubelet[2214]: I0813 07:26:25.567632 2214 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:26:25.603902 kubelet[2214]: E0813 07:26:25.603521 2214 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:25.603902 kubelet[2214]: E0813 07:26:25.603762 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:26.970353 systemd[1]: Reload requested from client PID 2496 ('systemctl') (unit session-7.scope)... Aug 13 07:26:26.970367 systemd[1]: Reloading... Aug 13 07:26:27.047718 zram_generator::config[2540]: No configuration found. Aug 13 07:26:27.136012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:26:27.223216 systemd[1]: Reloading finished in 252 ms. Aug 13 07:26:27.246835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:27.261888 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:26:27.262170 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:27.262276 systemd[1]: kubelet.service: Consumed 1.468s CPU time, 129.5M memory peak. Aug 13 07:26:27.270129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:26:27.379668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:26:27.383847 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:26:27.418542 kubelet[2582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:26:27.418542 kubelet[2582]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:26:27.418542 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:26:27.418915 kubelet[2582]: I0813 07:26:27.418591 2582 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:26:27.425571 kubelet[2582]: I0813 07:26:27.425535 2582 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:26:27.425571 kubelet[2582]: I0813 07:26:27.425563 2582 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:26:27.425844 kubelet[2582]: I0813 07:26:27.425830 2582 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:26:27.427395 kubelet[2582]: I0813 07:26:27.427370 2582 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 07:26:27.432194 kubelet[2582]: I0813 07:26:27.432156 2582 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:26:27.435587 kubelet[2582]: E0813 07:26:27.435553 2582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:26:27.435587 kubelet[2582]: I0813 07:26:27.435585 2582 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:26:27.438027 kubelet[2582]: I0813 07:26:27.438003 2582 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 07:26:27.438192 kubelet[2582]: I0813 07:26:27.438177 2582 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:26:27.438321 kubelet[2582]: I0813 07:26:27.438289 2582 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:26:27.438509 kubelet[2582]: I0813 07:26:27.438323 2582 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 07:26:27.438594 kubelet[2582]: I0813 07:26:27.438517 2582 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:26:27.438594 kubelet[2582]: I0813 07:26:27.438527 2582 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:26:27.438594 kubelet[2582]: I0813 07:26:27.438562 2582 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:26:27.438698 kubelet[2582]: I0813 07:26:27.438676 2582 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:26:27.439341 kubelet[2582]: I0813 07:26:27.439317 2582 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:26:27.439385 kubelet[2582]: I0813 07:26:27.439360 
2582 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:26:27.439385 kubelet[2582]: I0813 07:26:27.439383 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:26:27.444100 kubelet[2582]: I0813 07:26:27.444076 2582 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 07:26:27.444569 kubelet[2582]: I0813 07:26:27.444537 2582 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:26:27.444982 kubelet[2582]: I0813 07:26:27.444970 2582 server.go:1274] "Started kubelet" Aug 13 07:26:27.446090 kubelet[2582]: I0813 07:26:27.445761 2582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:26:27.446090 kubelet[2582]: I0813 07:26:27.445959 2582 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:26:27.446090 kubelet[2582]: I0813 07:26:27.446000 2582 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:26:27.449699 kubelet[2582]: I0813 07:26:27.447508 2582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:26:27.450024 kubelet[2582]: I0813 07:26:27.450002 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:26:27.450067 kubelet[2582]: I0813 07:26:27.450037 2582 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:26:27.450981 kubelet[2582]: E0813 07:26:27.450956 2582 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 07:26:27.452773 kubelet[2582]: I0813 07:26:27.450012 2582 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:26:27.452773 kubelet[2582]: I0813 07:26:27.451921 2582 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:26:27.452773 kubelet[2582]: I0813 07:26:27.452181 2582 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:26:27.459334 kubelet[2582]: I0813 07:26:27.459298 2582 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:26:27.461711 kubelet[2582]: I0813 07:26:27.461630 2582 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:26:27.470534 kubelet[2582]: E0813 07:26:27.470496 2582 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:26:27.472414 kubelet[2582]: I0813 07:26:27.472275 2582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:26:27.472727 kubelet[2582]: I0813 07:26:27.472709 2582 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:26:27.474124 kubelet[2582]: I0813 07:26:27.473969 2582 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:26:27.474124 kubelet[2582]: I0813 07:26:27.473993 2582 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:26:27.474124 kubelet[2582]: I0813 07:26:27.474014 2582 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:26:27.474124 kubelet[2582]: E0813 07:26:27.474059 2582 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:26:27.503059 kubelet[2582]: I0813 07:26:27.503032 2582 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:26:27.503059 kubelet[2582]: I0813 07:26:27.503050 2582 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:26:27.503234 kubelet[2582]: I0813 07:26:27.503082 2582 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:26:27.503259 kubelet[2582]: I0813 07:26:27.503245 2582 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:26:27.503281 kubelet[2582]: I0813 07:26:27.503256 2582 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:26:27.503281 kubelet[2582]: I0813 07:26:27.503275 2582 policy_none.go:49] "None policy: Start" Aug 13 07:26:27.503888 kubelet[2582]: I0813 07:26:27.503871 2582 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:26:27.503946 kubelet[2582]: I0813 07:26:27.503897 2582 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:26:27.504061 kubelet[2582]: I0813 07:26:27.504045 2582 state_mem.go:75] "Updated machine memory state" Aug 13 07:26:27.510974 kubelet[2582]: I0813 07:26:27.510933 2582 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:26:27.511169 kubelet[2582]: I0813 07:26:27.511134 2582 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:26:27.511202 kubelet[2582]: I0813 07:26:27.511154 2582 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:26:27.511392 kubelet[2582]: I0813 07:26:27.511349 2582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:26:27.615729 kubelet[2582]: I0813 07:26:27.615688 2582 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 07:26:27.624326 kubelet[2582]: I0813 07:26:27.624294 2582 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 07:26:27.625063 kubelet[2582]: I0813 07:26:27.624541 2582 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 07:26:27.753247 kubelet[2582]: I0813 07:26:27.753123 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:27.753247 kubelet[2582]: I0813 07:26:27.753166 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:27.753247 kubelet[2582]: I0813 07:26:27.753187 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:27.753247 kubelet[2582]: I0813 07:26:27.753218 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:27.753247 kubelet[2582]: I0813 07:26:27.753247 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:27.753514 kubelet[2582]: I0813 07:26:27.753270 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ea919629311fabb19135713f7ffc308-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ea919629311fabb19135713f7ffc308\") " pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:27.753514 kubelet[2582]: I0813 07:26:27.753289 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:27.753514 kubelet[2582]: I0813 07:26:27.753306 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 07:26:27.753514 kubelet[2582]: I0813 07:26:27.753328 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 07:26:27.882467 kubelet[2582]: E0813 07:26:27.882425 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:27.882569 kubelet[2582]: E0813 07:26:27.882542 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:27.882711 kubelet[2582]: E0813 07:26:27.882664 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:27.969138 sudo[2619]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 07:26:27.969428 sudo[2619]: 
pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 07:26:28.414339 sudo[2619]: pam_unix(sudo:session): session closed for user root Aug 13 07:26:28.440271 kubelet[2582]: I0813 07:26:28.440183 2582 apiserver.go:52] "Watching apiserver" Aug 13 07:26:28.452349 kubelet[2582]: I0813 07:26:28.452298 2582 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:26:28.491394 kubelet[2582]: E0813 07:26:28.490962 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:28.492006 kubelet[2582]: E0813 07:26:28.491973 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:28.497631 kubelet[2582]: E0813 07:26:28.497582 2582 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 07:26:28.497796 kubelet[2582]: E0813 07:26:28.497781 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:28.531382 kubelet[2582]: I0813 07:26:28.531308 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.531288187 podStartE2EDuration="1.531288187s" podCreationTimestamp="2025-08-13 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:26:28.531185221 +0000 UTC m=+1.144204582" watchObservedRunningTime="2025-08-13 07:26:28.531288187 +0000 UTC m=+1.144307588" Aug 13 07:26:28.531519 kubelet[2582]: I0813 07:26:28.531457 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5314512200000001 podStartE2EDuration="1.53145122s" podCreationTimestamp="2025-08-13 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:26:28.522137139 +0000 UTC m=+1.135156540" watchObservedRunningTime="2025-08-13 07:26:28.53145122 +0000 UTC m=+1.144470581" Aug 13 07:26:28.544966 kubelet[2582]: I0813 07:26:28.544837 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.544814989 podStartE2EDuration="1.544814989s" podCreationTimestamp="2025-08-13 07:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:26:28.542483468 +0000 UTC m=+1.155502869" watchObservedRunningTime="2025-08-13 07:26:28.544814989 +0000 UTC m=+1.157834350" Aug 13 07:26:29.493202 kubelet[2582]: E0813 07:26:29.492756 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:29.494011 kubelet[2582]: E0813 07:26:29.493916 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:30.494527 kubelet[2582]: 
E0813 07:26:30.494477 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:30.869309 sudo[1660]: pam_unix(sudo:session): session closed for user root Aug 13 07:26:30.870422 sshd[1659]: Connection closed by 10.0.0.1 port 56104 Aug 13 07:26:30.870949 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:30.874687 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:56104.service: Deactivated successfully. Aug 13 07:26:30.876730 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:26:30.876923 systemd[1]: session-7.scope: Consumed 8.521s CPU time, 261.9M memory peak. Aug 13 07:26:30.877848 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:26:30.878977 systemd-logind[1457]: Removed session 7. Aug 13 07:26:31.452295 kubelet[2582]: E0813 07:26:31.451931 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:32.652303 kubelet[2582]: I0813 07:26:32.652265 2582 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:26:32.652645 containerd[1475]: time="2025-08-13T07:26:32.652599856Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 07:26:32.652855 kubelet[2582]: I0813 07:26:32.652809 2582 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:26:33.635542 systemd[1]: Created slice kubepods-besteffort-pod288af81f_69c7_475a_a9e0_ca9781db7428.slice - libcontainer container kubepods-besteffort-pod288af81f_69c7_475a_a9e0_ca9781db7428.slice. Aug 13 07:26:33.646140 systemd[1]: Created slice kubepods-burstable-podd48a423a_8ce2_4e3b_b08d_50a04ecd1944.slice - libcontainer container kubepods-burstable-podd48a423a_8ce2_4e3b_b08d_50a04ecd1944.slice. 
Aug 13 07:26:33.688848 kubelet[2582]: I0813 07:26:33.688772 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-kernel\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.688848 kubelet[2582]: I0813 07:26:33.688837 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsg7z\" (UniqueName: \"kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-kube-api-access-bsg7z\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.688848 kubelet[2582]: I0813 07:26:33.688857 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-etc-cni-netd\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689225 kubelet[2582]: I0813 07:26:33.688873 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/288af81f-69c7-475a-a9e0-ca9781db7428-lib-modules\") pod \"kube-proxy-6d4gv\" (UID: \"288af81f-69c7-475a-a9e0-ca9781db7428\") " pod="kube-system/kube-proxy-6d4gv" Aug 13 07:26:33.689225 kubelet[2582]: I0813 07:26:33.688891 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrjg\" (UniqueName: \"kubernetes.io/projected/288af81f-69c7-475a-a9e0-ca9781db7428-kube-api-access-tnrjg\") pod \"kube-proxy-6d4gv\" (UID: \"288af81f-69c7-475a-a9e0-ca9781db7428\") " pod="kube-system/kube-proxy-6d4gv" Aug 13 07:26:33.689225 kubelet[2582]: I0813 07:26:33.688907 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cni-path\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689225 kubelet[2582]: I0813 07:26:33.688922 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-lib-modules\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689225 kubelet[2582]: I0813 07:26:33.688937 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-cgroup\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689225 kubelet[2582]: I0813 07:26:33.688953 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-clustermesh-secrets\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689347 kubelet[2582]: I0813 07:26:33.688966 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/288af81f-69c7-475a-a9e0-ca9781db7428-kube-proxy\") pod \"kube-proxy-6d4gv\" (UID: \"288af81f-69c7-475a-a9e0-ca9781db7428\") " pod="kube-system/kube-proxy-6d4gv" Aug 13 07:26:33.689347 kubelet[2582]: I0813 07:26:33.688979 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/288af81f-69c7-475a-a9e0-ca9781db7428-xtables-lock\") pod \"kube-proxy-6d4gv\" (UID: \"288af81f-69c7-475a-a9e0-ca9781db7428\") " pod="kube-system/kube-proxy-6d4gv" Aug 13 07:26:33.689347 kubelet[2582]: I0813 07:26:33.688993 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-xtables-lock\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689347 kubelet[2582]: I0813 07:26:33.689008 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-config-path\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689347 kubelet[2582]: I0813 07:26:33.689025 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-net\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689347 kubelet[2582]: I0813 07:26:33.689038 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hubble-tls\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689462 kubelet[2582]: I0813 07:26:33.689053 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hostproc\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689462 kubelet[2582]: I0813 07:26:33.689068 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-bpf-maps\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.689462 kubelet[2582]: I0813 07:26:33.689084 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-run\") pod \"cilium-hdjqq\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " pod="kube-system/cilium-hdjqq" Aug 13 07:26:33.783162 systemd[1]: Created slice kubepods-besteffort-pod990f54a5_ee29_491e_9c2c_59758e4137ff.slice - libcontainer container kubepods-besteffort-pod990f54a5_ee29_491e_9c2c_59758e4137ff.slice. 
Aug 13 07:26:33.789458 kubelet[2582]: I0813 07:26:33.789427 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/990f54a5-ee29-491e-9c2c-59758e4137ff-cilium-config-path\") pod \"cilium-operator-5d85765b45-n8hhd\" (UID: \"990f54a5-ee29-491e-9c2c-59758e4137ff\") " pod="kube-system/cilium-operator-5d85765b45-n8hhd" Aug 13 07:26:33.789670 kubelet[2582]: I0813 07:26:33.789577 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfbzd\" (UniqueName: \"kubernetes.io/projected/990f54a5-ee29-491e-9c2c-59758e4137ff-kube-api-access-wfbzd\") pod \"cilium-operator-5d85765b45-n8hhd\" (UID: \"990f54a5-ee29-491e-9c2c-59758e4137ff\") " pod="kube-system/cilium-operator-5d85765b45-n8hhd" Aug 13 07:26:33.944511 kubelet[2582]: E0813 07:26:33.944400 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:33.945670 containerd[1475]: time="2025-08-13T07:26:33.945611096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6d4gv,Uid:288af81f-69c7-475a-a9e0-ca9781db7428,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:33.948104 kubelet[2582]: E0813 07:26:33.948069 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:33.948715 containerd[1475]: time="2025-08-13T07:26:33.948522318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdjqq,Uid:d48a423a-8ce2-4e3b-b08d-50a04ecd1944,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:33.968137 containerd[1475]: time="2025-08-13T07:26:33.967878565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:33.968137 containerd[1475]: time="2025-08-13T07:26:33.967943867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:33.968137 containerd[1475]: time="2025-08-13T07:26:33.967958072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:33.968137 containerd[1475]: time="2025-08-13T07:26:33.968044061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:33.981926 containerd[1475]: time="2025-08-13T07:26:33.981757485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:33.981926 containerd[1475]: time="2025-08-13T07:26:33.981889450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:33.982124 containerd[1475]: time="2025-08-13T07:26:33.981905615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:33.982175 containerd[1475]: time="2025-08-13T07:26:33.982012131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:33.982924 systemd[1]: Started cri-containerd-93b5d546749cc36e51d0b363230583207780376facb50281895b85634e0515b0.scope - libcontainer container 93b5d546749cc36e51d0b363230583207780376facb50281895b85634e0515b0. Aug 13 07:26:34.003925 systemd[1]: Started cri-containerd-73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251.scope - libcontainer container 73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251. Aug 13 07:26:34.016203 containerd[1475]: time="2025-08-13T07:26:34.016147164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6d4gv,Uid:288af81f-69c7-475a-a9e0-ca9781db7428,Namespace:kube-system,Attempt:0,} returns sandbox id \"93b5d546749cc36e51d0b363230583207780376facb50281895b85634e0515b0\"" Aug 13 07:26:34.017617 kubelet[2582]: E0813 07:26:34.017582 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:34.021593 containerd[1475]: time="2025-08-13T07:26:34.021502234Z" level=info msg="CreateContainer within sandbox \"93b5d546749cc36e51d0b363230583207780376facb50281895b85634e0515b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:26:34.028151 containerd[1475]: time="2025-08-13T07:26:34.028113505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hdjqq,Uid:d48a423a-8ce2-4e3b-b08d-50a04ecd1944,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\"" Aug 13 07:26:34.029182 kubelet[2582]: E0813 07:26:34.029153 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:34.030428 containerd[1475]: time="2025-08-13T07:26:34.030385350Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 07:26:34.036685 containerd[1475]: time="2025-08-13T07:26:34.036577367Z" level=info msg="CreateContainer within sandbox \"93b5d546749cc36e51d0b363230583207780376facb50281895b85634e0515b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a79015cdf45401167deb52043f5a7257829c79459dc6213d7dbf843ad225fbd\"" Aug 13 07:26:34.036685 containerd[1475]: time="2025-08-13T07:26:34.037876661Z" level=info msg="StartContainer for \"0a79015cdf45401167deb52043f5a7257829c79459dc6213d7dbf843ad225fbd\"" Aug 13 07:26:34.067909 systemd[1]: Started cri-containerd-0a79015cdf45401167deb52043f5a7257829c79459dc6213d7dbf843ad225fbd.scope - libcontainer container 0a79015cdf45401167deb52043f5a7257829c79459dc6213d7dbf843ad225fbd. 
Aug 13 07:26:34.086983 kubelet[2582]: E0813 07:26:34.086924 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:34.088202 containerd[1475]: time="2025-08-13T07:26:34.087993621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n8hhd,Uid:990f54a5-ee29-491e-9c2c-59758e4137ff,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:34.095705 containerd[1475]: time="2025-08-13T07:26:34.095649465Z" level=info msg="StartContainer for \"0a79015cdf45401167deb52043f5a7257829c79459dc6213d7dbf843ad225fbd\" returns successfully" Aug 13 07:26:34.113605 containerd[1475]: time="2025-08-13T07:26:34.113510487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:34.113742 containerd[1475]: time="2025-08-13T07:26:34.113574988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:34.113742 containerd[1475]: time="2025-08-13T07:26:34.113586552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:34.113742 containerd[1475]: time="2025-08-13T07:26:34.113678821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:34.133921 systemd[1]: Started cri-containerd-a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b.scope - libcontainer container a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b. Aug 13 07:26:34.161916 containerd[1475]: time="2025-08-13T07:26:34.161855041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n8hhd,Uid:990f54a5-ee29-491e-9c2c-59758e4137ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b\"" Aug 13 07:26:34.163939 kubelet[2582]: E0813 07:26:34.163847 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:34.502941 kubelet[2582]: E0813 07:26:34.502803 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:34.515398 kubelet[2582]: I0813 07:26:34.515232 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6d4gv" podStartSLOduration=1.51521193 podStartE2EDuration="1.51521193s" podCreationTimestamp="2025-08-13 07:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:26:34.514278792 +0000 UTC m=+7.127298193" watchObservedRunningTime="2025-08-13 07:26:34.51521193 +0000 UTC m=+7.128231331" Aug 13 07:26:38.002721 kubelet[2582]: E0813 07:26:38.002389 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:38.513182 kubelet[2582]: E0813 07:26:38.513155 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Aug 13 07:26:38.703168 kubelet[2582]: E0813 07:26:38.702750 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:40.503593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166615410.mount: Deactivated successfully. Aug 13 07:26:41.140579 update_engine[1459]: I20250813 07:26:41.140504 1459 update_attempter.cc:509] Updating boot flags... Aug 13 07:26:41.162767 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2981) Aug 13 07:26:41.202998 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2980) Aug 13 07:26:41.238724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2980) Aug 13 07:26:41.473157 kubelet[2582]: E0813 07:26:41.473043 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:44.244064 containerd[1475]: time="2025-08-13T07:26:44.244016889Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:44.245072 containerd[1475]: time="2025-08-13T07:26:44.244900978Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 13 07:26:44.246921 containerd[1475]: time="2025-08-13T07:26:44.246023313Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:44.247674 containerd[1475]: time="2025-08-13T07:26:44.247313679Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.216884837s" Aug 13 07:26:44.247674 containerd[1475]: time="2025-08-13T07:26:44.247351367Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 07:26:44.249992 containerd[1475]: time="2025-08-13T07:26:44.249955704Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 07:26:44.251103 containerd[1475]: time="2025-08-13T07:26:44.250874600Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:26:44.269676 containerd[1475]: time="2025-08-13T07:26:44.269627663Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\"" Aug 13 07:26:44.270387 containerd[1475]: 
time="2025-08-13T07:26:44.270280948Z" level=info msg="StartContainer for \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\"" Aug 13 07:26:44.297882 systemd[1]: Started cri-containerd-12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0.scope - libcontainer container 12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0. Aug 13 07:26:44.318308 containerd[1475]: time="2025-08-13T07:26:44.318270517Z" level=info msg="StartContainer for \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\" returns successfully" Aug 13 07:26:44.376374 systemd[1]: cri-containerd-12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0.scope: Deactivated successfully. Aug 13 07:26:44.494681 containerd[1475]: time="2025-08-13T07:26:44.494549437Z" level=info msg="shim disconnected" id=12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0 namespace=k8s.io Aug 13 07:26:44.494681 containerd[1475]: time="2025-08-13T07:26:44.494610689Z" level=warning msg="cleaning up after shim disconnected" id=12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0 namespace=k8s.io Aug 13 07:26:44.494681 containerd[1475]: time="2025-08-13T07:26:44.494620931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:26:44.524425 kubelet[2582]: E0813 07:26:44.524394 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:44.528771 containerd[1475]: time="2025-08-13T07:26:44.528730168Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:26:44.543386 containerd[1475]: time="2025-08-13T07:26:44.543331477Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\"" Aug 13 07:26:44.544745 containerd[1475]: time="2025-08-13T07:26:44.544719543Z" level=info msg="StartContainer for \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\"" Aug 13 07:26:44.573834 systemd[1]: Started cri-containerd-a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f.scope - libcontainer container a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f. Aug 13 07:26:44.593937 containerd[1475]: time="2025-08-13T07:26:44.593894418Z" level=info msg="StartContainer for \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\" returns successfully" Aug 13 07:26:44.614581 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:26:44.614815 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:26:44.615236 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:26:44.622979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:26:44.623144 systemd[1]: cri-containerd-a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f.scope: Deactivated successfully. Aug 13 07:26:44.637386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 07:26:44.648310 containerd[1475]: time="2025-08-13T07:26:44.648240442Z" level=info msg="shim disconnected" id=a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f namespace=k8s.io Aug 13 07:26:44.648310 containerd[1475]: time="2025-08-13T07:26:44.648302573Z" level=warning msg="cleaning up after shim disconnected" id=a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f namespace=k8s.io Aug 13 07:26:44.648310 containerd[1475]: time="2025-08-13T07:26:44.648320497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:26:45.263308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0-rootfs.mount: Deactivated successfully. Aug 13 07:26:45.316906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932748894.mount: Deactivated successfully. Aug 13 07:26:45.528222 kubelet[2582]: E0813 07:26:45.528087 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:45.532530 containerd[1475]: time="2025-08-13T07:26:45.532369556Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:26:45.555540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779389796.mount: Deactivated successfully. Aug 13 07:26:45.560799 containerd[1475]: time="2025-08-13T07:26:45.560663512Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\"" Aug 13 07:26:45.561570 containerd[1475]: time="2025-08-13T07:26:45.561544393Z" level=info msg="StartContainer for \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\"" Aug 13 07:26:45.587955 systemd[1]: Started cri-containerd-54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310.scope - libcontainer container 54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310. Aug 13 07:26:45.612723 containerd[1475]: time="2025-08-13T07:26:45.612642944Z" level=info msg="StartContainer for \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\" returns successfully" Aug 13 07:26:45.623123 systemd[1]: cri-containerd-54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310.scope: Deactivated successfully. 
Aug 13 07:26:45.689268 containerd[1475]: time="2025-08-13T07:26:45.689210377Z" level=info msg="shim disconnected" id=54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310 namespace=k8s.io Aug 13 07:26:45.689740 containerd[1475]: time="2025-08-13T07:26:45.689560681Z" level=warning msg="cleaning up after shim disconnected" id=54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310 namespace=k8s.io Aug 13 07:26:45.689740 containerd[1475]: time="2025-08-13T07:26:45.689578044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:26:45.754940 containerd[1475]: time="2025-08-13T07:26:45.754897227Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:45.755381 containerd[1475]: time="2025-08-13T07:26:45.755350269Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 13 07:26:45.756289 containerd[1475]: time="2025-08-13T07:26:45.756241912Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:26:45.758084 containerd[1475]: time="2025-08-13T07:26:45.757785273Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.507794363s" Aug 13 07:26:45.758084 containerd[1475]: time="2025-08-13T07:26:45.757818799Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 07:26:45.759983 containerd[1475]: time="2025-08-13T07:26:45.759949827Z" level=info msg="CreateContainer within sandbox \"a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 07:26:45.768701 containerd[1475]: time="2025-08-13T07:26:45.768662655Z" level=info msg="CreateContainer within sandbox \"a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\"" Aug 13 07:26:45.769087 containerd[1475]: time="2025-08-13T07:26:45.769052086Z" level=info msg="StartContainer for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\"" Aug 13 07:26:45.800839 systemd[1]: Started cri-containerd-d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e.scope - libcontainer container d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e. 
Aug 13 07:26:45.821629 containerd[1475]: time="2025-08-13T07:26:45.821587980Z" level=info msg="StartContainer for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" returns successfully" Aug 13 07:26:46.531343 kubelet[2582]: E0813 07:26:46.531030 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:46.538720 containerd[1475]: time="2025-08-13T07:26:46.535906793Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:26:46.539071 kubelet[2582]: E0813 07:26:46.537177 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:46.555527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209134208.mount: Deactivated successfully. Aug 13 07:26:46.565873 containerd[1475]: time="2025-08-13T07:26:46.565755705Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\"" Aug 13 07:26:46.587379 containerd[1475]: time="2025-08-13T07:26:46.572572571Z" level=info msg="StartContainer for \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\"" Aug 13 07:26:46.590546 kubelet[2582]: I0813 07:26:46.590493 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n8hhd" podStartSLOduration=1.997235182 podStartE2EDuration="13.590474644s" podCreationTimestamp="2025-08-13 07:26:33 +0000 UTC" firstStartedPulling="2025-08-13 07:26:34.165351438 +0000 UTC m=+6.778370839" lastFinishedPulling="2025-08-13 07:26:45.7585909 +0000 UTC m=+18.371610301" observedRunningTime="2025-08-13 07:26:46.590294733 +0000 UTC m=+19.203314134" watchObservedRunningTime="2025-08-13 07:26:46.590474644 +0000 UTC m=+19.203494045" Aug 13 07:26:46.627926 systemd[1]: Started cri-containerd-8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559.scope - libcontainer container 8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559. Aug 13 07:26:46.647164 systemd[1]: cri-containerd-8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559.scope: Deactivated successfully. 
Aug 13 07:26:46.648142 containerd[1475]: time="2025-08-13T07:26:46.648047979Z" level=info msg="StartContainer for \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\" returns successfully" Aug 13 07:26:46.683882 containerd[1475]: time="2025-08-13T07:26:46.683784835Z" level=info msg="shim disconnected" id=8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559 namespace=k8s.io Aug 13 07:26:46.684271 containerd[1475]: time="2025-08-13T07:26:46.684079526Z" level=warning msg="cleaning up after shim disconnected" id=8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559 namespace=k8s.io Aug 13 07:26:46.684271 containerd[1475]: time="2025-08-13T07:26:46.684097649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:26:47.541072 kubelet[2582]: E0813 07:26:47.541031 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:47.542705 kubelet[2582]: E0813 07:26:47.541415 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:47.546079 containerd[1475]: time="2025-08-13T07:26:47.546036087Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:26:47.567260 containerd[1475]: time="2025-08-13T07:26:47.567211486Z" level=info msg="CreateContainer within sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\"" Aug 13 07:26:47.567831 containerd[1475]: time="2025-08-13T07:26:47.567641437Z" level=info msg="StartContainer for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\"" Aug 13 07:26:47.594903 systemd[1]: Started cri-containerd-4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88.scope - libcontainer container 4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88. 
Aug 13 07:26:47.620413 containerd[1475]: time="2025-08-13T07:26:47.620293147Z" level=info msg="StartContainer for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" returns successfully" Aug 13 07:26:47.748040 kubelet[2582]: I0813 07:26:47.747942 2582 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:26:47.798537 kubelet[2582]: I0813 07:26:47.796990 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/092bc188-65b1-4634-826b-b1c9af0bcbdc-config-volume\") pod \"coredns-7c65d6cfc9-st4r4\" (UID: \"092bc188-65b1-4634-826b-b1c9af0bcbdc\") " pod="kube-system/coredns-7c65d6cfc9-st4r4" Aug 13 07:26:47.798537 kubelet[2582]: I0813 07:26:47.797031 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxcqk\" (UniqueName: \"kubernetes.io/projected/092bc188-65b1-4634-826b-b1c9af0bcbdc-kube-api-access-qxcqk\") pod \"coredns-7c65d6cfc9-st4r4\" (UID: \"092bc188-65b1-4634-826b-b1c9af0bcbdc\") " pod="kube-system/coredns-7c65d6cfc9-st4r4" Aug 13 07:26:47.798537 kubelet[2582]: I0813 07:26:47.797246 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2drrs\" (UniqueName: \"kubernetes.io/projected/cd423bf3-e134-4684-b6a7-468b783b7e25-kube-api-access-2drrs\") pod \"coredns-7c65d6cfc9-cv2hr\" (UID: \"cd423bf3-e134-4684-b6a7-468b783b7e25\") " pod="kube-system/coredns-7c65d6cfc9-cv2hr" Aug 13 07:26:47.798537 kubelet[2582]: I0813 07:26:47.797435 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd423bf3-e134-4684-b6a7-468b783b7e25-config-volume\") pod \"coredns-7c65d6cfc9-cv2hr\" (UID: \"cd423bf3-e134-4684-b6a7-468b783b7e25\") " pod="kube-system/coredns-7c65d6cfc9-cv2hr" Aug 13 07:26:47.800796 systemd[1]: Created slice kubepods-burstable-pod092bc188_65b1_4634_826b_b1c9af0bcbdc.slice - libcontainer container kubepods-burstable-pod092bc188_65b1_4634_826b_b1c9af0bcbdc.slice. Aug 13 07:26:47.808767 systemd[1]: Created slice kubepods-burstable-podcd423bf3_e134_4684_b6a7_468b783b7e25.slice - libcontainer container kubepods-burstable-podcd423bf3_e134_4684_b6a7_468b783b7e25.slice. 
Aug 13 07:26:48.108012 kubelet[2582]: E0813 07:26:48.107976 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:48.108637 containerd[1475]: time="2025-08-13T07:26:48.108594064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-st4r4,Uid:092bc188-65b1-4634-826b-b1c9af0bcbdc,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:48.111179 kubelet[2582]: E0813 07:26:48.111148 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:48.111795 containerd[1475]: time="2025-08-13T07:26:48.111665032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cv2hr,Uid:cd423bf3-e134-4684-b6a7-468b783b7e25,Namespace:kube-system,Attempt:0,}" Aug 13 07:26:48.545733 kubelet[2582]: E0813 07:26:48.545613 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:48.560588 kubelet[2582]: I0813 07:26:48.560534 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hdjqq" podStartSLOduration=5.341092648 podStartE2EDuration="15.560519311s" podCreationTimestamp="2025-08-13 07:26:33 +0000 UTC" firstStartedPulling="2025-08-13 07:26:34.029950971 +0000 UTC m=+6.642970372" lastFinishedPulling="2025-08-13 07:26:44.249377634 +0000 UTC m=+16.862397035" observedRunningTime="2025-08-13 07:26:48.560236306 +0000 UTC m=+21.173255707" watchObservedRunningTime="2025-08-13 07:26:48.560519311 +0000 UTC m=+21.173538712" Aug 13 07:26:49.547544 kubelet[2582]: E0813 07:26:49.547514 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:49.814057 systemd-networkd[1398]: cilium_host: Link UP Aug 13 07:26:49.814978 systemd-networkd[1398]: cilium_net: Link UP Aug 13 07:26:49.815740 systemd-networkd[1398]: cilium_net: Gained carrier Aug 13 07:26:49.815888 systemd-networkd[1398]: cilium_host: Gained carrier Aug 13 07:26:49.854777 systemd-networkd[1398]: cilium_net: Gained IPv6LL Aug 13 07:26:49.894054 systemd-networkd[1398]: cilium_vxlan: Link UP Aug 13 07:26:49.894327 systemd-networkd[1398]: cilium_vxlan: Gained carrier Aug 13 07:26:50.172805 kernel: NET: Registered PF_ALG protocol family Aug 13 07:26:50.307889 systemd-networkd[1398]: cilium_host: Gained IPv6LL Aug 13 07:26:50.549611 kubelet[2582]: E0813 07:26:50.549354 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:50.726204 systemd-networkd[1398]: lxc_health: Link UP Aug 13 07:26:50.727618 systemd-networkd[1398]: lxc_health: Gained carrier Aug 13 07:26:51.229819 kernel: eth0: renamed from tmp4b5d3 Aug 13 07:26:51.246793 kernel: eth0: renamed from tmp07ee3 Aug 13 07:26:51.261514 systemd-networkd[1398]: lxc35cc1bd6f7a6: Link UP Aug 13 07:26:51.261782 systemd-networkd[1398]: lxc87d17b613608: Link UP Aug 13 07:26:51.263739 systemd-networkd[1398]: lxc87d17b613608: Gained carrier Aug 13 07:26:51.263965 systemd-networkd[1398]: lxc35cc1bd6f7a6: Gained carrier Aug 13 07:26:51.851800 systemd-networkd[1398]: cilium_vxlan: Gained IPv6LL Aug 13 
07:26:51.914802 systemd-networkd[1398]: lxc_health: Gained IPv6LL Aug 13 07:26:51.972083 kubelet[2582]: E0813 07:26:51.972004 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:52.682863 systemd-networkd[1398]: lxc87d17b613608: Gained IPv6LL Aug 13 07:26:53.194814 systemd-networkd[1398]: lxc35cc1bd6f7a6: Gained IPv6LL Aug 13 07:26:54.763052 containerd[1475]: time="2025-08-13T07:26:54.762953698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:54.763052 containerd[1475]: time="2025-08-13T07:26:54.763017946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:54.763668 containerd[1475]: time="2025-08-13T07:26:54.763029907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:54.763817 containerd[1475]: time="2025-08-13T07:26:54.763720233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:26:54.763817 containerd[1475]: time="2025-08-13T07:26:54.763704191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:54.763872 containerd[1475]: time="2025-08-13T07:26:54.763815565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:26:54.763872 containerd[1475]: time="2025-08-13T07:26:54.763832527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:54.764129 containerd[1475]: time="2025-08-13T07:26:54.764090919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:26:54.791933 systemd[1]: Started cri-containerd-07ee361dbcc486a8a2269fbeda0113499d82b30ccdadd1535ab2ec7dda4660ef.scope - libcontainer container 07ee361dbcc486a8a2269fbeda0113499d82b30ccdadd1535ab2ec7dda4660ef. Aug 13 07:26:54.793575 systemd[1]: Started cri-containerd-4b5d358829b431b2fb4f3b3ef0ac6371f96a8f35e3262977648382c31e52e689.scope - libcontainer container 4b5d358829b431b2fb4f3b3ef0ac6371f96a8f35e3262977648382c31e52e689. 
Aug 13 07:26:54.802626 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:26:54.804483 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 07:26:54.821560 containerd[1475]: time="2025-08-13T07:26:54.821028449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-st4r4,Uid:092bc188-65b1-4634-826b-b1c9af0bcbdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"07ee361dbcc486a8a2269fbeda0113499d82b30ccdadd1535ab2ec7dda4660ef\"" Aug 13 07:26:54.823139 kubelet[2582]: E0813 07:26:54.823050 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:54.824944 containerd[1475]: time="2025-08-13T07:26:54.824845121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cv2hr,Uid:cd423bf3-e134-4684-b6a7-468b783b7e25,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b5d358829b431b2fb4f3b3ef0ac6371f96a8f35e3262977648382c31e52e689\"" Aug 13 07:26:54.825989 kubelet[2582]: E0813 07:26:54.825889 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:54.826991 containerd[1475]: time="2025-08-13T07:26:54.826875093Z" level=info msg="CreateContainer within sandbox \"07ee361dbcc486a8a2269fbeda0113499d82b30ccdadd1535ab2ec7dda4660ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:26:54.828194 containerd[1475]: time="2025-08-13T07:26:54.828164372Z" level=info msg="CreateContainer within sandbox \"4b5d358829b431b2fb4f3b3ef0ac6371f96a8f35e3262977648382c31e52e689\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:26:54.843131 containerd[1475]: time="2025-08-13T07:26:54.843097141Z" level=info msg="CreateContainer within sandbox \"07ee361dbcc486a8a2269fbeda0113499d82b30ccdadd1535ab2ec7dda4660ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99a37213700a255cd30ce4e6405b45bc0e41116e2080dba1d16594fadacd47ac\"" Aug 13 07:26:54.843637 containerd[1475]: time="2025-08-13T07:26:54.843577841Z" level=info msg="StartContainer for \"99a37213700a255cd30ce4e6405b45bc0e41116e2080dba1d16594fadacd47ac\"" Aug 13 07:26:54.851587 containerd[1475]: time="2025-08-13T07:26:54.851478299Z" level=info msg="CreateContainer within sandbox \"4b5d358829b431b2fb4f3b3ef0ac6371f96a8f35e3262977648382c31e52e689\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b28bb68226e3ecc161f93691ddff219a05da9a943acc36388aae1966dc72b2d1\"" Aug 13 07:26:54.852060 containerd[1475]: time="2025-08-13T07:26:54.852031728Z" level=info msg="StartContainer for \"b28bb68226e3ecc161f93691ddff219a05da9a943acc36388aae1966dc72b2d1\"" Aug 13 07:26:54.875961 systemd[1]: Started cri-containerd-99a37213700a255cd30ce4e6405b45bc0e41116e2080dba1d16594fadacd47ac.scope - libcontainer container 99a37213700a255cd30ce4e6405b45bc0e41116e2080dba1d16594fadacd47ac. Aug 13 07:26:54.879245 systemd[1]: Started cri-containerd-b28bb68226e3ecc161f93691ddff219a05da9a943acc36388aae1966dc72b2d1.scope - libcontainer container b28bb68226e3ecc161f93691ddff219a05da9a943acc36388aae1966dc72b2d1. 
Aug 13 07:26:54.907752 containerd[1475]: time="2025-08-13T07:26:54.906272324Z" level=info msg="StartContainer for \"99a37213700a255cd30ce4e6405b45bc0e41116e2080dba1d16594fadacd47ac\" returns successfully" Aug 13 07:26:54.909493 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:45528.service - OpenSSH per-connection server daemon (10.0.0.1:45528). Aug 13 07:26:54.921442 containerd[1475]: time="2025-08-13T07:26:54.921407838Z" level=info msg="StartContainer for \"b28bb68226e3ecc161f93691ddff219a05da9a943acc36388aae1966dc72b2d1\" returns successfully" Aug 13 07:26:54.968236 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 45528 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:26:54.969393 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:26:54.978755 systemd-logind[1457]: New session 8 of user core. Aug 13 07:26:54.983884 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:26:55.135518 sshd[3983]: Connection closed by 10.0.0.1 port 45528 Aug 13 07:26:55.136758 sshd-session[3966]: pam_unix(sshd:session): session closed for user core Aug 13 07:26:55.141203 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:26:55.141505 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:45528.service: Deactivated successfully. Aug 13 07:26:55.144262 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:26:55.146477 systemd-logind[1457]: Removed session 8. Aug 13 07:26:55.559134 kubelet[2582]: E0813 07:26:55.558829 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:55.561539 kubelet[2582]: E0813 07:26:55.561404 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:55.580338 kubelet[2582]: I0813 07:26:55.580277 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-st4r4" podStartSLOduration=22.580261816 podStartE2EDuration="22.580261816s" podCreationTimestamp="2025-08-13 07:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:26:55.568909703 +0000 UTC m=+28.181929104" watchObservedRunningTime="2025-08-13 07:26:55.580261816 +0000 UTC m=+28.193281217" Aug 13 07:26:55.768656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949731469.mount: Deactivated successfully. 
Aug 13 07:26:56.562400 kubelet[2582]: E0813 07:26:56.562367 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:57.564089 kubelet[2582]: E0813 07:26:57.564029 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:58.113613 kubelet[2582]: E0813 07:26:58.113583 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:26:58.125285 kubelet[2582]: I0813 07:26:58.125192 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cv2hr" podStartSLOduration=25.125146476 podStartE2EDuration="25.125146476s" podCreationTimestamp="2025-08-13 07:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:26:55.594256804 +0000 UTC m=+28.207276205" watchObservedRunningTime="2025-08-13 07:26:58.125146476 +0000 UTC m=+30.738165837" Aug 13 07:26:58.565387 kubelet[2582]: E0813 07:26:58.565283 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:00.151401 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:45542.service - OpenSSH per-connection server daemon (10.0.0.1:45542). Aug 13 07:27:00.203700 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:00.204998 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:00.209452 systemd-logind[1457]: New session 9 of user core. Aug 13 07:27:00.218869 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:27:00.330022 sshd[4020]: Connection closed by 10.0.0.1 port 45542 Aug 13 07:27:00.330363 sshd-session[4018]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:00.333374 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:45542.service: Deactivated successfully. Aug 13 07:27:00.335063 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:27:00.335678 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:27:00.336677 systemd-logind[1457]: Removed session 9. Aug 13 07:27:02.309752 kubelet[2582]: I0813 07:27:02.309502 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:27:02.310101 kubelet[2582]: E0813 07:27:02.309985 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:02.571881 kubelet[2582]: E0813 07:27:02.571777 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:05.341717 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:34608.service - OpenSSH per-connection server daemon (10.0.0.1:34608). 
Aug 13 07:27:05.385746 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 34608 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:05.386387 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:05.389841 systemd-logind[1457]: New session 10 of user core. Aug 13 07:27:05.398856 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:27:05.510437 sshd[4039]: Connection closed by 10.0.0.1 port 34608 Aug 13 07:27:05.511361 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:05.525259 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:34608.service: Deactivated successfully. Aug 13 07:27:05.526678 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:27:05.527381 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:27:05.533983 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:34620.service - OpenSSH per-connection server daemon (10.0.0.1:34620). Aug 13 07:27:05.535541 systemd-logind[1457]: Removed session 10. Aug 13 07:27:05.575466 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 34620 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:05.576269 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:05.580477 systemd-logind[1457]: New session 11 of user core. Aug 13 07:27:05.595843 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:27:05.746131 sshd[4057]: Connection closed by 10.0.0.1 port 34620 Aug 13 07:27:05.742199 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:05.756163 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:34620.service: Deactivated successfully. Aug 13 07:27:05.760669 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:27:05.765794 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:27:05.776029 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:34622.service - OpenSSH per-connection server daemon (10.0.0.1:34622). Aug 13 07:27:05.779286 systemd-logind[1457]: Removed session 11. Aug 13 07:27:05.824191 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 34622 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:05.824968 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:05.828698 systemd-logind[1457]: New session 12 of user core. Aug 13 07:27:05.834835 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 07:27:05.940888 sshd[4072]: Connection closed by 10.0.0.1 port 34622 Aug 13 07:27:05.941130 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:05.944535 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:34622.service: Deactivated successfully. Aug 13 07:27:05.946113 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:27:05.946706 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:27:05.947511 systemd-logind[1457]: Removed session 12. Aug 13 07:27:10.955973 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:34626.service - OpenSSH per-connection server daemon (10.0.0.1:34626). 
Aug 13 07:27:10.997683 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 34626 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:10.998822 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:11.002253 systemd-logind[1457]: New session 13 of user core. Aug 13 07:27:11.017824 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:27:11.128110 sshd[4089]: Connection closed by 10.0.0.1 port 34626 Aug 13 07:27:11.128450 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:11.131820 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:34626.service: Deactivated successfully. Aug 13 07:27:11.133578 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:27:11.134195 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:27:11.134941 systemd-logind[1457]: Removed session 13. Aug 13 07:27:16.140126 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:38172.service - OpenSSH per-connection server daemon (10.0.0.1:38172). Aug 13 07:27:16.182389 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 38172 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:16.183495 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:16.187169 systemd-logind[1457]: New session 14 of user core. Aug 13 07:27:16.202853 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:27:16.313759 sshd[4104]: Connection closed by 10.0.0.1 port 38172 Aug 13 07:27:16.314123 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:16.323895 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:38172.service: Deactivated successfully. Aug 13 07:27:16.325454 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:27:16.326748 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:27:16.327925 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:38188.service - OpenSSH per-connection server daemon (10.0.0.1:38188). Aug 13 07:27:16.329008 systemd-logind[1457]: Removed session 14. Aug 13 07:27:16.370642 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 38188 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:16.371739 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:16.377488 systemd-logind[1457]: New session 15 of user core. Aug 13 07:27:16.388896 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:27:16.581999 sshd[4120]: Connection closed by 10.0.0.1 port 38188 Aug 13 07:27:16.583056 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:16.597883 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:38188.service: Deactivated successfully. Aug 13 07:27:16.599379 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:27:16.600534 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:27:16.607993 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198). Aug 13 07:27:16.609134 systemd-logind[1457]: Removed session 15. 
Aug 13 07:27:16.651050 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:16.652199 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:16.656317 systemd-logind[1457]: New session 16 of user core. Aug 13 07:27:16.667894 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:27:17.852068 sshd[4133]: Connection closed by 10.0.0.1 port 38198 Aug 13 07:27:17.851552 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:17.865202 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:38198.service: Deactivated successfully. Aug 13 07:27:17.867056 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:27:17.868643 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:27:17.879965 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:38210.service - OpenSSH per-connection server daemon (10.0.0.1:38210). Aug 13 07:27:17.880972 systemd-logind[1457]: Removed session 16. Aug 13 07:27:17.919199 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 38210 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:17.920366 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:17.924403 systemd-logind[1457]: New session 17 of user core. Aug 13 07:27:17.934848 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:27:18.144189 sshd[4157]: Connection closed by 10.0.0.1 port 38210 Aug 13 07:27:18.145268 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:18.151929 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:38210.service: Deactivated successfully. Aug 13 07:27:18.153349 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:27:18.154354 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:27:18.159227 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:38216.service - OpenSSH per-connection server daemon (10.0.0.1:38216). Aug 13 07:27:18.160608 systemd-logind[1457]: Removed session 17. Aug 13 07:27:18.199089 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 38216 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:18.200386 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:18.204162 systemd-logind[1457]: New session 18 of user core. Aug 13 07:27:18.215846 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:27:18.323756 sshd[4170]: Connection closed by 10.0.0.1 port 38216 Aug 13 07:27:18.324841 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:18.327811 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:38216.service: Deactivated successfully. Aug 13 07:27:18.329560 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:27:18.330217 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:27:18.331112 systemd-logind[1457]: Removed session 18. Aug 13 07:27:23.336555 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:45384.service - OpenSSH per-connection server daemon (10.0.0.1:45384). 
Aug 13 07:27:23.378024 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 45384 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:23.379094 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:23.382371 systemd-logind[1457]: New session 19 of user core. Aug 13 07:27:23.393824 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:27:23.501310 sshd[4188]: Connection closed by 10.0.0.1 port 45384 Aug 13 07:27:23.501645 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:23.504480 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:45384.service: Deactivated successfully. Aug 13 07:27:23.506516 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:27:23.507308 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:27:23.508086 systemd-logind[1457]: Removed session 19. Aug 13 07:27:28.513462 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:45394.service - OpenSSH per-connection server daemon (10.0.0.1:45394). Aug 13 07:27:28.554884 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 45394 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:28.555904 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:28.559170 systemd-logind[1457]: New session 20 of user core. Aug 13 07:27:28.564808 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:27:28.667748 sshd[4207]: Connection closed by 10.0.0.1 port 45394 Aug 13 07:27:28.667200 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:28.670581 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:45394.service: Deactivated successfully. Aug 13 07:27:28.672188 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:27:28.673038 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:27:28.673825 systemd-logind[1457]: Removed session 20. Aug 13 07:27:33.683148 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:46962.service - OpenSSH per-connection server daemon (10.0.0.1:46962). Aug 13 07:27:33.725365 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 46962 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:33.726653 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:33.732216 systemd-logind[1457]: New session 21 of user core. Aug 13 07:27:33.742366 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:27:33.850500 sshd[4222]: Connection closed by 10.0.0.1 port 46962 Aug 13 07:27:33.851055 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:33.865756 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:46962.service: Deactivated successfully. Aug 13 07:27:33.867286 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:27:33.867974 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:27:33.873155 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:46968.service - OpenSSH per-connection server daemon (10.0.0.1:46968). Aug 13 07:27:33.874808 systemd-logind[1457]: Removed session 21. 
Aug 13 07:27:33.912362 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 46968 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:33.913402 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:33.917602 systemd-logind[1457]: New session 22 of user core. Aug 13 07:27:33.926854 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:27:35.619525 containerd[1475]: time="2025-08-13T07:27:35.619426094Z" level=info msg="StopContainer for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" with timeout 30 (s)" Aug 13 07:27:35.621100 containerd[1475]: time="2025-08-13T07:27:35.619789716Z" level=info msg="Stop container \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" with signal terminated" Aug 13 07:27:35.632656 systemd[1]: cri-containerd-d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e.scope: Deactivated successfully. Aug 13 07:27:35.659654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e-rootfs.mount: Deactivated successfully. Aug 13 07:27:35.670367 containerd[1475]: time="2025-08-13T07:27:35.669914865Z" level=info msg="shim disconnected" id=d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e namespace=k8s.io Aug 13 07:27:35.670367 containerd[1475]: time="2025-08-13T07:27:35.670284567Z" level=warning msg="cleaning up after shim disconnected" id=d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e namespace=k8s.io Aug 13 07:27:35.670367 containerd[1475]: time="2025-08-13T07:27:35.670297566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:35.682639 containerd[1475]: time="2025-08-13T07:27:35.682613444Z" level=info msg="StopContainer for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" with timeout 2 (s)" Aug 13 07:27:35.683046 containerd[1475]: time="2025-08-13T07:27:35.683019544Z" level=info msg="Stop container \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" with signal terminated" Aug 13 07:27:35.689163 systemd-networkd[1398]: lxc_health: Link DOWN Aug 13 07:27:35.689171 systemd-networkd[1398]: lxc_health: Lost carrier Aug 13 07:27:35.705042 systemd[1]: cri-containerd-4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88.scope: Deactivated successfully. Aug 13 07:27:35.705350 systemd[1]: cri-containerd-4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88.scope: Consumed 6.326s CPU time, 122.7M memory peak, 212K read from disk, 12.9M written to disk. Aug 13 07:27:35.706113 containerd[1475]: time="2025-08-13T07:27:35.705965342Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:27:35.723270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88-rootfs.mount: Deactivated successfully. 
Aug 13 07:27:35.731342 containerd[1475]: time="2025-08-13T07:27:35.731260265Z" level=info msg="shim disconnected" id=4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88 namespace=k8s.io Aug 13 07:27:35.731342 containerd[1475]: time="2025-08-13T07:27:35.731311702Z" level=warning msg="cleaning up after shim disconnected" id=4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88 namespace=k8s.io Aug 13 07:27:35.731342 containerd[1475]: time="2025-08-13T07:27:35.731337541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:35.738735 containerd[1475]: time="2025-08-13T07:27:35.738584827Z" level=info msg="StopContainer for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" returns successfully" Aug 13 07:27:35.739443 containerd[1475]: time="2025-08-13T07:27:35.739284592Z" level=info msg="StopPodSandbox for \"a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b\"" Aug 13 07:27:35.739443 containerd[1475]: time="2025-08-13T07:27:35.739329790Z" level=info msg="Container to stop \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:27:35.741340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b-shm.mount: Deactivated successfully. Aug 13 07:27:35.758444 containerd[1475]: time="2025-08-13T07:27:35.758406537Z" level=info msg="StopContainer for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" returns successfully" Aug 13 07:27:35.759264 systemd[1]: cri-containerd-a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b.scope: Deactivated successfully. Aug 13 07:27:35.761235 containerd[1475]: time="2025-08-13T07:27:35.759538122Z" level=info msg="StopPodSandbox for \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\"" Aug 13 07:27:35.761235 containerd[1475]: time="2025-08-13T07:27:35.759586320Z" level=info msg="Container to stop \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:27:35.761235 containerd[1475]: time="2025-08-13T07:27:35.759598039Z" level=info msg="Container to stop \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:27:35.761235 containerd[1475]: time="2025-08-13T07:27:35.759612798Z" level=info msg="Container to stop \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:27:35.761235 containerd[1475]: time="2025-08-13T07:27:35.759622158Z" level=info msg="Container to stop \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:27:35.761235 containerd[1475]: time="2025-08-13T07:27:35.759630278Z" level=info msg="Container to stop \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 07:27:35.761618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251-shm.mount: Deactivated successfully. Aug 13 07:27:35.773091 systemd[1]: cri-containerd-73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251.scope: Deactivated successfully. 
Aug 13 07:27:35.798635 containerd[1475]: time="2025-08-13T07:27:35.797618620Z" level=info msg="shim disconnected" id=73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251 namespace=k8s.io Aug 13 07:27:35.798845 containerd[1475]: time="2025-08-13T07:27:35.798826521Z" level=warning msg="cleaning up after shim disconnected" id=73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251 namespace=k8s.io Aug 13 07:27:35.798908 containerd[1475]: time="2025-08-13T07:27:35.798888358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:35.798976 containerd[1475]: time="2025-08-13T07:27:35.797807731Z" level=info msg="shim disconnected" id=a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b namespace=k8s.io Aug 13 07:27:35.799029 containerd[1475]: time="2025-08-13T07:27:35.799016631Z" level=warning msg="cleaning up after shim disconnected" id=a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b namespace=k8s.io Aug 13 07:27:35.799075 containerd[1475]: time="2025-08-13T07:27:35.799064909Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:35.810506 containerd[1475]: time="2025-08-13T07:27:35.810469711Z" level=warning msg="cleanup warnings time=\"2025-08-13T07:27:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 07:27:35.811737 containerd[1475]: time="2025-08-13T07:27:35.811673732Z" level=info msg="TearDown network for sandbox \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" successfully" Aug 13 07:27:35.811841 containerd[1475]: time="2025-08-13T07:27:35.811825005Z" level=info msg="StopPodSandbox for \"73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251\" returns successfully" Aug 13 07:27:35.812946 containerd[1475]: time="2025-08-13T07:27:35.812918432Z" level=info msg="TearDown network for sandbox \"a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b\" successfully" Aug 13 07:27:35.813020 containerd[1475]: time="2025-08-13T07:27:35.813007907Z" level=info msg="StopPodSandbox for \"a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b\" returns successfully" Aug 13 07:27:35.863984 kubelet[2582]: I0813 07:27:35.863940 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-cgroup\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.864397 kubelet[2582]: I0813 07:27:35.864376 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsg7z\" (UniqueName: \"kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-kube-api-access-bsg7z\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.864481 kubelet[2582]: I0813 07:27:35.864469 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-etc-cni-netd\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.864547 kubelet[2582]: I0813 07:27:35.864537 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfbzd\" (UniqueName: 
\"kubernetes.io/projected/990f54a5-ee29-491e-9c2c-59758e4137ff-kube-api-access-wfbzd\") pod \"990f54a5-ee29-491e-9c2c-59758e4137ff\" (UID: \"990f54a5-ee29-491e-9c2c-59758e4137ff\") " Aug 13 07:27:35.864604 kubelet[2582]: I0813 07:27:35.864595 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-clustermesh-secrets\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.864718 kubelet[2582]: I0813 07:27:35.864677 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-bpf-maps\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868670 kubelet[2582]: I0813 07:27:35.868641 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/990f54a5-ee29-491e-9c2c-59758e4137ff-cilium-config-path\") pod \"990f54a5-ee29-491e-9c2c-59758e4137ff\" (UID: \"990f54a5-ee29-491e-9c2c-59758e4137ff\") " Aug 13 07:27:35.868747 kubelet[2582]: I0813 07:27:35.868675 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-lib-modules\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868747 kubelet[2582]: I0813 07:27:35.868723 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-kernel\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868747 kubelet[2582]: I0813 07:27:35.868744 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-config-path\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868813 kubelet[2582]: I0813 07:27:35.868761 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hubble-tls\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868813 kubelet[2582]: I0813 07:27:35.868778 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cni-path\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868813 kubelet[2582]: I0813 07:27:35.868792 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-net\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.868875 kubelet[2582]: I0813 07:27:35.868599 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.868875 kubelet[2582]: I0813 07:27:35.868833 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.869138 kubelet[2582]: I0813 07:27:35.869116 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.870275 kubelet[2582]: I0813 07:27:35.870195 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.874388 kubelet[2582]: I0813 07:27:35.874357 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:27:35.875130 kubelet[2582]: I0813 07:27:35.875085 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990f54a5-ee29-491e-9c2c-59758e4137ff-kube-api-access-wfbzd" (OuterVolumeSpecName: "kube-api-access-wfbzd") pod "990f54a5-ee29-491e-9c2c-59758e4137ff" (UID: "990f54a5-ee29-491e-9c2c-59758e4137ff"). InnerVolumeSpecName "kube-api-access-wfbzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:27:35.875831 kubelet[2582]: I0813 07:27:35.875792 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-kube-api-access-bsg7z" (OuterVolumeSpecName: "kube-api-access-bsg7z") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "kube-api-access-bsg7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:27:35.875892 kubelet[2582]: I0813 07:27:35.875837 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.875892 kubelet[2582]: I0813 07:27:35.875870 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cni-path" (OuterVolumeSpecName: "cni-path") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.875892 kubelet[2582]: I0813 07:27:35.875885 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.876742 kubelet[2582]: I0813 07:27:35.876122 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:27:35.877118 kubelet[2582]: I0813 07:27:35.877078 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:27:35.880281 kubelet[2582]: I0813 07:27:35.880249 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990f54a5-ee29-491e-9c2c-59758e4137ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "990f54a5-ee29-491e-9c2c-59758e4137ff" (UID: "990f54a5-ee29-491e-9c2c-59758e4137ff"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:27:35.969856 kubelet[2582]: I0813 07:27:35.969822 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-xtables-lock\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.969856 kubelet[2582]: I0813 07:27:35.969856 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hostproc\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969873 2582 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-run\") pod \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\" (UID: \"d48a423a-8ce2-4e3b-b08d-50a04ecd1944\") " Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969902 2582 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969912 2582 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969921 2582 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/990f54a5-ee29-491e-9c2c-59758e4137ff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969929 2582 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969937 2582 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969945 2582 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.969971 kubelet[2582]: I0813 07:27:35.969953 2582 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 kubelet[2582]: I0813 07:27:35.969961 2582 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 kubelet[2582]: I0813 07:27:35.969970 2582 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 
kubelet[2582]: I0813 07:27:35.969978 2582 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 kubelet[2582]: I0813 07:27:35.969986 2582 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfbzd\" (UniqueName: \"kubernetes.io/projected/990f54a5-ee29-491e-9c2c-59758e4137ff-kube-api-access-wfbzd\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 kubelet[2582]: I0813 07:27:35.969994 2582 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsg7z\" (UniqueName: \"kubernetes.io/projected/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-kube-api-access-bsg7z\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 kubelet[2582]: I0813 07:27:35.970003 2582 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:35.970133 kubelet[2582]: I0813 07:27:35.969947 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.970267 kubelet[2582]: I0813 07:27:35.969981 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hostproc" (OuterVolumeSpecName: "hostproc") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:35.970267 kubelet[2582]: I0813 07:27:35.970000 2582 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d48a423a-8ce2-4e3b-b08d-50a04ecd1944" (UID: "d48a423a-8ce2-4e3b-b08d-50a04ecd1944"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 07:27:36.070316 kubelet[2582]: I0813 07:27:36.070246 2582 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:36.070316 kubelet[2582]: I0813 07:27:36.070307 2582 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:36.070443 kubelet[2582]: I0813 07:27:36.070331 2582 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d48a423a-8ce2-4e3b-b08d-50a04ecd1944-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 13 07:27:36.630829 kubelet[2582]: I0813 07:27:36.630739 2582 scope.go:117] "RemoveContainer" containerID="4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88" Aug 13 07:27:36.635751 containerd[1475]: time="2025-08-13T07:27:36.633581287Z" level=info msg="RemoveContainer for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\"" Aug 13 07:27:36.637030 systemd[1]: Removed slice kubepods-burstable-podd48a423a_8ce2_4e3b_b08d_50a04ecd1944.slice - libcontainer container kubepods-burstable-podd48a423a_8ce2_4e3b_b08d_50a04ecd1944.slice. Aug 13 07:27:36.637316 systemd[1]: kubepods-burstable-podd48a423a_8ce2_4e3b_b08d_50a04ecd1944.slice: Consumed 6.462s CPU time, 123M memory peak, 228K read from disk, 12.9M written to disk. Aug 13 07:27:36.641210 containerd[1475]: time="2025-08-13T07:27:36.641008104Z" level=info msg="RemoveContainer for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" returns successfully" Aug 13 07:27:36.641949 kubelet[2582]: I0813 07:27:36.641928 2582 scope.go:117] "RemoveContainer" containerID="8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559" Aug 13 07:27:36.642601 systemd[1]: Removed slice kubepods-besteffort-pod990f54a5_ee29_491e_9c2c_59758e4137ff.slice - libcontainer container kubepods-besteffort-pod990f54a5_ee29_491e_9c2c_59758e4137ff.slice. Aug 13 07:27:36.643984 containerd[1475]: time="2025-08-13T07:27:36.643954848Z" level=info msg="RemoveContainer for \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\"" Aug 13 07:27:36.646574 containerd[1475]: time="2025-08-13T07:27:36.646452253Z" level=info msg="RemoveContainer for \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\" returns successfully" Aug 13 07:27:36.646769 kubelet[2582]: I0813 07:27:36.646733 2582 scope.go:117] "RemoveContainer" containerID="54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310" Aug 13 07:27:36.647553 containerd[1475]: time="2025-08-13T07:27:36.647516363Z" level=info msg="RemoveContainer for \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\"" Aug 13 07:27:36.652729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6cf4b50768725e8a1927c0fdaa7b14239e08b53e2edb0d9df151cd1274e277b-rootfs.mount: Deactivated successfully. Aug 13 07:27:36.652822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73c9a8127e5265811cec7f7df753518c8f0fafad216558ed913492cfe989e251-rootfs.mount: Deactivated successfully. Aug 13 07:27:36.652876 systemd[1]: var-lib-kubelet-pods-990f54a5\x2dee29\x2d491e\x2d9c2c\x2d59758e4137ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfbzd.mount: Deactivated successfully. 
Aug 13 07:27:36.652936 systemd[1]: var-lib-kubelet-pods-d48a423a\x2d8ce2\x2d4e3b\x2db08d\x2d50a04ecd1944-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbsg7z.mount: Deactivated successfully. Aug 13 07:27:36.652984 systemd[1]: var-lib-kubelet-pods-d48a423a\x2d8ce2\x2d4e3b\x2db08d\x2d50a04ecd1944-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 07:27:36.653048 systemd[1]: var-lib-kubelet-pods-d48a423a\x2d8ce2\x2d4e3b\x2db08d\x2d50a04ecd1944-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 07:27:36.658801 containerd[1475]: time="2025-08-13T07:27:36.658506576Z" level=info msg="RemoveContainer for \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\" returns successfully" Aug 13 07:27:36.659126 kubelet[2582]: I0813 07:27:36.659085 2582 scope.go:117] "RemoveContainer" containerID="a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f" Aug 13 07:27:36.661283 containerd[1475]: time="2025-08-13T07:27:36.661214011Z" level=info msg="RemoveContainer for \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\"" Aug 13 07:27:36.663738 containerd[1475]: time="2025-08-13T07:27:36.663684817Z" level=info msg="RemoveContainer for \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\" returns successfully" Aug 13 07:27:36.663982 kubelet[2582]: I0813 07:27:36.663958 2582 scope.go:117] "RemoveContainer" containerID="12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0" Aug 13 07:27:36.665111 containerd[1475]: time="2025-08-13T07:27:36.664994636Z" level=info msg="RemoveContainer for \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\"" Aug 13 07:27:36.667478 containerd[1475]: time="2025-08-13T07:27:36.667395845Z" level=info msg="RemoveContainer for \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\" returns successfully" Aug 13 07:27:36.667752 kubelet[2582]: I0813 07:27:36.667643 2582 scope.go:117] "RemoveContainer" containerID="4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88" Aug 13 07:27:36.667927 containerd[1475]: time="2025-08-13T07:27:36.667829305Z" level=error msg="ContainerStatus for \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\": not found" Aug 13 07:27:36.674526 kubelet[2582]: E0813 07:27:36.674502 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\": not found" containerID="4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88" Aug 13 07:27:36.674941 kubelet[2582]: I0813 07:27:36.674626 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88"} err="failed to get container status \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ae82a40710a374a7793aa7eb938079c60d8adefcbdc1fc619b2a61a19678a88\": not found" Aug 13 07:27:36.674941 kubelet[2582]: I0813 07:27:36.674724 2582 scope.go:117] "RemoveContainer" containerID="8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559" Aug 13 07:27:36.675032 containerd[1475]: 
time="2025-08-13T07:27:36.674883179Z" level=error msg="ContainerStatus for \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\": not found" Aug 13 07:27:36.675235 kubelet[2582]: E0813 07:27:36.675125 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\": not found" containerID="8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559" Aug 13 07:27:36.675235 kubelet[2582]: I0813 07:27:36.675150 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559"} err="failed to get container status \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b58371dc0fe3d0c75b5438551922ad820f5f76e91fc6db175f841b2403d6559\": not found" Aug 13 07:27:36.675235 kubelet[2582]: I0813 07:27:36.675164 2582 scope.go:117] "RemoveContainer" containerID="54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310" Aug 13 07:27:36.675615 containerd[1475]: time="2025-08-13T07:27:36.675503911Z" level=error msg="ContainerStatus for \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\": not found" Aug 13 07:27:36.675766 kubelet[2582]: E0813 07:27:36.675743 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\": not found" containerID="54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310" Aug 13 07:27:36.675831 kubelet[2582]: I0813 07:27:36.675769 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310"} err="failed to get container status \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\": rpc error: code = NotFound desc = an error occurred when try to find container \"54d88eb9c8403761fefbbcd4b64bc5c389dad921e7bb49d25ace91abe4f78310\": not found" Aug 13 07:27:36.675831 kubelet[2582]: I0813 07:27:36.675785 2582 scope.go:117] "RemoveContainer" containerID="a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f" Aug 13 07:27:36.676013 containerd[1475]: time="2025-08-13T07:27:36.675943210Z" level=error msg="ContainerStatus for \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\": not found" Aug 13 07:27:36.676378 kubelet[2582]: E0813 07:27:36.676118 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\": not found" containerID="a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f" Aug 13 07:27:36.676378 kubelet[2582]: I0813 07:27:36.676141 2582 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f"} err="failed to get container status \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6c74529e1e9eeecf35d0a140f7ce4fd2cdb6479d2ba1edcb21ddf653bf98f0f\": not found" Aug 13 07:27:36.676378 kubelet[2582]: I0813 07:27:36.676167 2582 scope.go:117] "RemoveContainer" containerID="12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0" Aug 13 07:27:36.676478 containerd[1475]: time="2025-08-13T07:27:36.676321873Z" level=error msg="ContainerStatus for \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\": not found" Aug 13 07:27:36.676683 kubelet[2582]: E0813 07:27:36.676571 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\": not found" containerID="12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0" Aug 13 07:27:36.676683 kubelet[2582]: I0813 07:27:36.676631 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0"} err="failed to get container status \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\": rpc error: code = NotFound desc = an error occurred when try to find container \"12e653b43b1557ec4581fae42d460fbb5f473a6d75c86017335a890b6a9bbac0\": not found" Aug 13 07:27:36.676683 kubelet[2582]: I0813 07:27:36.676648 2582 scope.go:117] "RemoveContainer" containerID="d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e" Aug 13 07:27:36.677887 containerd[1475]: time="2025-08-13T07:27:36.677850082Z" level=info msg="RemoveContainer for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\"" Aug 13 07:27:36.680005 containerd[1475]: time="2025-08-13T07:27:36.679973944Z" level=info msg="RemoveContainer for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" returns successfully" Aug 13 07:27:36.680192 kubelet[2582]: I0813 07:27:36.680123 2582 scope.go:117] "RemoveContainer" containerID="d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e" Aug 13 07:27:36.680541 containerd[1475]: time="2025-08-13T07:27:36.680373326Z" level=error msg="ContainerStatus for \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\": not found" Aug 13 07:27:36.680592 kubelet[2582]: E0813 07:27:36.680505 2582 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\": not found" containerID="d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e" Aug 13 07:27:36.680699 kubelet[2582]: I0813 07:27:36.680668 2582 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e"} err="failed to 
get container status \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3867db8e7fccf21355924d0eb1efb2c76e15537bbb62905a1676cf90b841b9e\": not found" Aug 13 07:27:37.477960 kubelet[2582]: I0813 07:27:37.477146 2582 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990f54a5-ee29-491e-9c2c-59758e4137ff" path="/var/lib/kubelet/pods/990f54a5-ee29-491e-9c2c-59758e4137ff/volumes" Aug 13 07:27:37.477960 kubelet[2582]: I0813 07:27:37.477526 2582 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" path="/var/lib/kubelet/pods/d48a423a-8ce2-4e3b-b08d-50a04ecd1944/volumes" Aug 13 07:27:37.525971 kubelet[2582]: E0813 07:27:37.525922 2582 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 07:27:37.585738 sshd[4238]: Connection closed by 10.0.0.1 port 46968 Aug 13 07:27:37.586220 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:37.593827 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:46968.service: Deactivated successfully. Aug 13 07:27:37.595423 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:27:37.595615 systemd[1]: session-22.scope: Consumed 1.042s CPU time, 26.9M memory peak. Aug 13 07:27:37.596835 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:27:37.607936 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:46974.service - OpenSSH per-connection server daemon (10.0.0.1:46974). Aug 13 07:27:37.609253 systemd-logind[1457]: Removed session 22. Aug 13 07:27:37.646180 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 46974 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:37.647214 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:37.650658 systemd-logind[1457]: New session 23 of user core. Aug 13 07:27:37.658831 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:27:38.281735 sshd[4407]: Connection closed by 10.0.0.1 port 46974 Aug 13 07:27:38.282991 sshd-session[4404]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:38.297981 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:46990.service - OpenSSH per-connection server daemon (10.0.0.1:46990). Aug 13 07:27:38.299215 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:46974.service: Deactivated successfully. Aug 13 07:27:38.302447 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:27:38.311563 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. 
Aug 13 07:27:38.316890 kubelet[2582]: E0813 07:27:38.313812 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" containerName="apply-sysctl-overwrites" Aug 13 07:27:38.316890 kubelet[2582]: E0813 07:27:38.313838 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="990f54a5-ee29-491e-9c2c-59758e4137ff" containerName="cilium-operator" Aug 13 07:27:38.316890 kubelet[2582]: E0813 07:27:38.313845 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" containerName="cilium-agent" Aug 13 07:27:38.316890 kubelet[2582]: E0813 07:27:38.313851 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" containerName="mount-cgroup" Aug 13 07:27:38.316890 kubelet[2582]: E0813 07:27:38.313856 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" containerName="mount-bpf-fs" Aug 13 07:27:38.316890 kubelet[2582]: E0813 07:27:38.313862 2582 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" containerName="clean-cilium-state" Aug 13 07:27:38.316890 kubelet[2582]: I0813 07:27:38.313999 2582 memory_manager.go:354] "RemoveStaleState removing state" podUID="990f54a5-ee29-491e-9c2c-59758e4137ff" containerName="cilium-operator" Aug 13 07:27:38.316890 kubelet[2582]: I0813 07:27:38.314012 2582 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48a423a-8ce2-4e3b-b08d-50a04ecd1944" containerName="cilium-agent" Aug 13 07:27:38.314145 systemd-logind[1457]: Removed session 23. Aug 13 07:27:38.326723 systemd[1]: Created slice kubepods-burstable-pod44f900f5_053e_4db5_95a7_819ac786c8be.slice - libcontainer container kubepods-burstable-pod44f900f5_053e_4db5_95a7_819ac786c8be.slice. Aug 13 07:27:38.359868 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 46990 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:38.360960 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:38.365253 systemd-logind[1457]: New session 24 of user core. Aug 13 07:27:38.373864 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 13 07:27:38.383597 kubelet[2582]: I0813 07:27:38.383553 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-host-proc-sys-net\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383597 kubelet[2582]: I0813 07:27:38.383601 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-cilium-run\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383877 kubelet[2582]: I0813 07:27:38.383623 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44f900f5-053e-4db5-95a7-819ac786c8be-clustermesh-secrets\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383877 kubelet[2582]: I0813 07:27:38.383642 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-host-proc-sys-kernel\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383877 kubelet[2582]: I0813 07:27:38.383660 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44f900f5-053e-4db5-95a7-819ac786c8be-cilium-ipsec-secrets\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383877 kubelet[2582]: I0813 07:27:38.383677 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-cilium-cgroup\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383877 kubelet[2582]: I0813 07:27:38.383718 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-cni-path\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.383877 kubelet[2582]: I0813 07:27:38.383742 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-etc-cni-netd\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384022 kubelet[2582]: I0813 07:27:38.383761 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-lib-modules\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384022 kubelet[2582]: I0813 07:27:38.383804 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-bpf-maps\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384022 kubelet[2582]: I0813 07:27:38.383836 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-hostproc\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384022 kubelet[2582]: I0813 07:27:38.383864 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44f900f5-053e-4db5-95a7-819ac786c8be-xtables-lock\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384022 kubelet[2582]: I0813 07:27:38.383883 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44f900f5-053e-4db5-95a7-819ac786c8be-cilium-config-path\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384022 kubelet[2582]: I0813 07:27:38.383901 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44f900f5-053e-4db5-95a7-819ac786c8be-hubble-tls\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.384139 kubelet[2582]: I0813 07:27:38.383917 2582 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59xgx\" (UniqueName: \"kubernetes.io/projected/44f900f5-053e-4db5-95a7-819ac786c8be-kube-api-access-59xgx\") pod \"cilium-zff64\" (UID: \"44f900f5-053e-4db5-95a7-819ac786c8be\") " pod="kube-system/cilium-zff64" Aug 13 07:27:38.423887 sshd[4421]: Connection closed by 10.0.0.1 port 46990 Aug 13 07:27:38.424499 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:38.434883 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:46990.service: Deactivated successfully. Aug 13 07:27:38.436311 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:27:38.436971 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:27:38.438660 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:46996.service - OpenSSH per-connection server daemon (10.0.0.1:46996). Aug 13 07:27:38.439489 systemd-logind[1457]: Removed session 24. Aug 13 07:27:38.480357 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 46996 ssh2: RSA SHA256:WOUoNnkS2a4WwtuEwg7LyHAfw0SfFAvW0SEvwcNBN8I Aug 13 07:27:38.481500 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:27:38.486799 systemd-logind[1457]: New session 25 of user core. Aug 13 07:27:38.493468 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 13 07:27:38.630576 kubelet[2582]: E0813 07:27:38.630517 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:38.631071 containerd[1475]: time="2025-08-13T07:27:38.631034488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zff64,Uid:44f900f5-053e-4db5-95a7-819ac786c8be,Namespace:kube-system,Attempt:0,}" Aug 13 07:27:38.649358 containerd[1475]: time="2025-08-13T07:27:38.649253701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:27:38.649358 containerd[1475]: time="2025-08-13T07:27:38.649304379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:27:38.649358 containerd[1475]: time="2025-08-13T07:27:38.649315178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:27:38.649358 containerd[1475]: time="2025-08-13T07:27:38.649378175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:27:38.680879 systemd[1]: Started cri-containerd-0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85.scope - libcontainer container 0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85. Aug 13 07:27:38.708413 containerd[1475]: time="2025-08-13T07:27:38.708372635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zff64,Uid:44f900f5-053e-4db5-95a7-819ac786c8be,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\"" Aug 13 07:27:38.709094 kubelet[2582]: E0813 07:27:38.709073 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:38.710744 containerd[1475]: time="2025-08-13T07:27:38.710715419Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 07:27:38.720338 containerd[1475]: time="2025-08-13T07:27:38.720234308Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c\"" Aug 13 07:27:38.720864 containerd[1475]: time="2025-08-13T07:27:38.720831684Z" level=info msg="StartContainer for \"b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c\"" Aug 13 07:27:38.746002 systemd[1]: Started cri-containerd-b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c.scope - libcontainer container b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c. Aug 13 07:27:38.766967 containerd[1475]: time="2025-08-13T07:27:38.766919553Z" level=info msg="StartContainer for \"b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c\" returns successfully" Aug 13 07:27:38.793636 systemd[1]: cri-containerd-b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c.scope: Deactivated successfully. 
Aug 13 07:27:38.819515 containerd[1475]: time="2025-08-13T07:27:38.819447477Z" level=info msg="shim disconnected" id=b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c namespace=k8s.io Aug 13 07:27:38.819515 containerd[1475]: time="2025-08-13T07:27:38.819503875Z" level=warning msg="cleaning up after shim disconnected" id=b9f62c36ec8ebeee70b3ab549d2728b5273c957ba140e11330ee056e5040533c namespace=k8s.io Aug 13 07:27:38.819515 containerd[1475]: time="2025-08-13T07:27:38.819513075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:39.136294 kubelet[2582]: I0813 07:27:39.136244 2582 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T07:27:39Z","lastTransitionTime":"2025-08-13T07:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 07:27:39.643450 kubelet[2582]: E0813 07:27:39.643414 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:39.652293 containerd[1475]: time="2025-08-13T07:27:39.652146111Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 07:27:39.662420 containerd[1475]: time="2025-08-13T07:27:39.662297559Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550\"" Aug 13 07:27:39.663321 containerd[1475]: time="2025-08-13T07:27:39.663290481Z" level=info msg="StartContainer for \"248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550\"" Aug 13 07:27:39.692886 systemd[1]: Started cri-containerd-248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550.scope - libcontainer container 248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550. Aug 13 07:27:39.716004 containerd[1475]: time="2025-08-13T07:27:39.715647862Z" level=info msg="StartContainer for \"248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550\" returns successfully" Aug 13 07:27:39.724323 systemd[1]: cri-containerd-248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550.scope: Deactivated successfully. 
Aug 13 07:27:39.746253 containerd[1475]: time="2025-08-13T07:27:39.746106727Z" level=info msg="shim disconnected" id=248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550 namespace=k8s.io Aug 13 07:27:39.746253 containerd[1475]: time="2025-08-13T07:27:39.746165285Z" level=warning msg="cleaning up after shim disconnected" id=248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550 namespace=k8s.io Aug 13 07:27:39.746253 containerd[1475]: time="2025-08-13T07:27:39.746174244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:40.474727 kubelet[2582]: E0813 07:27:40.474674 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:40.495419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-248d764160ea4700065a0c7ec530c1a1e7e5dcc59e8d97d361719bccc3ee2550-rootfs.mount: Deactivated successfully. Aug 13 07:27:40.645363 kubelet[2582]: E0813 07:27:40.645321 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:40.648232 containerd[1475]: time="2025-08-13T07:27:40.648190918Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 07:27:40.663819 containerd[1475]: time="2025-08-13T07:27:40.663775714Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009\"" Aug 13 07:27:40.664402 containerd[1475]: time="2025-08-13T07:27:40.664376212Z" level=info msg="StartContainer for \"8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009\"" Aug 13 07:27:40.690865 systemd[1]: Started cri-containerd-8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009.scope - libcontainer container 8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009. Aug 13 07:27:40.718623 systemd[1]: cri-containerd-8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009.scope: Deactivated successfully. Aug 13 07:27:40.719325 containerd[1475]: time="2025-08-13T07:27:40.719042954Z" level=info msg="StartContainer for \"8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009\" returns successfully" Aug 13 07:27:40.740844 containerd[1475]: time="2025-08-13T07:27:40.740651332Z" level=info msg="shim disconnected" id=8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009 namespace=k8s.io Aug 13 07:27:40.740844 containerd[1475]: time="2025-08-13T07:27:40.740759889Z" level=warning msg="cleaning up after shim disconnected" id=8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009 namespace=k8s.io Aug 13 07:27:40.740844 containerd[1475]: time="2025-08-13T07:27:40.740769088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:41.495528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ba915121c0111a425fad5492e6fe42cb1ec5979640d48d6d09f7649273e3009-rootfs.mount: Deactivated successfully. 
Aug 13 07:27:41.648504 kubelet[2582]: E0813 07:27:41.648320 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:41.651116 containerd[1475]: time="2025-08-13T07:27:41.651068371Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 07:27:41.663591 containerd[1475]: time="2025-08-13T07:27:41.663545428Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e\"" Aug 13 07:27:41.664639 containerd[1475]: time="2025-08-13T07:27:41.664021932Z" level=info msg="StartContainer for \"5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e\"" Aug 13 07:27:41.701876 systemd[1]: Started cri-containerd-5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e.scope - libcontainer container 5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e. Aug 13 07:27:41.721110 systemd[1]: cri-containerd-5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e.scope: Deactivated successfully. Aug 13 07:27:41.726229 containerd[1475]: time="2025-08-13T07:27:41.725835238Z" level=info msg="StartContainer for \"5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e\" returns successfully" Aug 13 07:27:41.742814 containerd[1475]: time="2025-08-13T07:27:41.734818774Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44f900f5_053e_4db5_95a7_819ac786c8be.slice/cri-containerd-5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e.scope/memory.events\": no such file or directory" Aug 13 07:27:41.748543 containerd[1475]: time="2025-08-13T07:27:41.748304637Z" level=info msg="shim disconnected" id=5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e namespace=k8s.io Aug 13 07:27:41.748543 containerd[1475]: time="2025-08-13T07:27:41.748359515Z" level=warning msg="cleaning up after shim disconnected" id=5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e namespace=k8s.io Aug 13 07:27:41.748543 containerd[1475]: time="2025-08-13T07:27:41.748367555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:27:42.495419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5faf23109c3c80775127d9a5abaa9587c82822ba8ca6bdc4d5662731abe2c80e-rootfs.mount: Deactivated successfully. 
Aug 13 07:27:42.526872 kubelet[2582]: E0813 07:27:42.526834 2582 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 07:27:42.652603 kubelet[2582]: E0813 07:27:42.652566 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:42.654366 containerd[1475]: time="2025-08-13T07:27:42.654331087Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 07:27:42.678105 containerd[1475]: time="2025-08-13T07:27:42.678060376Z" level=info msg="CreateContainer within sandbox \"0ae5168da48ee8fb4f2850e978b19fff0967474bf0f3322ae752b0a036d74b85\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92a7a9547ba8ddfba3a4c10d13531bf28b15cbf1efbcd13980d9d25c8cbe23fa\"" Aug 13 07:27:42.679001 containerd[1475]: time="2025-08-13T07:27:42.678730635Z" level=info msg="StartContainer for \"92a7a9547ba8ddfba3a4c10d13531bf28b15cbf1efbcd13980d9d25c8cbe23fa\"" Aug 13 07:27:42.702843 systemd[1]: Started cri-containerd-92a7a9547ba8ddfba3a4c10d13531bf28b15cbf1efbcd13980d9d25c8cbe23fa.scope - libcontainer container 92a7a9547ba8ddfba3a4c10d13531bf28b15cbf1efbcd13980d9d25c8cbe23fa. Aug 13 07:27:42.726747 containerd[1475]: time="2025-08-13T07:27:42.726681358Z" level=info msg="StartContainer for \"92a7a9547ba8ddfba3a4c10d13531bf28b15cbf1efbcd13980d9d25c8cbe23fa\" returns successfully" Aug 13 07:27:42.993818 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Aug 13 07:27:43.656332 kubelet[2582]: E0813 07:27:43.656186 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:43.678766 kubelet[2582]: I0813 07:27:43.678446 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zff64" podStartSLOduration=5.678432316 podStartE2EDuration="5.678432316s" podCreationTimestamp="2025-08-13 07:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:27:43.66948762 +0000 UTC m=+76.282507021" watchObservedRunningTime="2025-08-13 07:27:43.678432316 +0000 UTC m=+76.291451717" Aug 13 07:27:44.658432 kubelet[2582]: E0813 07:27:44.658336 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:45.851210 systemd-networkd[1398]: lxc_health: Link UP Aug 13 07:27:45.851460 systemd-networkd[1398]: lxc_health: Gained carrier Aug 13 07:27:46.634628 kubelet[2582]: E0813 07:27:46.633712 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:46.665832 kubelet[2582]: E0813 07:27:46.665733 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:46.926941 systemd[1]: 
run-containerd-runc-k8s.io-92a7a9547ba8ddfba3a4c10d13531bf28b15cbf1efbcd13980d9d25c8cbe23fa-runc.TqDtg9.mount: Deactivated successfully. Aug 13 07:27:47.402820 systemd-networkd[1398]: lxc_health: Gained IPv6LL Aug 13 07:27:47.667821 kubelet[2582]: E0813 07:27:47.667515 2582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 07:27:51.192631 sshd[4434]: Connection closed by 10.0.0.1 port 46996 Aug 13 07:27:51.193415 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Aug 13 07:27:51.196571 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit. Aug 13 07:27:51.196737 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:46996.service: Deactivated successfully. Aug 13 07:27:51.198951 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 07:27:51.201095 systemd-logind[1457]: Removed session 25.