Aug 19 00:22:45.879748 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 19 00:22:45.879771 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Aug 18 22:15:14 -00 2025
Aug 19 00:22:45.879781 kernel: KASLR enabled
Aug 19 00:22:45.879787 kernel: efi: EFI v2.7 by EDK II
Aug 19 00:22:45.879793 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Aug 19 00:22:45.879799 kernel: random: crng init done
Aug 19 00:22:45.879806 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Aug 19 00:22:45.879812 kernel: secureboot: Secure boot enabled
Aug 19 00:22:45.879818 kernel: ACPI: Early table checksum verification disabled
Aug 19 00:22:45.879825 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Aug 19 00:22:45.879831 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 19 00:22:45.879837 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879843 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879849 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879856 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879864 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879870 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879877 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879883 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879889 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:22:45.879896 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 19 00:22:45.879902 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 19 00:22:45.879908 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:22:45.879915 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Aug 19 00:22:45.879921 kernel: Zone ranges:
Aug 19 00:22:45.879929 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:22:45.879935 kernel:   DMA32    empty
Aug 19 00:22:45.879942 kernel:   Normal   empty
Aug 19 00:22:45.879948 kernel:   Device   empty
Aug 19 00:22:45.879954 kernel: Movable zone start for each node
Aug 19 00:22:45.879961 kernel: Early memory node ranges
Aug 19 00:22:45.879967 kernel:   node   0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Aug 19 00:22:45.879974 kernel:   node   0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Aug 19 00:22:45.879981 kernel:   node   0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Aug 19 00:22:45.879987 kernel:   node   0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Aug 19 00:22:45.879993 kernel:   node   0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Aug 19 00:22:45.879999 kernel:   node   0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Aug 19 00:22:45.880007 kernel:   node   0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Aug 19 00:22:45.880014 kernel:   node   0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Aug 19 00:22:45.880020 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 19 00:22:45.880031 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:22:45.880038 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 19 00:22:45.880051 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Aug 19 00:22:45.880061 kernel: psci: probing for conduit method from ACPI.
Aug 19 00:22:45.880070 kernel: psci: PSCIv1.1 detected in firmware.
Aug 19 00:22:45.880080 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 19 00:22:45.880087 kernel: psci: Trusted OS migration not required
Aug 19 00:22:45.880095 kernel: psci: SMC Calling Convention v1.1
Aug 19 00:22:45.880103 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 19 00:22:45.880109 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 19 00:22:45.880116 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 19 00:22:45.880123 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 19 00:22:45.880130 kernel: Detected PIPT I-cache on CPU0
Aug 19 00:22:45.880140 kernel: CPU features: detected: GIC system register CPU interface
Aug 19 00:22:45.880147 kernel: CPU features: detected: Spectre-v4
Aug 19 00:22:45.880153 kernel: CPU features: detected: Spectre-BHB
Aug 19 00:22:45.880160 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 19 00:22:45.880167 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 19 00:22:45.880174 kernel: CPU features: detected: ARM erratum 1418040
Aug 19 00:22:45.880181 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 19 00:22:45.880188 kernel: alternatives: applying boot alternatives
Aug 19 00:22:45.880196 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:22:45.880203 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 19 00:22:45.880210 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 19 00:22:45.880218 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 19 00:22:45.880225 kernel: Fallback order for Node 0: 0
Aug 19 00:22:45.880232 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 19 00:22:45.880239 kernel: Policy zone: DMA
Aug 19 00:22:45.880245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 19 00:22:45.880252 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 19 00:22:45.880259 kernel: software IO TLB: area num 4.
Aug 19 00:22:45.880266 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 19 00:22:45.880273 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Aug 19 00:22:45.880280 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 19 00:22:45.880287 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 19 00:22:45.880297 kernel: rcu:     RCU event tracing is enabled.
Aug 19 00:22:45.880306 kernel: rcu:     RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 19 00:22:45.880313 kernel:  Trampoline variant of Tasks RCU enabled.
Aug 19 00:22:45.880320 kernel:  Tracing variant of Tasks RCU enabled.
Aug 19 00:22:45.880327 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 00:22:45.880334 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 19 00:22:45.880340 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:22:45.880347 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:22:45.880354 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 19 00:22:45.880361 kernel: GICv3: 256 SPIs implemented
Aug 19 00:22:45.880367 kernel: GICv3: 0 Extended SPIs implemented
Aug 19 00:22:45.880374 kernel: Root IRQ handler: gic_handle_irq
Aug 19 00:22:45.880393 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 19 00:22:45.880401 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 19 00:22:45.880407 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 19 00:22:45.880414 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 19 00:22:45.880421 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 19 00:22:45.880428 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 19 00:22:45.880435 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 19 00:22:45.880443 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 19 00:22:45.880450 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 19 00:22:45.880457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:22:45.880464 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 19 00:22:45.880471 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 19 00:22:45.880480 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 19 00:22:45.880487 kernel: arm-pv: using stolen time PV
Aug 19 00:22:45.880494 kernel: Console: colour dummy device 80x25
Aug 19 00:22:45.880501 kernel: ACPI: Core revision 20240827
Aug 19 00:22:45.880508 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 19 00:22:45.880515 kernel: pid_max: default: 32768 minimum: 301
Aug 19 00:22:45.880522 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 19 00:22:45.880530 kernel: landlock: Up and running.
Aug 19 00:22:45.880536 kernel: SELinux:  Initializing.
Aug 19 00:22:45.880543 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:22:45.880552 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:22:45.880559 kernel: rcu: Hierarchical SRCU implementation.
Aug 19 00:22:45.880566 kernel: rcu:     Max phase no-delay instances is 400.
Aug 19 00:22:45.880574 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 19 00:22:45.880581 kernel: Remapping and enabling EFI services.
Aug 19 00:22:45.880588 kernel: smp: Bringing up secondary CPUs ...
Aug 19 00:22:45.880595 kernel: Detected PIPT I-cache on CPU1
Aug 19 00:22:45.880602 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 19 00:22:45.880609 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 19 00:22:45.880623 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:22:45.880631 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 19 00:22:45.880640 kernel: Detected PIPT I-cache on CPU2
Aug 19 00:22:45.880648 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 19 00:22:45.880661 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 19 00:22:45.880670 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:22:45.880678 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 19 00:22:45.880686 kernel: Detected PIPT I-cache on CPU3
Aug 19 00:22:45.880696 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 19 00:22:45.880704 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 19 00:22:45.880712 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:22:45.880719 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 19 00:22:45.880726 kernel: smp: Brought up 1 node, 4 CPUs
Aug 19 00:22:45.880733 kernel: SMP: Total of 4 processors activated.
Aug 19 00:22:45.880741 kernel: CPU: All CPU(s) started at EL1
Aug 19 00:22:45.880748 kernel: CPU features: detected: 32-bit EL0 Support
Aug 19 00:22:45.880756 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 19 00:22:45.880765 kernel: CPU features: detected: Common not Private translations
Aug 19 00:22:45.880772 kernel: CPU features: detected: CRC32 instructions
Aug 19 00:22:45.880779 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 19 00:22:45.880787 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 19 00:22:45.880794 kernel: CPU features: detected: LSE atomic instructions
Aug 19 00:22:45.880801 kernel: CPU features: detected: Privileged Access Never
Aug 19 00:22:45.880809 kernel: CPU features: detected: RAS Extension Support
Aug 19 00:22:45.880816 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 19 00:22:45.880823 kernel: alternatives: applying system-wide alternatives
Aug 19 00:22:45.880832 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 19 00:22:45.880840 kernel: Memory: 2422436K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 127516K reserved, 16384K cma-reserved)
Aug 19 00:22:45.880847 kernel: devtmpfs: initialized
Aug 19 00:22:45.880855 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 19 00:22:45.880862 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 19 00:22:45.880870 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 19 00:22:45.880877 kernel: 0 pages in range for non-PLT usage
Aug 19 00:22:45.880884 kernel: 508576 pages in range for PLT usage
Aug 19 00:22:45.880891 kernel: pinctrl core: initialized pinctrl subsystem
Aug 19 00:22:45.880900 kernel: SMBIOS 3.0.0 present.
Aug 19 00:22:45.880908 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 19 00:22:45.880915 kernel: DMI: Memory slots populated: 1/1
Aug 19 00:22:45.880922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 19 00:22:45.880929 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 19 00:22:45.880937 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 19 00:22:45.880944 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 19 00:22:45.880952 kernel: audit: initializing netlink subsys (disabled)
Aug 19 00:22:45.880959 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Aug 19 00:22:45.880968 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 19 00:22:45.880975 kernel: cpuidle: using governor menu
Aug 19 00:22:45.880982 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 19 00:22:45.880990 kernel: ASID allocator initialised with 32768 entries
Aug 19 00:22:45.880997 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 19 00:22:45.881005 kernel: Serial: AMBA PL011 UART driver
Aug 19 00:22:45.881012 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 19 00:22:45.881020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 19 00:22:45.881030 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 19 00:22:45.881040 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 19 00:22:45.881047 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 19 00:22:45.881057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 19 00:22:45.881065 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 19 00:22:45.881074 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 19 00:22:45.881082 kernel: ACPI: Added _OSI(Module Device)
Aug 19 00:22:45.881091 kernel: ACPI: Added _OSI(Processor Device)
Aug 19 00:22:45.881098 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 19 00:22:45.881108 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 19 00:22:45.881117 kernel: ACPI: Interpreter enabled
Aug 19 00:22:45.881125 kernel: ACPI: Using GIC for interrupt routing
Aug 19 00:22:45.881132 kernel: ACPI: MCFG table detected, 1 entries
Aug 19 00:22:45.881139 kernel: ACPI: CPU0 has been hot-added
Aug 19 00:22:45.881146 kernel: ACPI: CPU1 has been hot-added
Aug 19 00:22:45.881154 kernel: ACPI: CPU2 has been hot-added
Aug 19 00:22:45.881161 kernel: ACPI: CPU3 has been hot-added
Aug 19 00:22:45.881168 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 19 00:22:45.881176 kernel: printk: legacy console [ttyAMA0] enabled
Aug 19 00:22:45.881185 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 19 00:22:45.881335 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 19 00:22:45.881428 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 19 00:22:45.881497 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 19 00:22:45.881559 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 19 00:22:45.881619 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 19 00:22:45.881628 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 19 00:22:45.881639 kernel: PCI host bridge to bus 0000:00
Aug 19 00:22:45.881718 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 19 00:22:45.881781 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 19 00:22:45.881866 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 19 00:22:45.881927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 19 00:22:45.882017 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 19 00:22:45.882109 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 19 00:22:45.882181 kernel: pci 0000:00:01.0: BAR 0 [io  0x0000-0x001f]
Aug 19 00:22:45.882252 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 19 00:22:45.883478 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 19 00:22:45.883584 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 19 00:22:45.883659 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 19 00:22:45.883735 kernel: pci 0000:00:01.0: BAR 0 [io  0x1000-0x101f]: assigned
Aug 19 00:22:45.883799 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 19 00:22:45.883865 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Aug 19 00:22:45.883921 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 19 00:22:45.883931 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 19 00:22:45.883939 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 19 00:22:45.883947 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 19 00:22:45.883954 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 19 00:22:45.883962 kernel: iommu: Default domain type: Translated
Aug 19 00:22:45.883969 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 19 00:22:45.883978 kernel: efivars: Registered efivars operations
Aug 19 00:22:45.884436 kernel: vgaarb: loaded
Aug 19 00:22:45.884445 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 19 00:22:45.884452 kernel: VFS: Disk quotas dquot_6.6.0
Aug 19 00:22:45.884460 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 19 00:22:45.884467 kernel: pnp: PnP ACPI init
Aug 19 00:22:45.884570 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 19 00:22:45.884583 kernel: pnp: PnP ACPI: found 1 devices
Aug 19 00:22:45.884595 kernel: NET: Registered PF_INET protocol family
Aug 19 00:22:45.884603 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 19 00:22:45.884610 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 19 00:22:45.884618 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 19 00:22:45.884625 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 19 00:22:45.884633 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 19 00:22:45.884640 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 19 00:22:45.884647 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:22:45.884664 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:22:45.884673 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 19 00:22:45.884681 kernel: PCI: CLS 0 bytes, default 64
Aug 19 00:22:45.884692 kernel: kvm [1]: HYP mode not available
Aug 19 00:22:45.884699 kernel: Initialise system trusted keyrings
Aug 19 00:22:45.884706 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 19 00:22:45.884713 kernel: Key type asymmetric registered
Aug 19 00:22:45.884721 kernel: Asymmetric key parser 'x509' registered
Aug 19 00:22:45.884728 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 19 00:22:45.884735 kernel: io scheduler mq-deadline registered
Aug 19 00:22:45.884744 kernel: io scheduler kyber registered
Aug 19 00:22:45.884751 kernel: io scheduler bfq registered
Aug 19 00:22:45.884760 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 19 00:22:45.884767 kernel: ACPI: button: Power Button [PWRB]
Aug 19 00:22:45.884775 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 19 00:22:45.884848 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 19 00:22:45.884859 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 19 00:22:45.884866 kernel: thunder_xcv, ver 1.0
Aug 19 00:22:45.884873 kernel: thunder_bgx, ver 1.0
Aug 19 00:22:45.884882 kernel: nicpf, ver 1.0
Aug 19 00:22:45.884889 kernel: nicvf, ver 1.0
Aug 19 00:22:45.884962 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 19 00:22:45.885020 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-19T00:22:45 UTC (1755562965)
Aug 19 00:22:45.885030 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 19 00:22:45.885038 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 19 00:22:45.885045 kernel: watchdog: NMI not fully supported
Aug 19 00:22:45.885053 kernel: watchdog: Hard watchdog permanently disabled
Aug 19 00:22:45.885071 kernel: NET: Registered PF_INET6 protocol family
Aug 19 00:22:45.885082 kernel: Segment Routing with IPv6
Aug 19 00:22:45.885089 kernel: In-situ OAM (IOAM) with IPv6
Aug 19 00:22:45.885097 kernel: NET: Registered PF_PACKET protocol family
Aug 19 00:22:45.885104 kernel: Key type dns_resolver registered
Aug 19 00:22:45.885111 kernel: registered taskstats version 1
Aug 19 00:22:45.885118 kernel: Loading compiled-in X.509 certificates
Aug 19 00:22:45.885125 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: becc5a61d1c5dcbcd174f4649c64b863031dbaa8'
Aug 19 00:22:45.885133 kernel: Demotion targets for Node 0: null
Aug 19 00:22:45.885141 kernel: Key type .fscrypt registered
Aug 19 00:22:45.885148 kernel: Key type fscrypt-provisioning registered
Aug 19 00:22:45.885156 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 19 00:22:45.885163 kernel: ima: Allocated hash algorithm: sha1
Aug 19 00:22:45.885170 kernel: ima: No architecture policies found
Aug 19 00:22:45.885177 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 19 00:22:45.885184 kernel: clk: Disabling unused clocks
Aug 19 00:22:45.885192 kernel: PM: genpd: Disabling unused power domains
Aug 19 00:22:45.885199 kernel: Warning: unable to open an initial console.
Aug 19 00:22:45.885208 kernel: Freeing unused kernel memory: 38912K
Aug 19 00:22:45.885216 kernel: Run /init as init process
Aug 19 00:22:45.885223 kernel:   with arguments:
Aug 19 00:22:45.885231 kernel:     /init
Aug 19 00:22:45.885238 kernel:   with environment:
Aug 19 00:22:45.885245 kernel:     HOME=/
Aug 19 00:22:45.885252 kernel:     TERM=linux
Aug 19 00:22:45.885259 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 19 00:22:45.885267 systemd[1]: Successfully made /usr/ read-only.
Aug 19 00:22:45.885281 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:22:45.885289 systemd[1]: Detected virtualization kvm.
Aug 19 00:22:45.885297 systemd[1]: Detected architecture arm64.
Aug 19 00:22:45.885305 systemd[1]: Running in initrd.
Aug 19 00:22:45.885312 systemd[1]: No hostname configured, using default hostname.
Aug 19 00:22:45.885320 systemd[1]: Hostname set to .
Aug 19 00:22:45.885327 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:22:45.885337 systemd[1]: Queued start job for default target initrd.target.
Aug 19 00:22:45.885345 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:22:45.885352 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:22:45.885361 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 19 00:22:45.885368 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:22:45.885376 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 19 00:22:45.885402 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 00:22:45.885414 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 19 00:22:45.885422 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 19 00:22:45.885430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:22:45.885438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:22:45.885446 systemd[1]: Reached target paths.target - Path Units.
Aug 19 00:22:45.885470 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:22:45.885478 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:22:45.885489 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 00:22:45.885505 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 00:22:45.885514 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 00:22:45.885522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 19 00:22:45.885530 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 19 00:22:45.885538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:22:45.885546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:22:45.885553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:22:45.885561 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 00:22:45.885570 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 19 00:22:45.885579 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:22:45.885587 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 19 00:22:45.885595 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 19 00:22:45.885604 systemd[1]: Starting systemd-fsck-usr.service...
Aug 19 00:22:45.885611 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:22:45.885619 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:22:45.885627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:22:45.885635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:22:45.885645 systemd[1]: Finished systemd-fsck-usr.service.
Aug 19 00:22:45.885660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 00:22:45.885669 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 19 00:22:45.885699 systemd-journald[243]: Collecting audit messages is disabled.
Aug 19 00:22:45.885722 systemd-journald[243]: Journal started
Aug 19 00:22:45.885741 systemd-journald[243]: Runtime Journal (/run/log/journal/59c53e6fac5b49ee93549215f627abde) is 6M, max 48.5M, 42.4M free.
Aug 19 00:22:45.876906 systemd-modules-load[245]: Inserted module 'overlay'
Aug 19 00:22:45.889479 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:22:45.889880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:22:45.892811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:22:45.896197 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 19 00:22:45.898529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 19 00:22:45.898561 kernel: Bridge firewalling registered
Aug 19 00:22:45.899016 systemd-modules-load[245]: Inserted module 'br_netfilter'
Aug 19 00:22:45.899535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:22:45.901815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:22:45.914623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:22:45.917217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:22:45.922148 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 19 00:22:45.925301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:22:45.927755 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:22:45.929436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:22:45.933507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:22:45.936674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:22:45.938722 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 19 00:22:45.960817 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:22:45.976694 systemd-resolved[286]: Positive Trust Anchors:
Aug 19 00:22:45.976713 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 00:22:45.976744 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 00:22:45.981795 systemd-resolved[286]: Defaulting to hostname 'linux'.
Aug 19 00:22:45.982792 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 00:22:45.984530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:22:46.045411 kernel: SCSI subsystem initialized
Aug 19 00:22:46.050400 kernel: Loading iSCSI transport class v2.0-870.
Aug 19 00:22:46.058430 kernel: iscsi: registered transport (tcp)
Aug 19 00:22:46.081404 kernel: iscsi: registered transport (qla4xxx)
Aug 19 00:22:46.081460 kernel: QLogic iSCSI HBA Driver
Aug 19 00:22:46.102351 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:22:46.129431 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:22:46.131248 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:22:46.205076 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:22:46.207595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 19 00:22:46.276413 kernel: raid6: neonx8 gen() 15725 MB/s
Aug 19 00:22:46.293398 kernel: raid6: neonx4 gen() 15789 MB/s
Aug 19 00:22:46.310396 kernel: raid6: neonx2 gen() 13148 MB/s
Aug 19 00:22:46.327393 kernel: raid6: neonx1 gen() 10415 MB/s
Aug 19 00:22:46.344396 kernel: raid6: int64x8 gen() 6897 MB/s
Aug 19 00:22:46.361392 kernel: raid6: int64x4 gen() 7346 MB/s
Aug 19 00:22:46.378392 kernel: raid6: int64x2 gen() 6104 MB/s
Aug 19 00:22:46.395394 kernel: raid6: int64x1 gen() 5046 MB/s
Aug 19 00:22:46.395417 kernel: raid6: using algorithm neonx4 gen() 15789 MB/s
Aug 19 00:22:46.412400 kernel: raid6: .... xor() 12359 MB/s, rmw enabled
Aug 19 00:22:46.412429 kernel: raid6: using neon recovery algorithm
Aug 19 00:22:46.419485 kernel: xor: measuring software checksum speed
Aug 19 00:22:46.419515 kernel: 8regs : 21499 MB/sec
Aug 19 00:22:46.420633 kernel: 32regs : 21670 MB/sec
Aug 19 00:22:46.420659 kernel: arm64_neon : 28070 MB/sec
Aug 19 00:22:46.420670 kernel: xor: using function: arm64_neon (28070 MB/sec)
Aug 19 00:22:46.488408 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 19 00:22:46.497995 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 19 00:22:46.500877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:22:46.544712 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Aug 19 00:22:46.549232 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:22:46.551305 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 19 00:22:46.585509 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
Aug 19 00:22:46.612029 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 19 00:22:46.614416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 19 00:22:46.672665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 00:22:46.676906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 19 00:22:46.724750 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 19 00:22:46.725211 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 19 00:22:46.729424 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 19 00:22:46.729460 kernel: GPT:9289727 != 19775487
Aug 19 00:22:46.729471 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 19 00:22:46.729481 kernel: GPT:9289727 != 19775487
Aug 19 00:22:46.730844 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 19 00:22:46.730880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 00:22:46.736003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 19 00:22:46.737463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:22:46.739805 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:22:46.745154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:22:46.764591 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 19 00:22:46.766998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:22:46.777619 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 19 00:22:46.780031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 19 00:22:46.799108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 19 00:22:46.805433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 19 00:22:46.806377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 19 00:22:46.808510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 19 00:22:46.810790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 00:22:46.812359 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 19 00:22:46.814938 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 19 00:22:46.816805 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 19 00:22:46.831414 disk-uuid[586]: Primary Header is updated.
Aug 19 00:22:46.831414 disk-uuid[586]: Secondary Entries is updated.
Aug 19 00:22:46.831414 disk-uuid[586]: Secondary Header is updated.
Aug 19 00:22:46.835405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 00:22:46.836211 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 19 00:22:47.848123 disk-uuid[590]: The operation has completed successfully.
Aug 19 00:22:47.849266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 19 00:22:47.873793 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 19 00:22:47.873898 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 19 00:22:47.900083 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 19 00:22:47.927564 sh[609]: Success
Aug 19 00:22:47.940401 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 19 00:22:47.940440 kernel: device-mapper: uevent: version 1.0.3
Aug 19 00:22:47.940450 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 19 00:22:47.950410 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Aug 19 00:22:47.977883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 19 00:22:47.980562 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 19 00:22:48.006015 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 19 00:22:48.010925 kernel: BTRFS: device fsid 1e492084-d287-4a43-8dc6-ad086a072625 devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (621)
Aug 19 00:22:48.010965 kernel: BTRFS info (device dm-0): first mount of filesystem 1e492084-d287-4a43-8dc6-ad086a072625
Aug 19 00:22:48.010975 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:22:48.012412 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 19 00:22:48.015871 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 19 00:22:48.016992 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 19 00:22:48.017990 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 19 00:22:48.018842 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 19 00:22:48.022090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 19 00:22:48.045433 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (652)
Aug 19 00:22:48.047900 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:22:48.047953 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:22:48.047966 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 00:22:48.054416 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:22:48.056483 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 19 00:22:48.059140 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 19 00:22:48.142176 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 19 00:22:48.144751 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 19 00:22:48.194501 systemd-networkd[795]: lo: Link UP
Aug 19 00:22:48.194511 systemd-networkd[795]: lo: Gained carrier
Aug 19 00:22:48.195925 systemd-networkd[795]: Enumeration completed
Aug 19 00:22:48.196053 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 19 00:22:48.196721 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:22:48.196725 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 19 00:22:48.196966 systemd[1]: Reached target network.target - Network.
Aug 19 00:22:48.197649 systemd-networkd[795]: eth0: Link UP
Aug 19 00:22:48.197764 systemd-networkd[795]: eth0: Gained carrier
Aug 19 00:22:48.197772 systemd-networkd[795]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:22:48.214454 systemd-networkd[795]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 19 00:22:48.225334 ignition[693]: Ignition 2.21.0
Aug 19 00:22:48.225350 ignition[693]: Stage: fetch-offline
Aug 19 00:22:48.225399 ignition[693]: no configs at "/usr/lib/ignition/base.d"
Aug 19 00:22:48.225410 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:22:48.225597 ignition[693]: parsed url from cmdline: ""
Aug 19 00:22:48.225601 ignition[693]: no config URL provided
Aug 19 00:22:48.225605 ignition[693]: reading system config file "/usr/lib/ignition/user.ign"
Aug 19 00:22:48.225612 ignition[693]: no config at "/usr/lib/ignition/user.ign"
Aug 19 00:22:48.225632 ignition[693]: op(1): [started] loading QEMU firmware config module
Aug 19 00:22:48.225636 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 19 00:22:48.233161 ignition[693]: op(1): [finished] loading QEMU firmware config module
Aug 19 00:22:48.270122 ignition[693]: parsing config with SHA512: 8476829b706451300aabdb52dcb2351d5eb4c3548a1483a9ee8aef47ddb60329376f5499b46bfb5f480c014796ddae5b5282068f88efedbaedfbe7daabf0c838
Aug 19 00:22:48.275873 unknown[693]: fetched base config from "system"
Aug 19 00:22:48.275886 unknown[693]: fetched user config from "qemu"
Aug 19 00:22:48.276364 ignition[693]: fetch-offline: fetch-offline passed
Aug 19 00:22:48.276450 ignition[693]: Ignition finished successfully
Aug 19 00:22:48.278222 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 19 00:22:48.280474 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 19 00:22:48.282638 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 19 00:22:48.323073 ignition[808]: Ignition 2.21.0
Aug 19 00:22:48.323091 ignition[808]: Stage: kargs
Aug 19 00:22:48.323237 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Aug 19 00:22:48.323248 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:22:48.325734 ignition[808]: kargs: kargs passed
Aug 19 00:22:48.325816 ignition[808]: Ignition finished successfully
Aug 19 00:22:48.328305 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 19 00:22:48.330274 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 19 00:22:48.365246 ignition[817]: Ignition 2.21.0
Aug 19 00:22:48.365263 ignition[817]: Stage: disks
Aug 19 00:22:48.365439 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Aug 19 00:22:48.365450 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:22:48.369241 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 19 00:22:48.367021 ignition[817]: disks: disks passed
Aug 19 00:22:48.370520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 19 00:22:48.367105 ignition[817]: Ignition finished successfully
Aug 19 00:22:48.371770 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 19 00:22:48.373344 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 19 00:22:48.374954 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 19 00:22:48.376402 systemd[1]: Reached target basic.target - Basic System.
Aug 19 00:22:48.379071 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 19 00:22:48.415324 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 19 00:22:48.471120 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 19 00:22:48.473854 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 19 00:22:48.540401 kernel: EXT4-fs (vda9): mounted filesystem 593a9299-85f8-44ab-a00f-cf95b7233713 r/w with ordered data mode. Quota mode: none.
Aug 19 00:22:48.540990 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 19 00:22:48.542233 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 19 00:22:48.545591 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 19 00:22:48.547871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 19 00:22:48.549482 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 19 00:22:48.551182 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 19 00:22:48.552772 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 19 00:22:48.566978 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 19 00:22:48.569394 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 19 00:22:48.571930 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836)
Aug 19 00:22:48.571952 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:22:48.573661 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:22:48.573682 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 00:22:48.577399 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 19 00:22:48.652351 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Aug 19 00:22:48.658494 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Aug 19 00:22:48.664358 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Aug 19 00:22:48.670817 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 19 00:22:48.757899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 19 00:22:48.759983 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 19 00:22:48.761482 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 19 00:22:48.788400 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:22:48.810571 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 19 00:22:48.819019 ignition[950]: INFO : Ignition 2.21.0
Aug 19 00:22:48.819019 ignition[950]: INFO : Stage: mount
Aug 19 00:22:48.820555 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 19 00:22:48.820555 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:22:48.820555 ignition[950]: INFO : mount: mount passed
Aug 19 00:22:48.820555 ignition[950]: INFO : Ignition finished successfully
Aug 19 00:22:48.822912 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 19 00:22:48.826348 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 19 00:22:49.009772 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 19 00:22:49.011428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 19 00:22:49.032417 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (963)
Aug 19 00:22:49.035867 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33
Aug 19 00:22:49.035897 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 19 00:22:49.035908 kernel: BTRFS info (device vda6): using free-space-tree
Aug 19 00:22:49.039242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 19 00:22:49.070898 ignition[980]: INFO : Ignition 2.21.0
Aug 19 00:22:49.070898 ignition[980]: INFO : Stage: files
Aug 19 00:22:49.073100 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 19 00:22:49.073100 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:22:49.075026 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Aug 19 00:22:49.075026 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 19 00:22:49.075026 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 19 00:22:49.079004 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 19 00:22:49.079004 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 19 00:22:49.079004 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 19 00:22:49.078044 unknown[980]: wrote ssh authorized keys file for user: core
Aug 19 00:22:49.083396 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 19 00:22:49.083396 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Aug 19 00:22:49.199698 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 19 00:22:49.391535 systemd-networkd[795]: eth0: Gained IPv6LL
Aug 19 00:22:49.760783 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Aug 19 00:22:49.760783 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 19 00:22:49.764343 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Aug 19 00:22:49.956810 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 19 00:22:50.065040 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 19 00:22:50.065040 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 19 00:22:50.068231 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 19 00:22:50.083033 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 19 00:22:50.083033 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 19 00:22:50.083033 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Aug 19 00:22:50.347913 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 19 00:22:50.740108 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Aug 19 00:22:50.740108 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 19 00:22:50.745927 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Aug 19 00:22:50.763460 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 19 00:22:50.766950 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 19 00:22:50.768696 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 19 00:22:50.768696 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Aug 19 00:22:50.768696 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Aug 19 00:22:50.768696 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 19 00:22:50.768696 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 19 00:22:50.768696 ignition[980]: INFO : files: files passed
Aug 19 00:22:50.768696 ignition[980]: INFO : Ignition finished successfully
Aug 19 00:22:50.771288 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 19 00:22:50.774522 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 19 00:22:50.775935 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 19 00:22:50.791639 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 19 00:22:50.791753 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 19 00:22:50.794757 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 19 00:22:50.796549 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 19 00:22:50.796549 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 19 00:22:50.801496 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 19 00:22:50.798105 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 19 00:22:50.799655 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 19 00:22:50.803539 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 19 00:22:50.884746 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 19 00:22:50.885473 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 19 00:22:50.886971 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 19 00:22:50.888819 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 19 00:22:50.890643 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 19 00:22:50.891517 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 19 00:22:50.919242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 19 00:22:50.922123 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 19 00:22:50.942485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:22:50.943538 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 00:22:50.945311 systemd[1]: Stopped target timers.target - Timer Units.
Aug 19 00:22:50.947170 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 19 00:22:50.947300 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 19 00:22:50.949298 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 19 00:22:50.957551 systemd[1]: Stopped target basic.target - Basic System.
Aug 19 00:22:50.959261 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 19 00:22:50.960976 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 19 00:22:50.963063 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 19 00:22:50.964782 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 19 00:22:50.966329 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 19 00:22:50.968128 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 19 00:22:50.969860 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 19 00:22:50.971523 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 19 00:22:50.973144 systemd[1]: Stopped target swap.target - Swaps.
Aug 19 00:22:50.974542 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 19 00:22:50.974694 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 19 00:22:50.976674 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:22:50.978197 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:22:50.979715 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 19 00:22:50.979791 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:22:50.981450 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 19 00:22:50.981579 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 19 00:22:50.984155 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 19 00:22:50.984362 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 19 00:22:50.986071 systemd[1]: Stopped target paths.target - Path Units.
Aug 19 00:22:50.987502 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 19 00:22:50.991432 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:22:50.992466 systemd[1]: Stopped target slices.target - Slice Units.
Aug 19 00:22:50.994447 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 19 00:22:50.995887 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 19 00:22:50.995987 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 00:22:50.997423 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 19 00:22:50.997507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 00:22:50.998672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 19 00:22:50.998796 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 19 00:22:51.001237 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 19 00:22:51.001485 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 19 00:22:51.003755 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 19 00:22:51.005140 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 19 00:22:51.005249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:22:51.007948 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 19 00:22:51.011419 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 19 00:22:51.011537 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 00:22:51.013227 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 19 00:22:51.013329 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 19 00:22:51.019271 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 19 00:22:51.024911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 19 00:22:51.032961 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 19 00:22:51.041714 ignition[1035]: INFO : Ignition 2.21.0
Aug 19 00:22:51.041714 ignition[1035]: INFO : Stage: umount
Aug 19 00:22:51.044808 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 19 00:22:51.044808 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 19 00:22:51.044808 ignition[1035]: INFO : umount: umount passed
Aug 19 00:22:51.044808 ignition[1035]: INFO : Ignition finished successfully
Aug 19 00:22:51.046947 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 19 00:22:51.047049 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 19 00:22:51.050433 systemd[1]: Stopped target network.target - Network.
Aug 19 00:22:51.052400 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 19 00:22:51.052482 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 19 00:22:51.054207 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 19 00:22:51.054254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 19 00:22:51.055683 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 19 00:22:51.055734 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 19 00:22:51.057352 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 19 00:22:51.057456 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 19 00:22:51.059259 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 19 00:22:51.060949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 19 00:22:51.073635 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 19 00:22:51.073776 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 19 00:22:51.078664 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 19 00:22:51.079189 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 19 00:22:51.079267 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:22:51.082339 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 19 00:22:51.082676 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 19 00:22:51.082782 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 19 00:22:51.085621 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 19 00:22:51.086095 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 19 00:22:51.088135 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 19 00:22:51.088175 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:22:51.091516 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 19 00:22:51.092935 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 19 00:22:51.093007 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 19 00:22:51.096230 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 19 00:22:51.096273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:22:51.100102 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 19 00:22:51.100150 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:22:51.102545 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:22:51.105518 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 19 00:22:51.106567 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 19 00:22:51.106662 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 19 00:22:51.109258 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 19 00:22:51.109337 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 19 00:22:51.124130 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 19 00:22:51.127610 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:22:51.129095 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 19 00:22:51.129150 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:22:51.130970 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 19 00:22:51.131000 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:22:51.132525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 19 00:22:51.132579 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 19 00:22:51.134871 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 19 00:22:51.134928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:22:51.137049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 19 00:22:51.137100 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:22:51.140082 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 19 00:22:51.140917 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 19 00:22:51.140971 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:22:51.143995 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 19 00:22:51.144038 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:22:51.146964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 19 00:22:51.147013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:22:51.150273 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 19 00:22:51.154574 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 19 00:22:51.159699 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 19 00:22:51.159790 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 19 00:22:51.163948 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 19 00:22:51.165857 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 19 00:22:51.188092 systemd[1]: Switching root.
Aug 19 00:22:51.222906 systemd-journald[243]: Journal stopped
Aug 19 00:22:52.128221 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Aug 19 00:22:52.128273 kernel: SELinux: policy capability network_peer_controls=1
Aug 19 00:22:52.128285 kernel: SELinux: policy capability open_perms=1
Aug 19 00:22:52.128296 kernel: SELinux: policy capability extended_socket_class=1
Aug 19 00:22:52.128305 kernel: SELinux: policy capability always_check_network=0
Aug 19 00:22:52.128321 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 19 00:22:52.128335 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 19 00:22:52.128344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 19 00:22:52.128354 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 19 00:22:52.128365 kernel: SELinux: policy capability userspace_initial_context=0
Aug 19 00:22:52.128394 kernel: audit: type=1403 audit(1755562971.470:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 19 00:22:52.128426 systemd[1]: Successfully loaded SELinux policy in 59.067ms.
Aug 19 00:22:52.128446 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.363ms.
Aug 19 00:22:52.128462 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:22:52.128474 systemd[1]: Detected virtualization kvm.
Aug 19 00:22:52.128484 systemd[1]: Detected architecture arm64.
Aug 19 00:22:52.128497 systemd[1]: Detected first boot.
Aug 19 00:22:52.128507 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:22:52.128522 zram_generator::config[1079]: No configuration found.
Aug 19 00:22:52.128534 kernel: NET: Registered PF_VSOCK protocol family
Aug 19 00:22:52.128545 systemd[1]: Populated /etc with preset unit settings.
Aug 19 00:22:52.128557 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 19 00:22:52.128568 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 19 00:22:52.128580 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 19 00:22:52.128590 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 19 00:22:52.128603 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 19 00:22:52.128618 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 19 00:22:52.128637 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 19 00:22:52.128650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 19 00:22:52.128666 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 19 00:22:52.128680 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 19 00:22:52.128690 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 19 00:22:52.128701 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 19 00:22:52.128715 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:22:52.128727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:22:52.128738 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 19 00:22:52.128749 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 19 00:22:52.128761 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 19 00:22:52.128772 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:22:52.128783 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 19 00:22:52.128794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:22:52.128807 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:22:52.128818 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 19 00:22:52.128828 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 19 00:22:52.128839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 19 00:22:52.128849 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 19 00:22:52.128861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 00:22:52.128872 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 19 00:22:52.128883 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:22:52.128897 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:22:52.128910 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 19 00:22:52.128921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 19 00:22:52.128932 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 19 00:22:52.128943 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:22:52.128954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:22:52.128968 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:22:52.128980 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 19 00:22:52.128991 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 19 00:22:52.129015 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 19 00:22:52.129028 systemd[1]: Mounting media.mount - External Media Directory...
Aug 19 00:22:52.129039 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 19 00:22:52.129050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 19 00:22:52.129061 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 19 00:22:52.129071 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 19 00:22:52.129082 systemd[1]: Reached target machines.target - Containers.
Aug 19 00:22:52.129093 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 19 00:22:52.129104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:22:52.129116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:22:52.129127 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 19 00:22:52.129138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:22:52.129149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 19 00:22:52.129163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:22:52.129175 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 19 00:22:52.129186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:22:52.129199 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 19 00:22:52.129223 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 19 00:22:52.129235 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 19 00:22:52.129245 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 19 00:22:52.129256 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 19 00:22:52.129267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:22:52.129278 kernel: fuse: init (API version 7.41)
Aug 19 00:22:52.129288 kernel: loop: module loaded
Aug 19 00:22:52.129298 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:22:52.129308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:22:52.129319 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:22:52.129332 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 19 00:22:52.129342 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 19 00:22:52.129353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 19 00:22:52.129363 kernel: ACPI: bus type drm_connector registered
Aug 19 00:22:52.129373 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 19 00:22:52.129398 systemd[1]: Stopped verity-setup.service.
Aug 19 00:22:52.129412 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 19 00:22:52.129422 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 19 00:22:52.129433 systemd[1]: Mounted media.mount - External Media Directory.
Aug 19 00:22:52.129445 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 19 00:22:52.129455 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 19 00:22:52.129466 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 19 00:22:52.129477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:22:52.129513 systemd-journald[1147]: Collecting audit messages is disabled.
Aug 19 00:22:52.129538 systemd-journald[1147]: Journal started
Aug 19 00:22:52.129559 systemd-journald[1147]: Runtime Journal (/run/log/journal/59c53e6fac5b49ee93549215f627abde) is 6M, max 48.5M, 42.4M free.
Aug 19 00:22:51.899648 systemd[1]: Queued start job for default target multi-user.target.
Aug 19 00:22:51.918353 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 19 00:22:51.918739 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 19 00:22:52.131539 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:22:52.132337 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 19 00:22:52.133568 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 19 00:22:52.133740 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 19 00:22:52.134983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:22:52.135153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:22:52.136275 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 19 00:22:52.136445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 19 00:22:52.137511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:22:52.137684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:22:52.138784 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 19 00:22:52.138930 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 19 00:22:52.140253 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:22:52.140438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:22:52.141565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:22:52.142675 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:22:52.144823 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 19 00:22:52.146309 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 19 00:22:52.159374 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:22:52.161471 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 19 00:22:52.163441 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 19 00:22:52.164511 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 19 00:22:52.164554 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 19 00:22:52.166465 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 19 00:22:52.174278 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 19 00:22:52.175279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:22:52.176426 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 19 00:22:52.178288 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 19 00:22:52.179395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 19 00:22:52.180660 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 19 00:22:52.181564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 19 00:22:52.185528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:22:52.188641 systemd-journald[1147]: Time spent on flushing to /var/log/journal/59c53e6fac5b49ee93549215f627abde is 19.581ms for 880 entries.
Aug 19 00:22:52.188641 systemd-journald[1147]: System Journal (/var/log/journal/59c53e6fac5b49ee93549215f627abde) is 8M, max 195.6M, 187.6M free.
Aug 19 00:22:52.213722 systemd-journald[1147]: Received client request to flush runtime journal.
Aug 19 00:22:52.213774 kernel: loop0: detected capacity change from 0 to 100608
Aug 19 00:22:52.187525 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 19 00:22:52.190672 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 19 00:22:52.193586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 00:22:52.195303 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 19 00:22:52.196706 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 19 00:22:52.209016 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 19 00:22:52.210148 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 19 00:22:52.212596 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 19 00:22:52.219048 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 19 00:22:52.229177 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 19 00:22:52.227652 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:22:52.232611 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 19 00:22:52.236175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:22:52.258432 kernel: loop1: detected capacity change from 0 to 119320
Aug 19 00:22:52.258554 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 19 00:22:52.277218 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Aug 19 00:22:52.277239 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Aug 19 00:22:52.281291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:22:52.301425 kernel: loop2: detected capacity change from 0 to 207008
Aug 19 00:22:52.324417 kernel: loop3: detected capacity change from 0 to 100608
Aug 19 00:22:52.332417 kernel: loop4: detected capacity change from 0 to 119320
Aug 19 00:22:52.339416 kernel: loop5: detected capacity change from 0 to 207008
Aug 19 00:22:52.344977 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 19 00:22:52.345407 (sd-merge)[1218]: Merged extensions into '/usr'.
Aug 19 00:22:52.348762 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 19 00:22:52.348784 systemd[1]: Reloading...
Aug 19 00:22:52.427613 zram_generator::config[1242]: No configuration found.
Aug 19 00:22:52.481444 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 19 00:22:52.558466 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 19 00:22:52.558740 systemd[1]: Reloading finished in 209 ms.
Aug 19 00:22:52.589076 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 19 00:22:52.592110 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 19 00:22:52.610692 systemd[1]: Starting ensure-sysext.service...
Aug 19 00:22:52.612394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:22:52.627417 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 19 00:22:52.627564 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 19 00:22:52.627889 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 19 00:22:52.627959 systemd[1]: Reload requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)...
Aug 19 00:22:52.627974 systemd[1]: Reloading...
Aug 19 00:22:52.628093 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 19 00:22:52.628763 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 19 00:22:52.629009 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Aug 19 00:22:52.629057 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Aug 19 00:22:52.631933 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Aug 19 00:22:52.631949 systemd-tmpfiles[1279]: Skipping /boot
Aug 19 00:22:52.638216 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Aug 19 00:22:52.638234 systemd-tmpfiles[1279]: Skipping /boot
Aug 19 00:22:52.687427 zram_generator::config[1303]: No configuration found.
Aug 19 00:22:52.822205 systemd[1]: Reloading finished in 193 ms.
Aug 19 00:22:52.832052 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 19 00:22:52.837681 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:22:52.844560 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:22:52.846937 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 19 00:22:52.849411 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 19 00:22:52.852235 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:22:52.857545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:22:52.860317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 19 00:22:52.871361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:22:52.880105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:22:52.882741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:22:52.885876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:22:52.887319 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:22:52.887455 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:22:52.889401 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 19 00:22:52.891777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:22:52.892072 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:22:52.893721 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:22:52.893906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:22:52.896068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:22:52.896241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:22:52.907310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:22:52.911110 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:22:52.913359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:22:52.919760 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Aug 19 00:22:52.925824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:22:52.927483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:22:52.927604 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:22:52.928770 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 19 00:22:52.933013 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 19 00:22:52.937413 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 19 00:22:52.939555 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 19 00:22:52.941223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:22:52.941374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:22:52.942916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:22:52.943058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:22:52.944733 augenrules[1379]: No rules
Aug 19 00:22:52.944815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:22:52.946154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:22:52.948109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:22:52.949717 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:22:52.950158 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:22:52.952949 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 19 00:22:52.968993 systemd[1]: Finished ensure-sysext.service.
Aug 19 00:22:52.971907 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:22:52.972849 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:22:52.974139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:22:52.977617 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 19 00:22:52.981173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:22:52.987594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:22:52.988483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:22:52.988534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:22:52.992562 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 19 00:22:52.996002 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 19 00:22:52.996849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 19 00:22:52.997516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:22:52.999406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:22:53.000711 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 19 00:22:53.001934 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 19 00:22:53.003408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 19 00:22:53.004720 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:22:53.004889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:22:53.008217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:22:53.009706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:22:53.012607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 19 00:22:53.012684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 19 00:22:53.017247 augenrules[1417]: /sbin/augenrules: No change
Aug 19 00:22:53.039523 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 19 00:22:53.042646 augenrules[1451]: No rules
Aug 19 00:22:53.043683 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:22:53.043874 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:22:53.103852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 19 00:22:53.107026 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 19 00:22:53.148902 systemd-networkd[1424]: lo: Link UP
Aug 19 00:22:53.148914 systemd-networkd[1424]: lo: Gained carrier
Aug 19 00:22:53.149047 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 19 00:22:53.150333 systemd[1]: Reached target time-set.target - System Time Set.
Aug 19 00:22:53.156303 systemd-networkd[1424]: Enumeration completed
Aug 19 00:22:53.157527 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 19 00:22:53.160050 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 19 00:22:53.164301 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 19 00:22:53.175913 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 19 00:22:53.182278 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:22:53.182289 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 19 00:22:53.191991 systemd-resolved[1346]: Positive Trust Anchors:
Aug 19 00:22:53.192344 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 00:22:53.192454 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 00:22:53.196154 systemd-networkd[1424]: eth0: Link UP
Aug 19 00:22:53.196270 systemd-networkd[1424]: eth0: Gained carrier
Aug 19 00:22:53.196291 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:22:53.230539 systemd-resolved[1346]: Defaulting to hostname 'linux'.
Aug 19 00:22:53.244502 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 19 00:22:53.253305 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection.
Aug 19 00:22:53.254238 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 19 00:22:53.254290 systemd-timesyncd[1427]: Initial clock synchronization to Tue 2025-08-19 00:22:53.593063 UTC.
Aug 19 00:22:53.254431 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 19 00:22:53.257851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 00:22:53.261063 systemd[1]: Reached target network.target - Network.
Aug 19 00:22:53.261936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:22:53.263535 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 19 00:22:53.264833 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 00:22:53.266023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 00:22:53.267397 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 00:22:53.268758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 00:22:53.269888 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 00:22:53.271369 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 00:22:53.271418 systemd[1]: Reached target paths.target - Path Units. Aug 19 00:22:53.272297 systemd[1]: Reached target timers.target - Timer Units. Aug 19 00:22:53.274013 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 00:22:53.276540 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 00:22:53.280447 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 00:22:53.281791 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 00:22:53.283004 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 00:22:53.286770 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 00:22:53.288426 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 00:22:53.290010 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 00:22:53.294886 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 00:22:53.296045 systemd[1]: Reached target basic.target - Basic System. 
Aug 19 00:22:53.297230 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:22:53.297338 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 00:22:53.298578 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 00:22:53.300766 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 00:22:53.302674 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 00:22:53.304609 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 00:22:53.306338 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 00:22:53.307427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 00:22:53.308539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 00:22:53.312569 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 00:22:53.315489 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 00:22:53.318270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 00:22:53.322633 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 00:22:53.324641 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 00:22:53.325194 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 00:22:53.325858 jq[1493]: false Aug 19 00:22:53.327274 systemd[1]: Starting update-engine.service - Update Engine... 
Aug 19 00:22:53.328178 extend-filesystems[1494]: Found /dev/vda6 Aug 19 00:22:53.329303 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 00:22:53.336989 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 00:22:53.338942 extend-filesystems[1494]: Found /dev/vda9 Aug 19 00:22:53.340105 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 00:22:53.340311 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 00:22:53.342000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 00:22:53.343474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 00:22:53.353327 jq[1505]: true Aug 19 00:22:53.355967 extend-filesystems[1494]: Checking size of /dev/vda9 Aug 19 00:22:53.367674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:22:53.373120 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 00:22:53.373323 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 00:22:53.377710 jq[1522]: true Aug 19 00:22:53.400831 tar[1511]: linux-arm64/LICENSE Aug 19 00:22:53.403390 tar[1511]: linux-arm64/helm Aug 19 00:22:53.418107 (ntainerd)[1531]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 00:22:53.429344 extend-filesystems[1494]: Resized partition /dev/vda9 Aug 19 00:22:53.435725 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 00:22:53.442816 dbus-daemon[1491]: [system] SELinux support is enabled Aug 19 00:22:53.443063 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 19 00:22:53.446902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 00:22:53.446943 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 00:22:53.448493 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 00:22:53.451251 update_engine[1503]: I20250819 00:22:53.442520 1503 main.cc:92] Flatcar Update Engine starting Aug 19 00:22:53.448519 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 00:22:53.459721 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 00:22:53.458902 systemd[1]: Started update-engine.service - Update Engine. Aug 19 00:22:53.461612 update_engine[1503]: I20250819 00:22:53.461557 1503 update_check_scheduler.cc:74] Next update check in 7m19s Aug 19 00:22:53.463543 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 00:22:53.485795 systemd-logind[1502]: Watching system buttons on /dev/input/event0 (Power Button) Aug 19 00:22:53.488054 systemd-logind[1502]: New seat seat0. Aug 19 00:22:53.489971 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 00:22:53.525746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 19 00:22:53.561947 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 00:22:53.593220 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 00:22:53.605309 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 00:22:53.605309 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 00:22:53.605309 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 00:22:53.608705 extend-filesystems[1494]: Resized filesystem in /dev/vda9 Aug 19 00:22:53.610944 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 00:22:53.612317 bash[1553]: Updated "/home/core/.ssh/authorized_keys" Aug 19 00:22:53.612886 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 00:22:53.616952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 00:22:53.620566 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
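[Editor's note] The resize2fs entries above can be sanity-checked with a little arithmetic. This sketch converts the before/after block counts from the log (553472 and 1864699; "(4k)" in the log means 4096-byte blocks) to MiB. It is pure arithmetic taken from the log lines; nothing here touches a real disk.

```shell
# Convert the ext4 block counts logged by resize2fs into MiB.
# Block counts are from the log; "(4k)" means a 4096-byte block size.
old_blocks=553472
new_blocks=1864699
block_size=4096
old_mib=$(( old_blocks * block_size / 1048576 ))
new_mib=$(( new_blocks * block_size / 1048576 ))
echo "/dev/vda9: ${old_mib} MiB -> ${new_mib} MiB"
```

This prints `/dev/vda9: 2162 MiB -> 7283 MiB`, i.e. the root partition grew from roughly 2.1 GiB to 7.3 GiB while mounted, which is why the log shows an on-line resize.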
Aug 19 00:22:53.732841 containerd[1531]: time="2025-08-19T00:22:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 00:22:53.733761 containerd[1531]: time="2025-08-19T00:22:53.733721720Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 00:22:53.746649 containerd[1531]: time="2025-08-19T00:22:53.746523520Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="49.12µs" Aug 19 00:22:53.746696 containerd[1531]: time="2025-08-19T00:22:53.746646200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 00:22:53.746814 containerd[1531]: time="2025-08-19T00:22:53.746792400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 00:22:53.747553 containerd[1531]: time="2025-08-19T00:22:53.747485440Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 00:22:53.747599 containerd[1531]: time="2025-08-19T00:22:53.747559480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 00:22:53.747698 containerd[1531]: time="2025-08-19T00:22:53.747676920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:22:53.748083 containerd[1531]: time="2025-08-19T00:22:53.748012680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 00:22:53.748177 containerd[1531]: time="2025-08-19T00:22:53.748117440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 
00:22:53.748564 containerd[1531]: time="2025-08-19T00:22:53.748537040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 00:22:53.748593 containerd[1531]: time="2025-08-19T00:22:53.748563040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:22:53.748593 containerd[1531]: time="2025-08-19T00:22:53.748575320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 00:22:53.748593 containerd[1531]: time="2025-08-19T00:22:53.748584240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 00:22:53.750284 containerd[1531]: time="2025-08-19T00:22:53.748666160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 00:22:53.750284 containerd[1531]: time="2025-08-19T00:22:53.748884560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:22:53.750284 containerd[1531]: time="2025-08-19T00:22:53.748924160Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 00:22:53.750284 containerd[1531]: time="2025-08-19T00:22:53.748937040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 00:22:53.750284 containerd[1531]: time="2025-08-19T00:22:53.748993360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 00:22:53.750944 
containerd[1531]: time="2025-08-19T00:22:53.750892800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 00:22:53.751036 containerd[1531]: time="2025-08-19T00:22:53.751008560Z" level=info msg="metadata content store policy set" policy=shared Aug 19 00:22:53.758078 containerd[1531]: time="2025-08-19T00:22:53.758020480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 00:22:53.758171 containerd[1531]: time="2025-08-19T00:22:53.758114720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 00:22:53.758171 containerd[1531]: time="2025-08-19T00:22:53.758133320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 00:22:53.758171 containerd[1531]: time="2025-08-19T00:22:53.758145760Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 00:22:53.758171 containerd[1531]: time="2025-08-19T00:22:53.758159280Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 00:22:53.758171 containerd[1531]: time="2025-08-19T00:22:53.758171000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 00:22:53.758283 containerd[1531]: time="2025-08-19T00:22:53.758184280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 00:22:53.758283 containerd[1531]: time="2025-08-19T00:22:53.758198120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 00:22:53.758283 containerd[1531]: time="2025-08-19T00:22:53.758210960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 00:22:53.758283 containerd[1531]: 
time="2025-08-19T00:22:53.758221120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 00:22:53.758283 containerd[1531]: time="2025-08-19T00:22:53.758231320Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 00:22:53.758283 containerd[1531]: time="2025-08-19T00:22:53.758245240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 00:22:53.758565 containerd[1531]: time="2025-08-19T00:22:53.758539200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 00:22:53.758681 containerd[1531]: time="2025-08-19T00:22:53.758660160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 00:22:53.758749 containerd[1531]: time="2025-08-19T00:22:53.758734080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 00:22:53.758810 containerd[1531]: time="2025-08-19T00:22:53.758795200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 00:22:53.758837 containerd[1531]: time="2025-08-19T00:22:53.758823240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 00:22:53.758858 containerd[1531]: time="2025-08-19T00:22:53.758843760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 00:22:53.758890 containerd[1531]: time="2025-08-19T00:22:53.758859760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 00:22:53.758890 containerd[1531]: time="2025-08-19T00:22:53.758870880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 00:22:53.758890 containerd[1531]: time="2025-08-19T00:22:53.758885600Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 00:22:53.758947 containerd[1531]: time="2025-08-19T00:22:53.758897320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 00:22:53.758947 containerd[1531]: time="2025-08-19T00:22:53.758908320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 00:22:53.759429 containerd[1531]: time="2025-08-19T00:22:53.759406960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 00:22:53.759457 containerd[1531]: time="2025-08-19T00:22:53.759435880Z" level=info msg="Start snapshots syncer" Aug 19 00:22:53.759491 containerd[1531]: time="2025-08-19T00:22:53.759477760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 00:22:53.760092 containerd[1531]: time="2025-08-19T00:22:53.759980600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 00:22:53.760207 containerd[1531]: time="2025-08-19T00:22:53.760110920Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 00:22:53.760207 containerd[1531]: time="2025-08-19T00:22:53.760197520Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 00:22:53.760554 containerd[1531]: time="2025-08-19T00:22:53.760480680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 00:22:53.760554 containerd[1531]: time="2025-08-19T00:22:53.760515120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 00:22:53.760554 containerd[1531]: time="2025-08-19T00:22:53.760528520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 00:22:53.760554 containerd[1531]: time="2025-08-19T00:22:53.760539840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 00:22:53.760554 containerd[1531]: time="2025-08-19T00:22:53.760555000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 00:22:53.760681 containerd[1531]: time="2025-08-19T00:22:53.760567040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 00:22:53.760681 containerd[1531]: time="2025-08-19T00:22:53.760578960Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 00:22:53.760681 containerd[1531]: time="2025-08-19T00:22:53.760622800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 00:22:53.760681 containerd[1531]: time="2025-08-19T00:22:53.760638480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 00:22:53.760681 containerd[1531]: time="2025-08-19T00:22:53.760650840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 00:22:53.761555 containerd[1531]: time="2025-08-19T00:22:53.761517560Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:22:53.761664 containerd[1531]: time="2025-08-19T00:22:53.761611000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 00:22:53.761664 containerd[1531]: time="2025-08-19T00:22:53.761657000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:22:53.761712 containerd[1531]: time="2025-08-19T00:22:53.761670200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 00:22:53.761712 containerd[1531]: time="2025-08-19T00:22:53.761678800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 00:22:53.761712 containerd[1531]: time="2025-08-19T00:22:53.761689680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 00:22:53.761712 containerd[1531]: time="2025-08-19T00:22:53.761701400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 00:22:53.761944 containerd[1531]: time="2025-08-19T00:22:53.761892320Z" level=info msg="runtime interface created" Aug 19 00:22:53.761944 containerd[1531]: time="2025-08-19T00:22:53.761908520Z" level=info msg="created NRI interface" Aug 19 00:22:53.761944 containerd[1531]: time="2025-08-19T00:22:53.761923800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 00:22:53.761944 containerd[1531]: time="2025-08-19T00:22:53.761939160Z" level=info msg="Connect containerd service" Aug 19 00:22:53.762050 containerd[1531]: time="2025-08-19T00:22:53.761974640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 00:22:53.763107 
containerd[1531]: time="2025-08-19T00:22:53.763080040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 00:22:53.764783 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 00:22:53.789472 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 00:22:53.793165 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 00:22:53.820325 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 00:22:53.820741 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 00:22:53.824802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 00:22:53.833400 tar[1511]: linux-arm64/README.md Aug 19 00:22:53.851538 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 00:22:53.855471 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 00:22:53.860107 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 00:22:53.862081 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 19 00:22:53.864715 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 00:22:53.896698 containerd[1531]: time="2025-08-19T00:22:53.896629840Z" level=info msg="Start subscribing containerd event" Aug 19 00:22:53.896864 containerd[1531]: time="2025-08-19T00:22:53.896850400Z" level=info msg="Start recovering state" Aug 19 00:22:53.897175 containerd[1531]: time="2025-08-19T00:22:53.897102640Z" level=info msg="Start event monitor" Aug 19 00:22:53.897175 containerd[1531]: time="2025-08-19T00:22:53.897139800Z" level=info msg="Start cni network conf syncer for default" Aug 19 00:22:53.897175 containerd[1531]: time="2025-08-19T00:22:53.897108960Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 19 00:22:53.897268 containerd[1531]: time="2025-08-19T00:22:53.897151280Z" level=info msg="Start streaming server" Aug 19 00:22:53.897268 containerd[1531]: time="2025-08-19T00:22:53.897238960Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 00:22:53.897268 containerd[1531]: time="2025-08-19T00:22:53.897246680Z" level=info msg="runtime interface starting up..." Aug 19 00:22:53.897268 containerd[1531]: time="2025-08-19T00:22:53.897255280Z" level=info msg="starting plugins..." Aug 19 00:22:53.897403 containerd[1531]: time="2025-08-19T00:22:53.897273520Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 00:22:53.897403 containerd[1531]: time="2025-08-19T00:22:53.897334800Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 00:22:53.898580 containerd[1531]: time="2025-08-19T00:22:53.898538560Z" level=info msg="containerd successfully booted in 0.166106s" Aug 19 00:22:53.898677 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 00:22:54.256850 systemd-networkd[1424]: eth0: Gained IPv6LL Aug 19 00:22:54.260350 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 00:22:54.262680 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 00:22:54.265775 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 00:22:54.268846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:22:54.270895 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 00:22:54.305672 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 19 00:22:54.306491 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 19 00:22:54.308043 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
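[Editor's note] containerd's startup above logs "no network config found in /etc/cni/net.d"; that is expected on a node where no CNI network addon has been installed yet. A minimal sketch for checking whether a config has since appeared (the helper name `cni_ready` and the directory override are illustrative, not from the log; the directory and file patterns follow the log's error message):

```shell
# Return success if a CNI network config exists. containerd's CRI plugin
# looks for config files in /etc/cni/net.d (per the log message above).
cni_ready() {
  dir=${1:-/etc/cni/net.d}
  # Unmatched globs stay literal in POSIX sh, so -e filters them out.
  for f in "$dir"/*.conf "$dir"/*.conflist; do
    [ -e "$f" ] && return 0
  done
  return 1
}

cni_ready || echo "no CNI config yet - install a network addon first"
```

Until a file lands in that directory, containerd keeps serving but pod networking cannot be set up, which matches the "check CRI plugin status before setting up network for pods" wording in the error.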
Aug 19 00:22:54.313861 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 00:22:54.971359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:22:54.972954 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 00:22:54.977491 systemd[1]: Startup finished in 2.033s (kernel) + 5.838s (initrd) + 3.570s (userspace) = 11.441s. Aug 19 00:22:54.978610 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:22:55.452788 kubelet[1629]: E0819 00:22:55.452674 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:22:55.454991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:22:55.455149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:22:55.456503 systemd[1]: kubelet.service: Consumed 850ms CPU time, 257M memory peak. Aug 19 00:22:59.122781 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 00:22:59.123757 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:35492.service - OpenSSH per-connection server daemon (10.0.0.1:35492). Aug 19 00:22:59.211357 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 35492 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM Aug 19 00:22:59.212985 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:22:59.222590 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 00:22:59.223493 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
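[Editor's note] The kubelet crash above is the classic pre-initialization state: /var/lib/kubelet/config.yaml is only written during `kubeadm init` or `kubeadm join`, so until the node is initialized kubelet.service exits with status 1, exactly as logged. A small pre-flight check for that condition (the `kubelet_configured` helper is illustrative; the path is taken from the error message):

```shell
# The config file kubelet complained about; kubeadm creates it on init/join.
kubelet_configured() {
  [ -f "${1:-/var/lib/kubelet/config.yaml}" ]
}

if kubelet_configured; then
  echo "kubelet config present"
else
  echo "node not initialized yet: run kubeadm init or kubeadm join first"
fi
```

This distinguishes "not yet joined to a cluster" (harmless at this point in boot) from a genuine kubelet failure worth investigating.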
Aug 19 00:22:59.229966 systemd-logind[1502]: New session 1 of user core. Aug 19 00:22:59.247070 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 00:22:59.249702 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 00:22:59.265515 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 00:22:59.268055 systemd-logind[1502]: New session c1 of user core. Aug 19 00:22:59.382965 systemd[1648]: Queued start job for default target default.target. Aug 19 00:22:59.394527 systemd[1648]: Created slice app.slice - User Application Slice. Aug 19 00:22:59.394558 systemd[1648]: Reached target paths.target - Paths. Aug 19 00:22:59.394596 systemd[1648]: Reached target timers.target - Timers. Aug 19 00:22:59.395846 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 00:22:59.409985 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 00:22:59.410062 systemd[1648]: Reached target sockets.target - Sockets. Aug 19 00:22:59.410112 systemd[1648]: Reached target basic.target - Basic System. Aug 19 00:22:59.410254 systemd[1648]: Reached target default.target - Main User Target. Aug 19 00:22:59.410304 systemd[1648]: Startup finished in 136ms. Aug 19 00:22:59.410338 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 00:22:59.411506 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 00:22:59.478111 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:35504.service - OpenSSH per-connection server daemon (10.0.0.1:35504). Aug 19 00:22:59.537476 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 35504 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM Aug 19 00:22:59.538777 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:22:59.543342 systemd-logind[1502]: New session 2 of user core. 
Aug 19 00:22:59.550576 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 00:22:59.603380 sshd[1662]: Connection closed by 10.0.0.1 port 35504 Aug 19 00:22:59.603681 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Aug 19 00:22:59.614263 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:35504.service: Deactivated successfully. Aug 19 00:22:59.615938 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 00:22:59.617911 systemd-logind[1502]: Session 2 logged out. Waiting for processes to exit. Aug 19 00:22:59.619129 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:35520.service - OpenSSH per-connection server daemon (10.0.0.1:35520). Aug 19 00:22:59.620373 systemd-logind[1502]: Removed session 2. Aug 19 00:22:59.666526 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 35520 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM Aug 19 00:22:59.667836 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:22:59.672249 systemd-logind[1502]: New session 3 of user core. Aug 19 00:22:59.683591 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 00:22:59.732263 sshd[1671]: Connection closed by 10.0.0.1 port 35520 Aug 19 00:22:59.732550 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Aug 19 00:22:59.743349 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:35520.service: Deactivated successfully. Aug 19 00:22:59.744939 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 00:22:59.745987 systemd-logind[1502]: Session 3 logged out. Waiting for processes to exit. Aug 19 00:22:59.748290 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:35528.service - OpenSSH per-connection server daemon (10.0.0.1:35528). Aug 19 00:22:59.748970 systemd-logind[1502]: Removed session 3. 
Aug 19 00:22:59.802733 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 35528 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:22:59.804058 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:22:59.808052 systemd-logind[1502]: New session 4 of user core.
Aug 19 00:22:59.824594 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 19 00:22:59.876874 sshd[1680]: Connection closed by 10.0.0.1 port 35528
Aug 19 00:22:59.877224 sshd-session[1677]: pam_unix(sshd:session): session closed for user core
Aug 19 00:22:59.888443 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:35528.service: Deactivated successfully.
Aug 19 00:22:59.889983 systemd[1]: session-4.scope: Deactivated successfully.
Aug 19 00:22:59.890795 systemd-logind[1502]: Session 4 logged out. Waiting for processes to exit.
Aug 19 00:22:59.892891 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:35542.service - OpenSSH per-connection server daemon (10.0.0.1:35542).
Aug 19 00:22:59.893594 systemd-logind[1502]: Removed session 4.
Aug 19 00:22:59.946582 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 35542 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:22:59.947892 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:22:59.952487 systemd-logind[1502]: New session 5 of user core.
Aug 19 00:22:59.960558 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 19 00:23:00.021036 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 19 00:23:00.021317 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:23:00.032227 sudo[1690]: pam_unix(sudo:session): session closed for user root
Aug 19 00:23:00.033561 sshd[1689]: Connection closed by 10.0.0.1 port 35542
Aug 19 00:23:00.033941 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Aug 19 00:23:00.046639 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:35542.service: Deactivated successfully.
Aug 19 00:23:00.049830 systemd[1]: session-5.scope: Deactivated successfully.
Aug 19 00:23:00.050608 systemd-logind[1502]: Session 5 logged out. Waiting for processes to exit.
Aug 19 00:23:00.052751 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:35546.service - OpenSSH per-connection server daemon (10.0.0.1:35546).
Aug 19 00:23:00.054899 systemd-logind[1502]: Removed session 5.
Aug 19 00:23:00.111398 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 35546 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:23:00.112632 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:23:00.116534 systemd-logind[1502]: New session 6 of user core.
Aug 19 00:23:00.125561 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 19 00:23:00.177390 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 19 00:23:00.177663 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:23:00.267707 sudo[1701]: pam_unix(sudo:session): session closed for user root
Aug 19 00:23:00.272807 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 19 00:23:00.273056 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:23:00.281502 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:23:00.321212 augenrules[1723]: No rules
Aug 19 00:23:00.322294 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:23:00.323508 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:23:00.324784 sudo[1700]: pam_unix(sudo:session): session closed for user root
Aug 19 00:23:00.326914 sshd[1699]: Connection closed by 10.0.0.1 port 35546
Aug 19 00:23:00.326394 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Aug 19 00:23:00.337264 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:35546.service: Deactivated successfully.
Aug 19 00:23:00.338736 systemd[1]: session-6.scope: Deactivated successfully.
Aug 19 00:23:00.339375 systemd-logind[1502]: Session 6 logged out. Waiting for processes to exit.
Aug 19 00:23:00.341219 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:35552.service - OpenSSH per-connection server daemon (10.0.0.1:35552).
Aug 19 00:23:00.342145 systemd-logind[1502]: Removed session 6.
Aug 19 00:23:00.393228 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 35552 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:23:00.394347 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:23:00.398734 systemd-logind[1502]: New session 7 of user core.
Aug 19 00:23:00.409544 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 19 00:23:00.460566 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 19 00:23:00.461114 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 00:23:00.822115 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 19 00:23:00.842791 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 19 00:23:01.115967 dockerd[1757]: time="2025-08-19T00:23:01.115733408Z" level=info msg="Starting up"
Aug 19 00:23:01.117103 dockerd[1757]: time="2025-08-19T00:23:01.117057068Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 19 00:23:01.127693 dockerd[1757]: time="2025-08-19T00:23:01.127616379Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Aug 19 00:23:01.165942 dockerd[1757]: time="2025-08-19T00:23:01.165887806Z" level=info msg="Loading containers: start."
Aug 19 00:23:01.174424 kernel: Initializing XFRM netlink socket
Aug 19 00:23:01.413728 systemd-networkd[1424]: docker0: Link UP
Aug 19 00:23:01.417484 dockerd[1757]: time="2025-08-19T00:23:01.417391535Z" level=info msg="Loading containers: done."
Aug 19 00:23:01.436922 dockerd[1757]: time="2025-08-19T00:23:01.436850559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 19 00:23:01.437083 dockerd[1757]: time="2025-08-19T00:23:01.436949458Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Aug 19 00:23:01.437083 dockerd[1757]: time="2025-08-19T00:23:01.437034367Z" level=info msg="Initializing buildkit"
Aug 19 00:23:01.467276 dockerd[1757]: time="2025-08-19T00:23:01.467202555Z" level=info msg="Completed buildkit initialization"
Aug 19 00:23:01.474355 dockerd[1757]: time="2025-08-19T00:23:01.474302050Z" level=info msg="Daemon has completed initialization"
Aug 19 00:23:01.474749 dockerd[1757]: time="2025-08-19T00:23:01.474414042Z" level=info msg="API listen on /run/docker.sock"
Aug 19 00:23:01.474703 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 19 00:23:02.112855 containerd[1531]: time="2025-08-19T00:23:02.112791364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Aug 19 00:23:02.735240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224599542.mount: Deactivated successfully.
Aug 19 00:23:03.735855 containerd[1531]: time="2025-08-19T00:23:03.735799967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:03.736469 containerd[1531]: time="2025-08-19T00:23:03.736429363Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359"
Aug 19 00:23:03.737174 containerd[1531]: time="2025-08-19T00:23:03.737136942Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:03.739995 containerd[1531]: time="2025-08-19T00:23:03.739958306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:03.740816 containerd[1531]: time="2025-08-19T00:23:03.740780000Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.627943806s"
Aug 19 00:23:03.740857 containerd[1531]: time="2025-08-19T00:23:03.740818039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Aug 19 00:23:03.741737 containerd[1531]: time="2025-08-19T00:23:03.741710989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Aug 19 00:23:04.934626 containerd[1531]: time="2025-08-19T00:23:04.934555795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:04.935355 containerd[1531]: time="2025-08-19T00:23:04.935262424Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554"
Aug 19 00:23:04.935947 containerd[1531]: time="2025-08-19T00:23:04.935911700Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:04.938878 containerd[1531]: time="2025-08-19T00:23:04.938840906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:04.940672 containerd[1531]: time="2025-08-19T00:23:04.940628691Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.198882917s"
Aug 19 00:23:04.940672 containerd[1531]: time="2025-08-19T00:23:04.940664445Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Aug 19 00:23:04.941255 containerd[1531]: time="2025-08-19T00:23:04.941221384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Aug 19 00:23:05.705777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 19 00:23:05.708593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:23:05.907251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:05.912549 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 00:23:05.956188 kubelet[2045]: E0819 00:23:05.956020 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 00:23:05.960465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 00:23:05.960622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 00:23:05.962539 systemd[1]: kubelet.service: Consumed 153ms CPU time, 108.1M memory peak.
Aug 19 00:23:06.157898 containerd[1531]: time="2025-08-19T00:23:06.157841253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:06.158353 containerd[1531]: time="2025-08-19T00:23:06.158321113Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529"
Aug 19 00:23:06.159244 containerd[1531]: time="2025-08-19T00:23:06.159212374Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:06.161728 containerd[1531]: time="2025-08-19T00:23:06.161684933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:06.162830 containerd[1531]: time="2025-08-19T00:23:06.162793875Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.221535467s"
Aug 19 00:23:06.162830 containerd[1531]: time="2025-08-19T00:23:06.162830303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Aug 19 00:23:06.163292 containerd[1531]: time="2025-08-19T00:23:06.163271395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Aug 19 00:23:07.170871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368470865.mount: Deactivated successfully.
Aug 19 00:23:07.393505 containerd[1531]: time="2025-08-19T00:23:07.393442773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:07.394162 containerd[1531]: time="2025-08-19T00:23:07.394116083Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726"
Aug 19 00:23:07.395323 containerd[1531]: time="2025-08-19T00:23:07.395284905Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:07.397384 containerd[1531]: time="2025-08-19T00:23:07.397327241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:07.398035 containerd[1531]: time="2025-08-19T00:23:07.397894686Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.234593487s"
Aug 19 00:23:07.398035 containerd[1531]: time="2025-08-19T00:23:07.397928980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Aug 19 00:23:07.398517 containerd[1531]: time="2025-08-19T00:23:07.398493685Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 19 00:23:08.012620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461261070.mount: Deactivated successfully.
Aug 19 00:23:08.854483 containerd[1531]: time="2025-08-19T00:23:08.854422852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:08.854888 containerd[1531]: time="2025-08-19T00:23:08.854861982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Aug 19 00:23:08.855879 containerd[1531]: time="2025-08-19T00:23:08.855829542Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:08.858535 containerd[1531]: time="2025-08-19T00:23:08.858504383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:08.859556 containerd[1531]: time="2025-08-19T00:23:08.859523800Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.460997077s"
Aug 19 00:23:08.859624 containerd[1531]: time="2025-08-19T00:23:08.859558827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Aug 19 00:23:08.860136 containerd[1531]: time="2025-08-19T00:23:08.859961682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 19 00:23:09.309965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181947837.mount: Deactivated successfully.
Aug 19 00:23:09.314426 containerd[1531]: time="2025-08-19T00:23:09.314298292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 00:23:09.315070 containerd[1531]: time="2025-08-19T00:23:09.315021080Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Aug 19 00:23:09.316299 containerd[1531]: time="2025-08-19T00:23:09.316250025Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 00:23:09.317936 containerd[1531]: time="2025-08-19T00:23:09.317880653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 00:23:09.318829 containerd[1531]: time="2025-08-19T00:23:09.318761862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 458.773735ms"
Aug 19 00:23:09.318829 containerd[1531]: time="2025-08-19T00:23:09.318792154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Aug 19 00:23:09.319285 containerd[1531]: time="2025-08-19T00:23:09.319253697Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 19 00:23:09.909967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104265473.mount: Deactivated successfully.
Aug 19 00:23:11.475430 containerd[1531]: time="2025-08-19T00:23:11.474705955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:11.476747 containerd[1531]: time="2025-08-19T00:23:11.476701089Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Aug 19 00:23:11.477878 containerd[1531]: time="2025-08-19T00:23:11.477845194Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:11.481045 containerd[1531]: time="2025-08-19T00:23:11.481010145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:23:11.482928 containerd[1531]: time="2025-08-19T00:23:11.482928223Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.163555178s"
Aug 19 00:23:11.482975 containerd[1531]: time="2025-08-19T00:23:11.482964220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Aug 19 00:23:16.211103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 19 00:23:16.212902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:23:16.381992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:16.386334 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 00:23:16.425422 kubelet[2202]: E0819 00:23:16.425336 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 00:23:16.430759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 00:23:16.431176 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 00:23:16.434238 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.7M memory peak.
Aug 19 00:23:18.416022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:18.416488 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.7M memory peak.
Aug 19 00:23:18.418567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:23:18.440925 systemd[1]: Reload requested from client PID 2217 ('systemctl') (unit session-7.scope)...
Aug 19 00:23:18.440940 systemd[1]: Reloading...
Aug 19 00:23:18.515482 zram_generator::config[2265]: No configuration found.
Aug 19 00:23:18.822042 systemd[1]: Reloading finished in 380 ms.
Aug 19 00:23:18.886967 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 19 00:23:18.887086 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 19 00:23:18.887342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:18.887417 systemd[1]: kubelet.service: Consumed 88ms CPU time, 94.9M memory peak.
Aug 19 00:23:18.889766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:23:19.035929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:19.040065 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 19 00:23:19.074078 kubelet[2304]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 00:23:19.074078 kubelet[2304]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 19 00:23:19.074078 kubelet[2304]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 00:23:19.074564 kubelet[2304]: I0819 00:23:19.074507 2304 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 19 00:23:19.564411 kubelet[2304]: I0819 00:23:19.564132 2304 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 19 00:23:19.564411 kubelet[2304]: I0819 00:23:19.564168 2304 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 19 00:23:19.565026 kubelet[2304]: I0819 00:23:19.565003 2304 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 19 00:23:19.596913 kubelet[2304]: E0819 00:23:19.596869 2304 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:19.598136 kubelet[2304]: I0819 00:23:19.598111 2304 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 19 00:23:19.605503 kubelet[2304]: I0819 00:23:19.605472 2304 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 19 00:23:19.608827 kubelet[2304]: I0819 00:23:19.608800 2304 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 19 00:23:19.609524 kubelet[2304]: I0819 00:23:19.609465 2304 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 19 00:23:19.609700 kubelet[2304]: I0819 00:23:19.609518 2304 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 19 00:23:19.609981 kubelet[2304]: I0819 00:23:19.609956 2304 topology_manager.go:138] "Creating topology manager with none policy"
Aug 19 00:23:19.609981 kubelet[2304]: I0819 00:23:19.609969 2304 container_manager_linux.go:304] "Creating device plugin manager"
Aug 19 00:23:19.610499 kubelet[2304]: I0819 00:23:19.610485 2304 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:23:19.614120 kubelet[2304]: I0819 00:23:19.614072 2304 kubelet.go:446] "Attempting to sync node with API server"
Aug 19 00:23:19.614120 kubelet[2304]: I0819 00:23:19.614096 2304 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 19 00:23:19.614239 kubelet[2304]: I0819 00:23:19.614128 2304 kubelet.go:352] "Adding apiserver pod source"
Aug 19 00:23:19.614239 kubelet[2304]: I0819 00:23:19.614147 2304 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 19 00:23:19.624369 kubelet[2304]: W0819 00:23:19.624173 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
Aug 19 00:23:19.624369 kubelet[2304]: E0819 00:23:19.624256 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:19.624873 kubelet[2304]: W0819 00:23:19.624833 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
Aug 19 00:23:19.625009 kubelet[2304]: E0819 00:23:19.624991 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:19.625827 kubelet[2304]: I0819 00:23:19.625803 2304 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Aug 19 00:23:19.627122 kubelet[2304]: I0819 00:23:19.627095 2304 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 19 00:23:19.627598 kubelet[2304]: W0819 00:23:19.627585 2304 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 19 00:23:19.630235 kubelet[2304]: I0819 00:23:19.630206 2304 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 19 00:23:19.630326 kubelet[2304]: I0819 00:23:19.630251 2304 server.go:1287] "Started kubelet"
Aug 19 00:23:19.630932 kubelet[2304]: I0819 00:23:19.630889 2304 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 19 00:23:19.631583 kubelet[2304]: I0819 00:23:19.631516 2304 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 19 00:23:19.631901 kubelet[2304]: I0819 00:23:19.631873 2304 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 19 00:23:19.632295 kubelet[2304]: I0819 00:23:19.632272 2304 server.go:479] "Adding debug handlers to kubelet server"
Aug 19 00:23:19.642363 kubelet[2304]: I0819 00:23:19.642331 2304 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 19 00:23:19.642720 kubelet[2304]: E0819 00:23:19.642255 2304 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d033f2dc76a95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 00:23:19.630228117 +0000 UTC m=+0.587227078,LastTimestamp:2025-08-19 00:23:19.630228117 +0000 UTC m=+0.587227078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Aug 19 00:23:19.643280 kubelet[2304]: I0819 00:23:19.643251 2304 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 19 00:23:19.643828 kubelet[2304]: E0819 00:23:19.643782 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 19 00:23:19.643898 kubelet[2304]: I0819 00:23:19.643833 2304 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 19 00:23:19.644061 kubelet[2304]: I0819 00:23:19.644035 2304 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 19 00:23:19.644109 kubelet[2304]: I0819 00:23:19.644104 2304 reconciler.go:26] "Reconciler: start to sync state"
Aug 19 00:23:19.645236 kubelet[2304]: W0819 00:23:19.645172 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
Aug 19 00:23:19.645371 kubelet[2304]: E0819 00:23:19.645348 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:19.645472 kubelet[2304]: E0819 00:23:19.645441 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms"
Aug 19 00:23:19.645949 kubelet[2304]: I0819 00:23:19.645919 2304 factory.go:221] Registration of the systemd container factory successfully
Aug 19 00:23:19.646224 kubelet[2304]: I0819 00:23:19.646087 2304 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 19 00:23:19.646701 kubelet[2304]: E0819 00:23:19.646681 2304 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 19 00:23:19.647695 kubelet[2304]: I0819 00:23:19.647670 2304 factory.go:221] Registration of the containerd container factory successfully
Aug 19 00:23:19.659712 kubelet[2304]: I0819 00:23:19.659687 2304 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 19 00:23:19.659712 kubelet[2304]: I0819 00:23:19.659705 2304 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 19 00:23:19.659870 kubelet[2304]: I0819 00:23:19.659727 2304 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:23:19.669095 kubelet[2304]: I0819 00:23:19.669052 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 19 00:23:19.670326 kubelet[2304]: I0819 00:23:19.670286 2304 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Aug 19 00:23:19.670326 kubelet[2304]: I0819 00:23:19.670322 2304 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 00:23:19.670453 kubelet[2304]: I0819 00:23:19.670350 2304 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 19 00:23:19.670453 kubelet[2304]: I0819 00:23:19.670358 2304 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 00:23:19.671078 kubelet[2304]: E0819 00:23:19.670852 2304 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:23:19.671205 kubelet[2304]: W0819 00:23:19.671131 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused Aug 19 00:23:19.671205 kubelet[2304]: E0819 00:23:19.671180 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:23:19.729295 kubelet[2304]: I0819 00:23:19.729220 2304 policy_none.go:49] "None policy: Start" Aug 19 00:23:19.729295 kubelet[2304]: I0819 00:23:19.729286 2304 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 00:23:19.729295 kubelet[2304]: I0819 00:23:19.729301 2304 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:23:19.736550 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 19 00:23:19.744842 kubelet[2304]: E0819 00:23:19.744803 2304 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 19 00:23:19.750859 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 19 00:23:19.754001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 19 00:23:19.771015 kubelet[2304]: E0819 00:23:19.770960 2304 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 19 00:23:19.772788 kubelet[2304]: I0819 00:23:19.772535 2304 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 19 00:23:19.772983 kubelet[2304]: I0819 00:23:19.772949 2304 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 19 00:23:19.773039 kubelet[2304]: I0819 00:23:19.772969 2304 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 19 00:23:19.773241 kubelet[2304]: I0819 00:23:19.773223 2304 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 19 00:23:19.775080 kubelet[2304]: E0819 00:23:19.775049 2304 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 19 00:23:19.775163 kubelet[2304]: E0819 00:23:19.775127 2304 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 19 00:23:19.846622 kubelet[2304]: E0819 00:23:19.846496 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms"
Aug 19 00:23:19.874911 kubelet[2304]: I0819 00:23:19.874878 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 19 00:23:19.875573 kubelet[2304]: E0819 00:23:19.875518 2304 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Aug 19 00:23:19.980718 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Aug 19 00:23:19.994407 kubelet[2304]: E0819 00:23:19.994327 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:19.997540 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Aug 19 00:23:20.016813 kubelet[2304]: E0819 00:23:20.016704 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:20.019411 systemd[1]: Created slice kubepods-burstable-pod12639cff03608c27604fbdc87b410364.slice - libcontainer container kubepods-burstable-pod12639cff03608c27604fbdc87b410364.slice.
Aug 19 00:23:20.021211 kubelet[2304]: E0819 00:23:20.021181 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:20.046462 kubelet[2304]: I0819 00:23:20.046420 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:20.046525 kubelet[2304]: I0819 00:23:20.046459 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:20.046569 kubelet[2304]: I0819 00:23:20.046528 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:20.046569 kubelet[2304]: I0819 00:23:20.046559 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12639cff03608c27604fbdc87b410364-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12639cff03608c27604fbdc87b410364\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:20.046686 kubelet[2304]: I0819 00:23:20.046576 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12639cff03608c27604fbdc87b410364-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12639cff03608c27604fbdc87b410364\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:20.046725 kubelet[2304]: I0819 00:23:20.046702 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:20.046725 kubelet[2304]: I0819 00:23:20.046719 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:20.046782 kubelet[2304]: I0819 00:23:20.046735 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:20.046782 kubelet[2304]: I0819 00:23:20.046758 2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12639cff03608c27604fbdc87b410364-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12639cff03608c27604fbdc87b410364\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:20.077680 kubelet[2304]: I0819 00:23:20.077644 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 19 00:23:20.078059 kubelet[2304]: E0819 00:23:20.078012 2304 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Aug 19 00:23:20.247856 kubelet[2304]: E0819 00:23:20.247715 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms"
Aug 19 00:23:20.295117 kubelet[2304]: E0819 00:23:20.295071 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.295739 containerd[1531]: time="2025-08-19T00:23:20.295693561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}"
Aug 19 00:23:20.314186 containerd[1531]: time="2025-08-19T00:23:20.314013796Z" level=info msg="connecting to shim 77d7e69327729a7256b23074ca9ef5a213dea83bf43dee2599d686aa624cd811" address="unix:///run/containerd/s/ce41ccf206db36d638ea79ea96492ffac7468da558f6886b96c3ffff69be5875" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:20.318249 kubelet[2304]: E0819 00:23:20.317926 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.318541 containerd[1531]: time="2025-08-19T00:23:20.318500092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}"
Aug 19 00:23:20.321847 kubelet[2304]: E0819 00:23:20.321804 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.323499 containerd[1531]: time="2025-08-19T00:23:20.323453881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12639cff03608c27604fbdc87b410364,Namespace:kube-system,Attempt:0,}"
Aug 19 00:23:20.338597 systemd[1]: Started cri-containerd-77d7e69327729a7256b23074ca9ef5a213dea83bf43dee2599d686aa624cd811.scope - libcontainer container 77d7e69327729a7256b23074ca9ef5a213dea83bf43dee2599d686aa624cd811.
Aug 19 00:23:20.355459 containerd[1531]: time="2025-08-19T00:23:20.355399702Z" level=info msg="connecting to shim 3a6f716747a24f3d4b365167bbe3dab655e4975f13fa72c26feb174c94a8ec2e" address="unix:///run/containerd/s/0685aeb90abb747038ff0a5e74ebdd284a7dabf221e879e1cd214dca7e7b4a3e" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:20.358496 containerd[1531]: time="2025-08-19T00:23:20.358329512Z" level=info msg="connecting to shim 1a93c79f764b79390ca317c619bbf116969b36746130b1f0a81e62631aa7247c" address="unix:///run/containerd/s/4ef72ee8562c4333a620a992f68e2b055f06209b569f33a72e0e93ca1eb2049b" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:20.388610 systemd[1]: Started cri-containerd-3a6f716747a24f3d4b365167bbe3dab655e4975f13fa72c26feb174c94a8ec2e.scope - libcontainer container 3a6f716747a24f3d4b365167bbe3dab655e4975f13fa72c26feb174c94a8ec2e.
Aug 19 00:23:20.392741 systemd[1]: Started cri-containerd-1a93c79f764b79390ca317c619bbf116969b36746130b1f0a81e62631aa7247c.scope - libcontainer container 1a93c79f764b79390ca317c619bbf116969b36746130b1f0a81e62631aa7247c.
Aug 19 00:23:20.395012 containerd[1531]: time="2025-08-19T00:23:20.394935016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d7e69327729a7256b23074ca9ef5a213dea83bf43dee2599d686aa624cd811\""
Aug 19 00:23:20.396401 kubelet[2304]: E0819 00:23:20.396360 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.400223 containerd[1531]: time="2025-08-19T00:23:20.399725271Z" level=info msg="CreateContainer within sandbox \"77d7e69327729a7256b23074ca9ef5a213dea83bf43dee2599d686aa624cd811\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 19 00:23:20.414016 containerd[1531]: time="2025-08-19T00:23:20.413958535Z" level=info msg="Container 8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:23:20.424845 containerd[1531]: time="2025-08-19T00:23:20.424785723Z" level=info msg="CreateContainer within sandbox \"77d7e69327729a7256b23074ca9ef5a213dea83bf43dee2599d686aa624cd811\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2\""
Aug 19 00:23:20.426807 containerd[1531]: time="2025-08-19T00:23:20.426761079Z" level=info msg="StartContainer for \"8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2\""
Aug 19 00:23:20.429183 containerd[1531]: time="2025-08-19T00:23:20.429116094Z" level=info msg="connecting to shim 8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2" address="unix:///run/containerd/s/ce41ccf206db36d638ea79ea96492ffac7468da558f6886b96c3ffff69be5875" protocol=ttrpc version=3
Aug 19 00:23:20.431371 containerd[1531]: time="2025-08-19T00:23:20.431329322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a6f716747a24f3d4b365167bbe3dab655e4975f13fa72c26feb174c94a8ec2e\""
Aug 19 00:23:20.433694 kubelet[2304]: E0819 00:23:20.433512 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.435218 containerd[1531]: time="2025-08-19T00:23:20.435180062Z" level=info msg="CreateContainer within sandbox \"3a6f716747a24f3d4b365167bbe3dab655e4975f13fa72c26feb174c94a8ec2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 19 00:23:20.441836 containerd[1531]: time="2025-08-19T00:23:20.441788827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12639cff03608c27604fbdc87b410364,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a93c79f764b79390ca317c619bbf116969b36746130b1f0a81e62631aa7247c\""
Aug 19 00:23:20.442951 kubelet[2304]: E0819 00:23:20.442765 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.445044 containerd[1531]: time="2025-08-19T00:23:20.445006615Z" level=info msg="CreateContainer within sandbox \"1a93c79f764b79390ca317c619bbf116969b36746130b1f0a81e62631aa7247c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 19 00:23:20.447208 containerd[1531]: time="2025-08-19T00:23:20.447174705Z" level=info msg="Container c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:23:20.456301 containerd[1531]: time="2025-08-19T00:23:20.456129312Z" level=info msg="Container 25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:23:20.457734 containerd[1531]: time="2025-08-19T00:23:20.457681832Z" level=info msg="CreateContainer within sandbox \"3a6f716747a24f3d4b365167bbe3dab655e4975f13fa72c26feb174c94a8ec2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490\""
Aug 19 00:23:20.458070 kubelet[2304]: W0819 00:23:20.457876 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
Aug 19 00:23:20.458070 kubelet[2304]: E0819 00:23:20.457939 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:20.458159 containerd[1531]: time="2025-08-19T00:23:20.458102665Z" level=info msg="StartContainer for \"c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490\""
Aug 19 00:23:20.459253 containerd[1531]: time="2025-08-19T00:23:20.459220254Z" level=info msg="connecting to shim c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490" address="unix:///run/containerd/s/0685aeb90abb747038ff0a5e74ebdd284a7dabf221e879e1cd214dca7e7b4a3e" protocol=ttrpc version=3
Aug 19 00:23:20.459643 systemd[1]: Started cri-containerd-8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2.scope - libcontainer container 8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2.
Aug 19 00:23:20.467206 containerd[1531]: time="2025-08-19T00:23:20.467144027Z" level=info msg="CreateContainer within sandbox \"1a93c79f764b79390ca317c619bbf116969b36746130b1f0a81e62631aa7247c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73\""
Aug 19 00:23:20.467806 containerd[1531]: time="2025-08-19T00:23:20.467775096Z" level=info msg="StartContainer for \"25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73\""
Aug 19 00:23:20.469800 containerd[1531]: time="2025-08-19T00:23:20.469750972Z" level=info msg="connecting to shim 25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73" address="unix:///run/containerd/s/4ef72ee8562c4333a620a992f68e2b055f06209b569f33a72e0e93ca1eb2049b" protocol=ttrpc version=3
Aug 19 00:23:20.480334 kubelet[2304]: I0819 00:23:20.480265 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 19 00:23:20.480769 kubelet[2304]: E0819 00:23:20.480733 2304 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Aug 19 00:23:20.482577 systemd[1]: Started cri-containerd-c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490.scope - libcontainer container c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490.
Aug 19 00:23:20.486032 systemd[1]: Started cri-containerd-25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73.scope - libcontainer container 25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73.
Aug 19 00:23:20.509207 containerd[1531]: time="2025-08-19T00:23:20.509094114Z" level=info msg="StartContainer for \"8673a4f962efb626b2296cd1435ea61e842c22d09f70a5906be73ec6f9b685e2\" returns successfully"
Aug 19 00:23:20.541292 containerd[1531]: time="2025-08-19T00:23:20.541241279Z" level=info msg="StartContainer for \"c9dc3cd9639c39e288d8416e4445d90e546f11450833686d3f7179f3f44a7490\" returns successfully"
Aug 19 00:23:20.570484 containerd[1531]: time="2025-08-19T00:23:20.570431358Z" level=info msg="StartContainer for \"25049b1bac6110c5bfe6c2b6bf3568537b5c1a867a47f9362b9e5b2f2841cb73\" returns successfully"
Aug 19 00:23:20.573740 kubelet[2304]: W0819 00:23:20.573684 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
Aug 19 00:23:20.573868 kubelet[2304]: E0819 00:23:20.573750 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:20.611711 kubelet[2304]: W0819 00:23:20.611600 2304 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.89:6443: connect: connection refused
Aug 19 00:23:20.611711 kubelet[2304]: E0819 00:23:20.611678 2304 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError"
Aug 19 00:23:20.682759 kubelet[2304]: E0819 00:23:20.682594 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:20.682759 kubelet[2304]: E0819 00:23:20.682734 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.686217 kubelet[2304]: E0819 00:23:20.686171 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:20.686535 kubelet[2304]: E0819 00:23:20.686307 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:20.690414 kubelet[2304]: E0819 00:23:20.690326 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:20.690558 kubelet[2304]: E0819 00:23:20.690502 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:21.283093 kubelet[2304]: I0819 00:23:21.283053 2304 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 19 00:23:21.693419 kubelet[2304]: E0819 00:23:21.692761 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:21.693419 kubelet[2304]: E0819 00:23:21.692911 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:21.693419 kubelet[2304]: E0819 00:23:21.693158 2304 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Aug 19 00:23:21.693419 kubelet[2304]: E0819 00:23:21.693244 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:21.888130 kubelet[2304]: E0819 00:23:21.888091 2304 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Aug 19 00:23:21.995897 kubelet[2304]: I0819 00:23:21.995645 2304 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 19 00:23:22.044438 kubelet[2304]: I0819 00:23:22.044394 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:22.069267 kubelet[2304]: E0819 00:23:22.069210 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:22.069267 kubelet[2304]: I0819 00:23:22.069260 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:22.073367 kubelet[2304]: E0819 00:23:22.073328 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:22.073367 kubelet[2304]: I0819 00:23:22.073368 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:22.075874 kubelet[2304]: E0819 00:23:22.075842 2304 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:22.615722 kubelet[2304]: I0819 00:23:22.615684 2304 apiserver.go:52] "Watching apiserver"
Aug 19 00:23:22.644146 kubelet[2304]: I0819 00:23:22.644106 2304 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 19 00:23:23.524109 kubelet[2304]: I0819 00:23:23.524075 2304 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:23.528870 kubelet[2304]: E0819 00:23:23.528823 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:23.695669 kubelet[2304]: E0819 00:23:23.695577 2304 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:24.012949 systemd[1]: Reload requested from client PID 2582 ('systemctl') (unit session-7.scope)...
Aug 19 00:23:24.012966 systemd[1]: Reloading...
Aug 19 00:23:24.101544 zram_generator::config[2629]: No configuration found.
Aug 19 00:23:24.269093 systemd[1]: Reloading finished in 255 ms.
Aug 19 00:23:24.292852 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:23:24.311374 systemd[1]: kubelet.service: Deactivated successfully.
Aug 19 00:23:24.311661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:24.311731 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 129.9M memory peak.
Aug 19 00:23:24.315635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:23:24.468022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:23:24.481803 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 19 00:23:24.522371 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 00:23:24.522371 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 19 00:23:24.522371 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 00:23:24.522371 kubelet[2667]: I0819 00:23:24.522237 2667 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 19 00:23:24.529180 kubelet[2667]: I0819 00:23:24.529120 2667 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 19 00:23:24.529180 kubelet[2667]: I0819 00:23:24.529165 2667 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 19 00:23:24.529518 kubelet[2667]: I0819 00:23:24.529492 2667 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 19 00:23:24.531005 kubelet[2667]: I0819 00:23:24.530977 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 19 00:23:24.534034 kubelet[2667]: I0819 00:23:24.533995 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 19 00:23:24.538309 kubelet[2667]: I0819 00:23:24.538269 2667 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 19 00:23:24.545955 kubelet[2667]: I0819 00:23:24.544057 2667 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 19 00:23:24.545955 kubelet[2667]: I0819 00:23:24.544246 2667 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 19 00:23:24.545955 kubelet[2667]: I0819 00:23:24.544269 2667 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 19 00:23:24.545955 kubelet[2667]: I0819 00:23:24.544491 2667 topology_manager.go:138] "Creating topology manager with none policy"
Aug 19 00:23:24.546207 kubelet[2667]: I0819 00:23:24.544501 2667 container_manager_linux.go:304] "Creating device plugin manager"
Aug 19 00:23:24.546207 kubelet[2667]: I0819 00:23:24.544545 2667 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:23:24.546207 kubelet[2667]: I0819 00:23:24.544684 2667 kubelet.go:446] "Attempting to sync node with API server"
Aug 19 00:23:24.546207 kubelet[2667]: I0819 00:23:24.544696 2667 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 19 00:23:24.546207 kubelet[2667]: I0819 00:23:24.544719 2667 kubelet.go:352] "Adding apiserver pod source"
Aug 19 00:23:24.546207 kubelet[2667]: I0819 00:23:24.544729 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 19 00:23:24.549315 kubelet[2667]: I0819 00:23:24.549274 2667 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Aug 19 00:23:24.550546 kubelet[2667]: I0819 00:23:24.550512 2667 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 19 00:23:24.551051 kubelet[2667]: I0819 00:23:24.551031 2667 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 19 00:23:24.551106 kubelet[2667]: I0819 00:23:24.551077 2667 server.go:1287] "Started kubelet"
Aug 19 00:23:24.551492 kubelet[2667]: I0819 00:23:24.551444 2667 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 19 00:23:24.552721 kubelet[2667]: I0819 00:23:24.552144 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 19 00:23:24.552721 kubelet[2667]: I0819 00:23:24.552465 2667 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 19 00:23:24.556455 kubelet[2667]: I0819 00:23:24.555890 2667 server.go:479] "Adding debug handlers to kubelet server"
Aug 19 00:23:24.560046 kubelet[2667]: I0819 00:23:24.559912 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 19 00:23:24.572845 kubelet[2667]: I0819 00:23:24.572697 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 19 00:23:24.573607 kubelet[2667]: I0819 00:23:24.573466 2667 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 19 00:23:24.574048 kubelet[2667]: I0819 00:23:24.574006 2667 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 19 00:23:24.574270 kubelet[2667]: I0819 00:23:24.574242 2667 reconciler.go:26] "Reconciler: start to sync state"
Aug 19 00:23:24.578112 kubelet[2667]: E0819 00:23:24.578006 2667 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 19 00:23:24.579040 kubelet[2667]: I0819 00:23:24.579005 2667 factory.go:221] Registration of the containerd container factory successfully
Aug 19 00:23:24.579040 kubelet[2667]: I0819 00:23:24.579038 2667 factory.go:221] Registration of the systemd container factory successfully
Aug 19 00:23:24.579435 kubelet[2667]: I0819 00:23:24.579163 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 19 00:23:24.585182 kubelet[2667]: I0819 00:23:24.584990 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 19 00:23:24.588239 kubelet[2667]: I0819 00:23:24.588202 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 19 00:23:24.588442 kubelet[2667]: I0819 00:23:24.588422 2667 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 19 00:23:24.589074 kubelet[2667]: I0819 00:23:24.588568 2667 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 19 00:23:24.589074 kubelet[2667]: I0819 00:23:24.588583 2667 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 19 00:23:24.589074 kubelet[2667]: E0819 00:23:24.588643 2667 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 19 00:23:24.619869 kubelet[2667]: I0819 00:23:24.619824 2667 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 19 00:23:24.619869 kubelet[2667]: I0819 00:23:24.619847 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 19 00:23:24.619869 kubelet[2667]: I0819 00:23:24.619882 2667 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:23:24.620073 kubelet[2667]: I0819 00:23:24.620054 2667 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 19 00:23:24.620101 kubelet[2667]: I0819 00:23:24.620071 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 19 00:23:24.620101 kubelet[2667]: I0819 00:23:24.620091 2667 policy_none.go:49] "None policy: Start"
Aug 19 00:23:24.620101 kubelet[2667]: I0819 00:23:24.620100 2667 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 19 00:23:24.620174 kubelet[2667]: I0819 00:23:24.620110 2667 state_mem.go:35] "Initializing new in-memory state store"
Aug 19 00:23:24.620225 kubelet[2667]: I0819 00:23:24.620214 2667 state_mem.go:75] "Updated machine memory state"
Aug 19 00:23:24.626571 kubelet[2667]: I0819 00:23:24.626517 2667 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 19 00:23:24.627029 kubelet[2667]: I0819 00:23:24.626738 2667 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 19 00:23:24.627029 kubelet[2667]: I0819 00:23:24.626758 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 19 00:23:24.627029 kubelet[2667]: I0819 00:23:24.626948 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 19 00:23:24.628577 kubelet[2667]: E0819 00:23:24.628220 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 19 00:23:24.689528 kubelet[2667]: I0819 00:23:24.689481 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:24.689528 kubelet[2667]: I0819 00:23:24.689533 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:24.689676 kubelet[2667]: I0819 00:23:24.689539 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:24.696373 kubelet[2667]: E0819 00:23:24.696322 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:24.738221 kubelet[2667]: I0819 00:23:24.738186 2667 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 19 00:23:24.744734 kubelet[2667]: I0819 00:23:24.744686 2667 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 19 00:23:24.744876 kubelet[2667]: I0819 00:23:24.744776 2667 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 19 00:23:24.775977 kubelet[2667]: I0819 00:23:24.775837 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12639cff03608c27604fbdc87b410364-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12639cff03608c27604fbdc87b410364\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:24.775977 kubelet[2667]: I0819 00:23:24.775887 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:24.775977 kubelet[2667]: I0819 00:23:24.775913 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12639cff03608c27604fbdc87b410364-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12639cff03608c27604fbdc87b410364\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:24.775977 kubelet[2667]: I0819 00:23:24.775929 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:24.775977 kubelet[2667]: I0819 00:23:24.775946 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:24.776194 kubelet[2667]: I0819 00:23:24.775965 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:24.776194 kubelet[2667]: I0819 00:23:24.775983 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:23:24.776194 kubelet[2667]: I0819 00:23:24.776000 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Aug 19 00:23:24.776194 kubelet[2667]: I0819 00:23:24.776014 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12639cff03608c27604fbdc87b410364-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12639cff03608c27604fbdc87b410364\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:24.996097 kubelet[2667]: E0819 00:23:24.996048 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:24.997217 kubelet[2667]: E0819 00:23:24.997156 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:24.997350 kubelet[2667]: E0819 00:23:24.997335 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:25.020786 sudo[2708]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 19 00:23:25.021098 sudo[2708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 19 00:23:25.345902 sudo[2708]: pam_unix(sudo:session): session closed for user root
Aug 19 00:23:25.545635 kubelet[2667]: I0819 00:23:25.545587 2667 apiserver.go:52] "Watching apiserver"
Aug 19 00:23:25.574306 kubelet[2667]: I0819 00:23:25.574210 2667 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 19 00:23:25.605889 kubelet[2667]: I0819 00:23:25.605611 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:25.605889 kubelet[2667]: E0819 00:23:25.605832 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:25.606330 kubelet[2667]: E0819 00:23:25.606259 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:25.615975 kubelet[2667]: E0819 00:23:25.615923 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:23:25.616840 kubelet[2667]: E0819 00:23:25.616694 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:25.630972 kubelet[2667]: I0819 00:23:25.630906 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.630889899 podStartE2EDuration="1.630889899s" podCreationTimestamp="2025-08-19 00:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:23:25.63030705 +0000 UTC m=+1.144982097" watchObservedRunningTime="2025-08-19 00:23:25.630889899 +0000 UTC m=+1.145564906"
Aug 19 00:23:25.652158 kubelet[2667]: I0819 00:23:25.652073 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.652054572 podStartE2EDuration="2.652054572s" podCreationTimestamp="2025-08-19 00:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:23:25.643514149 +0000 UTC m=+1.158189116" watchObservedRunningTime="2025-08-19 00:23:25.652054572 +0000 UTC m=+1.166729579"
Aug 19 00:23:25.665017 kubelet[2667]: I0819 00:23:25.664945 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.664927133 podStartE2EDuration="1.664927133s" podCreationTimestamp="2025-08-19 00:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:23:25.65212955 +0000 UTC m=+1.166804557" watchObservedRunningTime="2025-08-19 00:23:25.664927133 +0000 UTC m=+1.179602140"
Aug 19 00:23:26.607538 kubelet[2667]: E0819 00:23:26.607468 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:26.607538 kubelet[2667]: E0819 00:23:26.607509 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:27.532470 sudo[1736]: pam_unix(sudo:session): session closed for user root
Aug 19 00:23:27.533688 sshd[1735]: Connection closed by 10.0.0.1 port 35552
Aug 19 00:23:27.534225 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Aug 19 00:23:27.538711 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:35552.service: Deactivated successfully.
Aug 19 00:23:27.542601 systemd[1]: session-7.scope: Deactivated successfully.
Aug 19 00:23:27.542868 systemd[1]: session-7.scope: Consumed 9.652s CPU time, 261M memory peak.
Aug 19 00:23:27.544084 systemd-logind[1502]: Session 7 logged out. Waiting for processes to exit.
Aug 19 00:23:27.545873 systemd-logind[1502]: Removed session 7.
Aug 19 00:23:27.609283 kubelet[2667]: E0819 00:23:27.609227 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:30.264500 kubelet[2667]: E0819 00:23:30.264466 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:30.614152 kubelet[2667]: I0819 00:23:30.613873 2667 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 19 00:23:30.614256 containerd[1531]: time="2025-08-19T00:23:30.614210935Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 19 00:23:30.615138 kubelet[2667]: I0819 00:23:30.614450 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 19 00:23:30.615138 kubelet[2667]: E0819 00:23:30.614780 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.570975 systemd[1]: Created slice kubepods-besteffort-pod803cc5c5_f48a_45e5_be8a_fb7f354def96.slice - libcontainer container kubepods-besteffort-pod803cc5c5_f48a_45e5_be8a_fb7f354def96.slice.
Aug 19 00:23:31.593043 systemd[1]: Created slice kubepods-burstable-pod115eb11b_db07_43a6_ab0b_f6525ceb2c72.slice - libcontainer container kubepods-burstable-pod115eb11b_db07_43a6_ab0b_f6525ceb2c72.slice.
Aug 19 00:23:31.617488 kubelet[2667]: E0819 00:23:31.617295 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.684706 systemd[1]: Created slice kubepods-besteffort-pod0020ed66_0db3_4260_952a_343926b4ee57.slice - libcontainer container kubepods-besteffort-pod0020ed66_0db3_4260_952a_343926b4ee57.slice.
Aug 19 00:23:31.715298 kubelet[2667]: I0819 00:23:31.715229 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-kernel\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715298 kubelet[2667]: I0819 00:23:31.715293 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hubble-tls\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715298 kubelet[2667]: I0819 00:23:31.715313 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-bpf-maps\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715545 kubelet[2667]: I0819 00:23:31.715331 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-cgroup\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715545 kubelet[2667]: I0819 00:23:31.715351 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-config-path\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715545 kubelet[2667]: I0819 00:23:31.715368 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/803cc5c5-f48a-45e5-be8a-fb7f354def96-xtables-lock\") pod \"kube-proxy-lrbnf\" (UID: \"803cc5c5-f48a-45e5-be8a-fb7f354def96\") " pod="kube-system/kube-proxy-lrbnf"
Aug 19 00:23:31.715545 kubelet[2667]: I0819 00:23:31.715530 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/803cc5c5-f48a-45e5-be8a-fb7f354def96-lib-modules\") pod \"kube-proxy-lrbnf\" (UID: \"803cc5c5-f48a-45e5-be8a-fb7f354def96\") " pod="kube-system/kube-proxy-lrbnf"
Aug 19 00:23:31.715635 kubelet[2667]: I0819 00:23:31.715581 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-etc-cni-netd\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715635 kubelet[2667]: I0819 00:23:31.715630 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-lib-modules\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715682 kubelet[2667]: I0819 00:23:31.715654 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-net\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715706 kubelet[2667]: I0819 00:23:31.715682 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktwq\" (UniqueName: \"kubernetes.io/projected/803cc5c5-f48a-45e5-be8a-fb7f354def96-kube-api-access-4ktwq\") pod \"kube-proxy-lrbnf\" (UID: \"803cc5c5-f48a-45e5-be8a-fb7f354def96\") " pod="kube-system/kube-proxy-lrbnf"
Aug 19 00:23:31.715730 kubelet[2667]: I0819 00:23:31.715711 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-run\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715753 kubelet[2667]: I0819 00:23:31.715746 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cni-path\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715780 kubelet[2667]: I0819 00:23:31.715765 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-xtables-lock\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715804 kubelet[2667]: I0819 00:23:31.715787 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/803cc5c5-f48a-45e5-be8a-fb7f354def96-kube-proxy\") pod \"kube-proxy-lrbnf\" (UID: \"803cc5c5-f48a-45e5-be8a-fb7f354def96\") " pod="kube-system/kube-proxy-lrbnf"
Aug 19 00:23:31.715843 kubelet[2667]: I0819 00:23:31.715820 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sw58\" (UniqueName: \"kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-kube-api-access-6sw58\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715883 kubelet[2667]: I0819 00:23:31.715848 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hostproc\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.715911 kubelet[2667]: I0819 00:23:31.715901 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/115eb11b-db07-43a6-ab0b-f6525ceb2c72-clustermesh-secrets\") pod \"cilium-mxqld\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " pod="kube-system/cilium-mxqld"
Aug 19 00:23:31.816646 kubelet[2667]: I0819 00:23:31.816557 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlchl\" (UniqueName: \"kubernetes.io/projected/0020ed66-0db3-4260-952a-343926b4ee57-kube-api-access-tlchl\") pod \"cilium-operator-6c4d7847fc-xxffp\" (UID: \"0020ed66-0db3-4260-952a-343926b4ee57\") " pod="kube-system/cilium-operator-6c4d7847fc-xxffp"
Aug 19 00:23:31.816646 kubelet[2667]: I0819 00:23:31.816636 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0020ed66-0db3-4260-952a-343926b4ee57-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xxffp\" (UID: \"0020ed66-0db3-4260-952a-343926b4ee57\") " pod="kube-system/cilium-operator-6c4d7847fc-xxffp"
Aug 19 00:23:31.886532 kubelet[2667]: E0819 00:23:31.886400 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.887706 containerd[1531]: time="2025-08-19T00:23:31.887665791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lrbnf,Uid:803cc5c5-f48a-45e5-be8a-fb7f354def96,Namespace:kube-system,Attempt:0,}"
Aug 19 00:23:31.896514 kubelet[2667]: E0819 00:23:31.896466 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.897419 containerd[1531]: time="2025-08-19T00:23:31.897356176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxqld,Uid:115eb11b-db07-43a6-ab0b-f6525ceb2c72,Namespace:kube-system,Attempt:0,}"
Aug 19 00:23:31.909460 containerd[1531]: time="2025-08-19T00:23:31.909405906Z" level=info msg="connecting to shim 81565c916442f9d4f530df91aca7d0c225b6278e85b86647c849d28412bd591d" address="unix:///run/containerd/s/f818749d498aa2b35294f3524e0869af5f0a84cb6df304472814f671a0006719" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:31.921913 containerd[1531]: time="2025-08-19T00:23:31.921831242Z" level=info msg="connecting to shim 705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb" address="unix:///run/containerd/s/7d9ba2c29c6675f060b2d3a9eed3fcfa26d43894da85620680b3cf6cd6dc068f" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:31.937668 systemd[1]: Started cri-containerd-81565c916442f9d4f530df91aca7d0c225b6278e85b86647c849d28412bd591d.scope - libcontainer container 81565c916442f9d4f530df91aca7d0c225b6278e85b86647c849d28412bd591d.
Aug 19 00:23:31.944690 systemd[1]: Started cri-containerd-705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb.scope - libcontainer container 705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb.
Aug 19 00:23:31.975754 containerd[1531]: time="2025-08-19T00:23:31.975679981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lrbnf,Uid:803cc5c5-f48a-45e5-be8a-fb7f354def96,Namespace:kube-system,Attempt:0,} returns sandbox id \"81565c916442f9d4f530df91aca7d0c225b6278e85b86647c849d28412bd591d\""
Aug 19 00:23:31.977054 kubelet[2667]: E0819 00:23:31.977007 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.980640 containerd[1531]: time="2025-08-19T00:23:31.980167479Z" level=info msg="CreateContainer within sandbox \"81565c916442f9d4f530df91aca7d0c225b6278e85b86647c849d28412bd591d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 19 00:23:31.989714 kubelet[2667]: E0819 00:23:31.989525 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.990731 containerd[1531]: time="2025-08-19T00:23:31.990676080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xxffp,Uid:0020ed66-0db3-4260-952a-343926b4ee57,Namespace:kube-system,Attempt:0,}"
Aug 19 00:23:31.992091 containerd[1531]: time="2025-08-19T00:23:31.992023683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxqld,Uid:115eb11b-db07-43a6-ab0b-f6525ceb2c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\""
Aug 19 00:23:31.992890 kubelet[2667]: E0819 00:23:31.992848 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:31.993875 containerd[1531]: time="2025-08-19T00:23:31.993810853Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 19 00:23:32.005117 containerd[1531]: time="2025-08-19T00:23:32.004801090Z" level=info msg="Container 04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:23:32.013995 kubelet[2667]: E0819 00:23:32.013949 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:32.021310 containerd[1531]: time="2025-08-19T00:23:32.021234757Z" level=info msg="CreateContainer within sandbox \"81565c916442f9d4f530df91aca7d0c225b6278e85b86647c849d28412bd591d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f\""
Aug 19 00:23:32.022634 containerd[1531]: time="2025-08-19T00:23:32.022593199Z" level=info msg="StartContainer for \"04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f\""
Aug 19 00:23:32.025603 containerd[1531]: time="2025-08-19T00:23:32.025281786Z" level=info msg="connecting to shim 04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f" address="unix:///run/containerd/s/f818749d498aa2b35294f3524e0869af5f0a84cb6df304472814f671a0006719" protocol=ttrpc version=3
Aug 19 00:23:32.037638 containerd[1531]: time="2025-08-19T00:23:32.037563759Z" level=info msg="connecting to shim ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417" address="unix:///run/containerd/s/3aead7d9337f06af7e64f37895275478a47a63b6ae30d6cb111920050245633a" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:32.051598 systemd[1]: Started cri-containerd-04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f.scope - libcontainer container 04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f.
Aug 19 00:23:32.065654 systemd[1]: Started cri-containerd-ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417.scope - libcontainer container ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417.
Aug 19 00:23:32.106670 containerd[1531]: time="2025-08-19T00:23:32.106575820Z" level=info msg="StartContainer for \"04cea142db45dd28b68892c3df106d7ed38ad5b139b283441b84cdefd40ca01f\" returns successfully"
Aug 19 00:23:32.110478 containerd[1531]: time="2025-08-19T00:23:32.109329928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xxffp,Uid:0020ed66-0db3-4260-952a-343926b4ee57,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\""
Aug 19 00:23:32.112929 kubelet[2667]: E0819 00:23:32.112895 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:32.624069 kubelet[2667]: E0819 00:23:32.623705 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:32.626059 kubelet[2667]: E0819 00:23:32.625884 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:32.650238 kubelet[2667]: I0819 00:23:32.650097 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lrbnf" podStartSLOduration=1.650068934 podStartE2EDuration="1.650068934s" podCreationTimestamp="2025-08-19 00:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:23:32.637410647 +0000 UTC m=+8.152085654" watchObservedRunningTime="2025-08-19 00:23:32.650068934 +0000 UTC m=+8.164743941" Aug 19 00:23:37.144783 kubelet[2667]: E0819 00:23:37.144738 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:38.542797 update_engine[1503]: I20250819 00:23:38.542729 1503 update_attempter.cc:509] Updating boot flags... Aug 19 00:23:38.805322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587192801.mount: Deactivated successfully. Aug 19 00:23:40.129706 containerd[1531]: time="2025-08-19T00:23:40.129642062Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:23:40.130427 containerd[1531]: time="2025-08-19T00:23:40.130372721Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Aug 19 00:23:40.131119 containerd[1531]: time="2025-08-19T00:23:40.131074088Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:23:40.133273 containerd[1531]: time="2025-08-19T00:23:40.133234091Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.139349592s" Aug 19 00:23:40.133327 containerd[1531]: 
time="2025-08-19T00:23:40.133272787Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 19 00:23:40.144865 containerd[1531]: time="2025-08-19T00:23:40.144824750Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 19 00:23:40.152197 containerd[1531]: time="2025-08-19T00:23:40.152134459Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 00:23:40.164259 containerd[1531]: time="2025-08-19T00:23:40.164187347Z" level=info msg="Container 9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:23:40.170361 containerd[1531]: time="2025-08-19T00:23:40.170304007Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\"" Aug 19 00:23:40.172943 containerd[1531]: time="2025-08-19T00:23:40.172903830Z" level=info msg="StartContainer for \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\"" Aug 19 00:23:40.174498 containerd[1531]: time="2025-08-19T00:23:40.174426973Z" level=info msg="connecting to shim 9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778" address="unix:///run/containerd/s/7d9ba2c29c6675f060b2d3a9eed3fcfa26d43894da85620680b3cf6cd6dc068f" protocol=ttrpc version=3 Aug 19 00:23:40.219605 systemd[1]: Started cri-containerd-9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778.scope - libcontainer container 
9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778. Aug 19 00:23:40.259006 containerd[1531]: time="2025-08-19T00:23:40.258947651Z" level=info msg="StartContainer for \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" returns successfully" Aug 19 00:23:40.365626 systemd[1]: cri-containerd-9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778.scope: Deactivated successfully. Aug 19 00:23:40.391643 containerd[1531]: time="2025-08-19T00:23:40.391273154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" id:\"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" pid:3107 exited_at:{seconds:1755563020 nanos:390749260}" Aug 19 00:23:40.397163 containerd[1531]: time="2025-08-19T00:23:40.396951596Z" level=info msg="received exit event container_id:\"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" id:\"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" pid:3107 exited_at:{seconds:1755563020 nanos:390749260}" Aug 19 00:23:40.442281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778-rootfs.mount: Deactivated successfully. 
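The `exited_at:{seconds:1755563020 nanos:390749260}` field in the TaskExit events above is a Unix epoch timestamp; converting the seconds value back to UTC reproduces the wall-clock time of the surrounding entries:

```python
from datetime import datetime, timezone

# exited_at from the TaskExit event above: seconds:1755563020
exited_at = datetime.fromtimestamp(1755563020, tz=timezone.utc)
print(exited_at.isoformat())  # lines up with the 00:23:40 entries above
```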
Aug 19 00:23:40.663444 kubelet[2667]: E0819 00:23:40.663082 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:40.668298 containerd[1531]: time="2025-08-19T00:23:40.668224670Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 00:23:40.688028 containerd[1531]: time="2025-08-19T00:23:40.687966622Z" level=info msg="Container 0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:23:40.700405 containerd[1531]: time="2025-08-19T00:23:40.700249364Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\"" Aug 19 00:23:40.700804 containerd[1531]: time="2025-08-19T00:23:40.700775579Z" level=info msg="StartContainer for \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\"" Aug 19 00:23:40.701913 containerd[1531]: time="2025-08-19T00:23:40.701878910Z" level=info msg="connecting to shim 0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5" address="unix:///run/containerd/s/7d9ba2c29c6675f060b2d3a9eed3fcfa26d43894da85620680b3cf6cd6dc068f" protocol=ttrpc version=3 Aug 19 00:23:40.727590 systemd[1]: Started cri-containerd-0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5.scope - libcontainer container 0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5. 
Aug 19 00:23:40.767543 containerd[1531]: time="2025-08-19T00:23:40.767495379Z" level=info msg="StartContainer for \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" returns successfully" Aug 19 00:23:40.802892 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 00:23:40.803109 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:23:40.803291 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:23:40.804738 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 00:23:40.806292 systemd[1]: cri-containerd-0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5.scope: Deactivated successfully. Aug 19 00:23:40.807515 containerd[1531]: time="2025-08-19T00:23:40.806939386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" id:\"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" pid:3154 exited_at:{seconds:1755563020 nanos:806626738}" Aug 19 00:23:40.807515 containerd[1531]: time="2025-08-19T00:23:40.807112257Z" level=info msg="received exit event container_id:\"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" id:\"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" pid:3154 exited_at:{seconds:1755563020 nanos:806626738}" Aug 19 00:23:40.835441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 00:23:41.356001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123533863.mount: Deactivated successfully. 
Aug 19 00:23:41.674273 kubelet[2667]: E0819 00:23:41.674228 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:41.682901 containerd[1531]: time="2025-08-19T00:23:41.682858964Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 00:23:41.705518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581768772.mount: Deactivated successfully. Aug 19 00:23:41.707278 containerd[1531]: time="2025-08-19T00:23:41.707226731Z" level=info msg="Container 5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:23:41.715183 containerd[1531]: time="2025-08-19T00:23:41.715138812Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\"" Aug 19 00:23:41.716178 containerd[1531]: time="2025-08-19T00:23:41.716149365Z" level=info msg="StartContainer for \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\"" Aug 19 00:23:41.718089 containerd[1531]: time="2025-08-19T00:23:41.718060949Z" level=info msg="connecting to shim 5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4" address="unix:///run/containerd/s/7d9ba2c29c6675f060b2d3a9eed3fcfa26d43894da85620680b3cf6cd6dc068f" protocol=ttrpc version=3 Aug 19 00:23:41.737602 systemd[1]: Started cri-containerd-5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4.scope - libcontainer container 5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4. 
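Containerd reports the cilium image pull above as taking 8.139349592s (measured from its internal pull start). Subtracting the logged PullImage timestamp from the logged "Pulled image" timestamp gives a close figure; a sketch of that check, with the nanosecond RFC3339 stamps copied from the entries above and trimmed to microseconds for `datetime`:

```python
from datetime import datetime, timezone

def parse_ts(ts: str) -> datetime:
    # containerd logs nanosecond RFC3339 stamps; keep only microseconds.
    head, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(head, "%Y-%m-%dT%H:%M:%S").replace(
        microsecond=int(frac[:6]), tzinfo=timezone.utc)

pull_requested = parse_ts("2025-08-19T00:23:31.993810853Z")  # PullImage entry
pull_finished  = parse_ts("2025-08-19T00:23:40.133234091Z")  # Pulled entry
print((pull_finished - pull_requested).total_seconds())
```

The small gap versus 8.139349592s is the lag between the PullImage log line and containerd actually starting the pull.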
Aug 19 00:23:41.747737 containerd[1531]: time="2025-08-19T00:23:41.747677721Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:23:41.748773 containerd[1531]: time="2025-08-19T00:23:41.748743576Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Aug 19 00:23:41.749549 containerd[1531]: time="2025-08-19T00:23:41.749519197Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:23:41.751152 containerd[1531]: time="2025-08-19T00:23:41.750699617Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.605680428s" Aug 19 00:23:41.751152 containerd[1531]: time="2025-08-19T00:23:41.750735631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 19 00:23:41.752606 containerd[1531]: time="2025-08-19T00:23:41.752574907Z" level=info msg="CreateContainer within sandbox \"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 19 00:23:41.799355 systemd[1]: cri-containerd-5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4.scope: 
Deactivated successfully. Aug 19 00:23:41.805584 containerd[1531]: time="2025-08-19T00:23:41.805514119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" id:\"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" pid:3218 exited_at:{seconds:1755563021 nanos:805116404}" Aug 19 00:23:41.813671 containerd[1531]: time="2025-08-19T00:23:41.813561892Z" level=info msg="received exit event container_id:\"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" id:\"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" pid:3218 exited_at:{seconds:1755563021 nanos:805116404}" Aug 19 00:23:41.815990 containerd[1531]: time="2025-08-19T00:23:41.815961546Z" level=info msg="StartContainer for \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" returns successfully" Aug 19 00:23:41.821747 containerd[1531]: time="2025-08-19T00:23:41.821703742Z" level=info msg="Container dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:23:41.827556 containerd[1531]: time="2025-08-19T00:23:41.827513524Z" level=info msg="CreateContainer within sandbox \"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\"" Aug 19 00:23:41.828115 containerd[1531]: time="2025-08-19T00:23:41.828089308Z" level=info msg="StartContainer for \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\"" Aug 19 00:23:41.829185 containerd[1531]: time="2025-08-19T00:23:41.829151642Z" level=info msg="connecting to shim dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500" address="unix:///run/containerd/s/3aead7d9337f06af7e64f37895275478a47a63b6ae30d6cb111920050245633a" protocol=ttrpc version=3 Aug 19 00:23:41.850540 systemd[1]: Started 
cri-containerd-dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500.scope - libcontainer container dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500. Aug 19 00:23:41.881750 containerd[1531]: time="2025-08-19T00:23:41.881706144Z" level=info msg="StartContainer for \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" returns successfully" Aug 19 00:23:42.677668 kubelet[2667]: E0819 00:23:42.677457 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:42.683239 kubelet[2667]: E0819 00:23:42.683145 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:42.685567 containerd[1531]: time="2025-08-19T00:23:42.685525546Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 00:23:42.691793 kubelet[2667]: I0819 00:23:42.691728 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xxffp" podStartSLOduration=2.05576045 podStartE2EDuration="11.6917082s" podCreationTimestamp="2025-08-19 00:23:31 +0000 UTC" firstStartedPulling="2025-08-19 00:23:32.115351741 +0000 UTC m=+7.630026748" lastFinishedPulling="2025-08-19 00:23:41.751299491 +0000 UTC m=+17.265974498" observedRunningTime="2025-08-19 00:23:42.690158105 +0000 UTC m=+18.204833112" watchObservedRunningTime="2025-08-19 00:23:42.6917082 +0000 UTC m=+18.206383287" Aug 19 00:23:42.712913 containerd[1531]: time="2025-08-19T00:23:42.712861769Z" level=info msg="Container edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:23:42.716009 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4082884894.mount: Deactivated successfully. Aug 19 00:23:42.720019 containerd[1531]: time="2025-08-19T00:23:42.719975688Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\"" Aug 19 00:23:42.720608 containerd[1531]: time="2025-08-19T00:23:42.720575271Z" level=info msg="StartContainer for \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\"" Aug 19 00:23:42.721453 containerd[1531]: time="2025-08-19T00:23:42.721375088Z" level=info msg="connecting to shim edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766" address="unix:///run/containerd/s/7d9ba2c29c6675f060b2d3a9eed3fcfa26d43894da85620680b3cf6cd6dc068f" protocol=ttrpc version=3 Aug 19 00:23:42.747574 systemd[1]: Started cri-containerd-edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766.scope - libcontainer container edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766. Aug 19 00:23:42.771423 systemd[1]: cri-containerd-edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766.scope: Deactivated successfully. 
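The "Observed pod startup duration" lines above carry enough fields to reconstruct the reported numbers: for cilium-operator-6c4d7847fc-xxffp, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window. This is our reading of the tracker's fields, with the seconds-within-the-minute taken from the log entries:

```python
# Reconstructing the pod_startup_latency_tracker figures for
# cilium-operator-6c4d7847fc-xxffp from the fields logged above
# (seconds past 00:23, copied from the log; interpretation is ours).
created       = 31.0           # podCreationTimestamp 00:23:31
observed      = 42.6917082     # watchObservedRunningTime
pull_started  = 32.115351741   # firstStartedPulling
pull_finished = 41.751299491   # lastFinishedPulling

e2e = observed - created                    # podStartE2EDuration
slo = e2e - (pull_finished - pull_started)  # SLO duration excludes pull time
print(round(e2e, 7), round(slo, 8))
```

The same rule explains the earlier kube-proxy-lrbnf line, where both durations are equal (1.650068934s) because its pulling timestamps are zero.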
Aug 19 00:23:42.772781 containerd[1531]: time="2025-08-19T00:23:42.772735625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" id:\"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" pid:3292 exited_at:{seconds:1755563022 nanos:771622572}" Aug 19 00:23:42.773690 containerd[1531]: time="2025-08-19T00:23:42.773655006Z" level=info msg="received exit event container_id:\"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" id:\"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" pid:3292 exited_at:{seconds:1755563022 nanos:771622572}" Aug 19 00:23:42.775788 containerd[1531]: time="2025-08-19T00:23:42.775744741Z" level=info msg="StartContainer for \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" returns successfully" Aug 19 00:23:42.796627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766-rootfs.mount: Deactivated successfully. 
Aug 19 00:23:43.688974 kubelet[2667]: E0819 00:23:43.688939 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:43.690353 kubelet[2667]: E0819 00:23:43.688998 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:43.695962 containerd[1531]: time="2025-08-19T00:23:43.695880955Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 00:23:43.717393 containerd[1531]: time="2025-08-19T00:23:43.717311299Z" level=info msg="Container 67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:23:43.719607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857042946.mount: Deactivated successfully. 
Aug 19 00:23:43.730617 containerd[1531]: time="2025-08-19T00:23:43.730553385Z" level=info msg="CreateContainer within sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\"" Aug 19 00:23:43.731391 containerd[1531]: time="2025-08-19T00:23:43.731298769Z" level=info msg="StartContainer for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\"" Aug 19 00:23:43.732907 containerd[1531]: time="2025-08-19T00:23:43.732868564Z" level=info msg="connecting to shim 67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da" address="unix:///run/containerd/s/7d9ba2c29c6675f060b2d3a9eed3fcfa26d43894da85620680b3cf6cd6dc068f" protocol=ttrpc version=3 Aug 19 00:23:43.760624 systemd[1]: Started cri-containerd-67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da.scope - libcontainer container 67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da. Aug 19 00:23:43.802950 containerd[1531]: time="2025-08-19T00:23:43.799616745Z" level=info msg="StartContainer for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" returns successfully" Aug 19 00:23:43.930145 containerd[1531]: time="2025-08-19T00:23:43.929662967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" id:\"a65231e742f5df6230b11419a132275fbd3ee83f7b0326faa259adcd5d8e3c90\" pid:3360 exited_at:{seconds:1755563023 nanos:929355618}" Aug 19 00:23:44.019543 kubelet[2667]: I0819 00:23:44.018391 2667 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 00:23:44.072205 systemd[1]: Created slice kubepods-burstable-poda9d702d8_7112_4d68_a127_f2f3d20d90b8.slice - libcontainer container kubepods-burstable-poda9d702d8_7112_4d68_a127_f2f3d20d90b8.slice. 
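The `kubepods-burstable-pod…slice` units created above encode the pod UID with its dashes mapped to underscores (systemd reserves `-` as the slice hierarchy separator). A small sketch of that mapping, assuming this naming rule; the helper is ours:

```python
# Sketch of the kubepods slice naming visible in the systemd entries above:
# the pod UID's dashes become underscores inside the slice unit name.
def pod_slice(qos: str, uid: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "a9d702d8-7112-4d68-a127-f2f3d20d90b8"))
```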
Aug 19 00:23:44.083579 systemd[1]: Created slice kubepods-burstable-pod160b9868_26cb_43ed_9fdc_c0e749c2d60d.slice - libcontainer container kubepods-burstable-pod160b9868_26cb_43ed_9fdc_c0e749c2d60d.slice. Aug 19 00:23:44.196168 kubelet[2667]: I0819 00:23:44.196123 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswzf\" (UniqueName: \"kubernetes.io/projected/a9d702d8-7112-4d68-a127-f2f3d20d90b8-kube-api-access-wswzf\") pod \"coredns-668d6bf9bc-52dq7\" (UID: \"a9d702d8-7112-4d68-a127-f2f3d20d90b8\") " pod="kube-system/coredns-668d6bf9bc-52dq7" Aug 19 00:23:44.196168 kubelet[2667]: I0819 00:23:44.196167 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/160b9868-26cb-43ed-9fdc-c0e749c2d60d-config-volume\") pod \"coredns-668d6bf9bc-k8j77\" (UID: \"160b9868-26cb-43ed-9fdc-c0e749c2d60d\") " pod="kube-system/coredns-668d6bf9bc-k8j77" Aug 19 00:23:44.196322 kubelet[2667]: I0819 00:23:44.196190 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjngg\" (UniqueName: \"kubernetes.io/projected/160b9868-26cb-43ed-9fdc-c0e749c2d60d-kube-api-access-rjngg\") pod \"coredns-668d6bf9bc-k8j77\" (UID: \"160b9868-26cb-43ed-9fdc-c0e749c2d60d\") " pod="kube-system/coredns-668d6bf9bc-k8j77" Aug 19 00:23:44.196559 kubelet[2667]: I0819 00:23:44.196541 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d702d8-7112-4d68-a127-f2f3d20d90b8-config-volume\") pod \"coredns-668d6bf9bc-52dq7\" (UID: \"a9d702d8-7112-4d68-a127-f2f3d20d90b8\") " pod="kube-system/coredns-668d6bf9bc-52dq7" Aug 19 00:23:44.379372 kubelet[2667]: E0819 00:23:44.378908 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:44.380507 containerd[1531]: time="2025-08-19T00:23:44.380397362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-52dq7,Uid:a9d702d8-7112-4d68-a127-f2f3d20d90b8,Namespace:kube-system,Attempt:0,}" Aug 19 00:23:44.387316 kubelet[2667]: E0819 00:23:44.387214 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:44.388057 containerd[1531]: time="2025-08-19T00:23:44.387916662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8j77,Uid:160b9868-26cb-43ed-9fdc-c0e749c2d60d,Namespace:kube-system,Attempt:0,}" Aug 19 00:23:44.699628 kubelet[2667]: E0819 00:23:44.699467 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:44.713947 kubelet[2667]: I0819 00:23:44.713858 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mxqld" podStartSLOduration=5.562544115 podStartE2EDuration="13.713839198s" podCreationTimestamp="2025-08-19 00:23:31 +0000 UTC" firstStartedPulling="2025-08-19 00:23:31.993356876 +0000 UTC m=+7.508031883" lastFinishedPulling="2025-08-19 00:23:40.144651959 +0000 UTC m=+15.659326966" observedRunningTime="2025-08-19 00:23:44.713166451 +0000 UTC m=+20.227841458" watchObservedRunningTime="2025-08-19 00:23:44.713839198 +0000 UTC m=+20.228514285" Aug 19 00:23:45.697423 kubelet[2667]: E0819 00:23:45.697370 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:23:46.189815 systemd-networkd[1424]: cilium_host: Link UP Aug 19 00:23:46.189935 systemd-networkd[1424]: cilium_net: Link UP Aug 19 
Aug 19 00:23:46.190048 systemd-networkd[1424]: cilium_host: Gained carrier
Aug 19 00:23:46.190162 systemd-networkd[1424]: cilium_net: Gained carrier
Aug 19 00:23:46.290419 systemd-networkd[1424]: cilium_vxlan: Link UP
Aug 19 00:23:46.290427 systemd-networkd[1424]: cilium_vxlan: Gained carrier
Aug 19 00:23:46.471644 systemd-networkd[1424]: cilium_host: Gained IPv6LL
Aug 19 00:23:46.602428 kernel: NET: Registered PF_ALG protocol family
Aug 19 00:23:46.699341 kubelet[2667]: E0819 00:23:46.699310 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:47.119660 systemd-networkd[1424]: cilium_net: Gained IPv6LL
Aug 19 00:23:47.269934 systemd-networkd[1424]: lxc_health: Link UP
Aug 19 00:23:47.270190 systemd-networkd[1424]: lxc_health: Gained carrier
Aug 19 00:23:47.532986 systemd-networkd[1424]: lxcb6e0d978b696: Link UP
Aug 19 00:23:47.533627 systemd-networkd[1424]: lxccfe4d16156cd: Link UP
Aug 19 00:23:47.536476 kernel: eth0: renamed from tmpd474e
Aug 19 00:23:47.536679 kernel: eth0: renamed from tmp11094
Aug 19 00:23:47.536655 systemd-networkd[1424]: lxccfe4d16156cd: Gained carrier
Aug 19 00:23:47.541844 systemd-networkd[1424]: lxcb6e0d978b696: Gained carrier
Aug 19 00:23:47.903157 kubelet[2667]: E0819 00:23:47.903055 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:48.276560 systemd-networkd[1424]: cilium_vxlan: Gained IPv6LL
Aug 19 00:23:48.592620 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Aug 19 00:23:49.234425 systemd-networkd[1424]: lxccfe4d16156cd: Gained IPv6LL
Aug 19 00:23:49.423603 systemd-networkd[1424]: lxcb6e0d978b696: Gained IPv6LL
Aug 19 00:23:51.249056 containerd[1531]: time="2025-08-19T00:23:51.249004006Z" level=info msg="connecting to shim 11094e8e40e68394f5fd7d2016f7eb0c8b903b4ca499bf1876bad65192fd7025" address="unix:///run/containerd/s/85a7344e0358a68d006eb2ccd73fc3ec5cf93e6cbc0acdb1047772f7e42cd3c7" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:51.249574 containerd[1531]: time="2025-08-19T00:23:51.249006967Z" level=info msg="connecting to shim d474e0ac3438e5d60a6f3e7e209e4aa1d89cddd6c98a9d35cb24cd3dfb79a52e" address="unix:///run/containerd/s/cb8fdf4ffb6c2e80d91c2e626094676749df175e6f967b1912e047ff9e42bb3a" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:23:51.280557 systemd[1]: Started cri-containerd-11094e8e40e68394f5fd7d2016f7eb0c8b903b4ca499bf1876bad65192fd7025.scope - libcontainer container 11094e8e40e68394f5fd7d2016f7eb0c8b903b4ca499bf1876bad65192fd7025.
Aug 19 00:23:51.281907 systemd[1]: Started cri-containerd-d474e0ac3438e5d60a6f3e7e209e4aa1d89cddd6c98a9d35cb24cd3dfb79a52e.scope - libcontainer container d474e0ac3438e5d60a6f3e7e209e4aa1d89cddd6c98a9d35cb24cd3dfb79a52e.
Aug 19 00:23:51.297251 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 19 00:23:51.304281 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 19 00:23:51.338284 containerd[1531]: time="2025-08-19T00:23:51.338136376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8j77,Uid:160b9868-26cb-43ed-9fdc-c0e749c2d60d,Namespace:kube-system,Attempt:0,} returns sandbox id \"11094e8e40e68394f5fd7d2016f7eb0c8b903b4ca499bf1876bad65192fd7025\""
Aug 19 00:23:51.338988 kubelet[2667]: E0819 00:23:51.338946 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:51.344964 containerd[1531]: time="2025-08-19T00:23:51.344911871Z" level=info msg="CreateContainer within sandbox \"11094e8e40e68394f5fd7d2016f7eb0c8b903b4ca499bf1876bad65192fd7025\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 19 00:23:51.349579 containerd[1531]: time="2025-08-19T00:23:51.349536107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-52dq7,Uid:a9d702d8-7112-4d68-a127-f2f3d20d90b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d474e0ac3438e5d60a6f3e7e209e4aa1d89cddd6c98a9d35cb24cd3dfb79a52e\""
Aug 19 00:23:51.352408 kubelet[2667]: E0819 00:23:51.352361 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:51.355040 containerd[1531]: time="2025-08-19T00:23:51.355002674Z" level=info msg="CreateContainer within sandbox \"d474e0ac3438e5d60a6f3e7e209e4aa1d89cddd6c98a9d35cb24cd3dfb79a52e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 19 00:23:51.358167 containerd[1531]: time="2025-08-19T00:23:51.358008106Z" level=info msg="Container 2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:23:51.366480 containerd[1531]: time="2025-08-19T00:23:51.366434093Z" level=info msg="Container db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:23:51.368280 containerd[1531]: time="2025-08-19T00:23:51.368246186Z" level=info msg="CreateContainer within sandbox \"11094e8e40e68394f5fd7d2016f7eb0c8b903b4ca499bf1876bad65192fd7025\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463\""
Aug 19 00:23:51.369029 containerd[1531]: time="2025-08-19T00:23:51.368985731Z" level=info msg="StartContainer for \"2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463\""
Aug 19 00:23:51.369798 containerd[1531]: time="2025-08-19T00:23:51.369774848Z" level=info msg="connecting to shim 2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463" address="unix:///run/containerd/s/85a7344e0358a68d006eb2ccd73fc3ec5cf93e6cbc0acdb1047772f7e42cd3c7" protocol=ttrpc version=3
Aug 19 00:23:51.373548 containerd[1531]: time="2025-08-19T00:23:51.373423521Z" level=info msg="CreateContainer within sandbox \"d474e0ac3438e5d60a6f3e7e209e4aa1d89cddd6c98a9d35cb24cd3dfb79a52e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989\""
Aug 19 00:23:51.373924 containerd[1531]: time="2025-08-19T00:23:51.373898760Z" level=info msg="StartContainer for \"db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989\""
Aug 19 00:23:51.376244 containerd[1531]: time="2025-08-19T00:23:51.376205376Z" level=info msg="connecting to shim db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989" address="unix:///run/containerd/s/cb8fdf4ffb6c2e80d91c2e626094676749df175e6f967b1912e047ff9e42bb3a" protocol=ttrpc version=3
Aug 19 00:23:51.396651 systemd[1]: Started cri-containerd-2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463.scope - libcontainer container 2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463.
Aug 19 00:23:51.400280 systemd[1]: Started cri-containerd-db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989.scope - libcontainer container db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989.
Aug 19 00:23:51.433204 containerd[1531]: time="2025-08-19T00:23:51.433167901Z" level=info msg="StartContainer for \"db014b9f21f2dc798216d45291ba6e4f308aeb08f087e4776ac4f3194e38a989\" returns successfully"
Aug 19 00:23:51.435340 containerd[1531]: time="2025-08-19T00:23:51.435238699Z" level=info msg="StartContainer for \"2123f5552b4312a1ac5afc573bc67d1f741b9e2584eb4b1d029b4bf92b022463\" returns successfully"
Aug 19 00:23:51.715560 kubelet[2667]: E0819 00:23:51.715463 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:51.719704 kubelet[2667]: E0819 00:23:51.719623 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:51.730105 kubelet[2667]: I0819 00:23:51.729821 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k8j77" podStartSLOduration=20.729705779 podStartE2EDuration="20.729705779s" podCreationTimestamp="2025-08-19 00:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:23:51.726781167 +0000 UTC m=+27.241456174" watchObservedRunningTime="2025-08-19 00:23:51.729705779 +0000 UTC m=+27.244380786"
Aug 19 00:23:51.757013 kubelet[2667]: I0819 00:23:51.756954 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-52dq7" podStartSLOduration=20.756831842 podStartE2EDuration="20.756831842s" podCreationTimestamp="2025-08-19 00:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:23:51.740246735 +0000 UTC m=+27.254921742" watchObservedRunningTime="2025-08-19 00:23:51.756831842 +0000 UTC m=+27.271506849"
Aug 19 00:23:52.724870 kubelet[2667]: E0819 00:23:52.724775 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:52.725579 kubelet[2667]: E0819 00:23:52.725543 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:53.010592 kubelet[2667]: I0819 00:23:53.010137 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 19 00:23:53.010690 kubelet[2667]: E0819 00:23:53.010629 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:53.726736 kubelet[2667]: E0819 00:23:53.726395 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:53.726736 kubelet[2667]: E0819 00:23:53.726652 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:53.727611 kubelet[2667]: E0819 00:23:53.727568 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:23:54.989257 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:37710.service - OpenSSH per-connection server daemon (10.0.0.1:37710).
Aug 19 00:23:55.063865 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 37710 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:23:55.065451 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:23:55.070079 systemd-logind[1502]: New session 8 of user core.
Aug 19 00:23:55.080605 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 19 00:23:55.247430 sshd[4015]: Connection closed by 10.0.0.1 port 37710
Aug 19 00:23:55.248324 sshd-session[4012]: pam_unix(sshd:session): session closed for user core
Aug 19 00:23:55.251984 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:37710.service: Deactivated successfully.
Aug 19 00:23:55.254202 systemd[1]: session-8.scope: Deactivated successfully.
Aug 19 00:23:55.257246 systemd-logind[1502]: Session 8 logged out. Waiting for processes to exit.
Aug 19 00:23:55.265909 systemd-logind[1502]: Removed session 8.
Aug 19 00:24:00.264089 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:37724.service - OpenSSH per-connection server daemon (10.0.0.1:37724).
Aug 19 00:24:00.331804 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 37724 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:00.330371 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:00.336313 systemd-logind[1502]: New session 9 of user core.
Aug 19 00:24:00.347635 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 19 00:24:00.471458 sshd[4032]: Connection closed by 10.0.0.1 port 37724
Aug 19 00:24:00.470717 sshd-session[4029]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:00.475546 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:37724.service: Deactivated successfully.
Aug 19 00:24:00.480082 systemd[1]: session-9.scope: Deactivated successfully.
Aug 19 00:24:00.481463 systemd-logind[1502]: Session 9 logged out. Waiting for processes to exit.
Aug 19 00:24:00.482588 systemd-logind[1502]: Removed session 9.
Aug 19 00:24:05.486048 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:40952.service - OpenSSH per-connection server daemon (10.0.0.1:40952).
Aug 19 00:24:05.576679 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 40952 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:05.578304 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:05.583456 systemd-logind[1502]: New session 10 of user core.
Aug 19 00:24:05.601624 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 19 00:24:05.740251 sshd[4053]: Connection closed by 10.0.0.1 port 40952
Aug 19 00:24:05.741016 sshd-session[4050]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:05.745852 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:40952.service: Deactivated successfully.
Aug 19 00:24:05.750899 systemd[1]: session-10.scope: Deactivated successfully.
Aug 19 00:24:05.753285 systemd-logind[1502]: Session 10 logged out. Waiting for processes to exit.
Aug 19 00:24:05.755885 systemd-logind[1502]: Removed session 10.
Aug 19 00:24:10.755650 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:40954.service - OpenSSH per-connection server daemon (10.0.0.1:40954).
Aug 19 00:24:10.813279 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 40954 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:10.814174 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:10.819483 systemd-logind[1502]: New session 11 of user core.
Aug 19 00:24:10.826595 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 19 00:24:10.957615 sshd[4070]: Connection closed by 10.0.0.1 port 40954
Aug 19 00:24:10.958408 sshd-session[4067]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:10.976051 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:40954.service: Deactivated successfully.
Aug 19 00:24:10.983677 systemd[1]: session-11.scope: Deactivated successfully.
Aug 19 00:24:10.985935 systemd-logind[1502]: Session 11 logged out. Waiting for processes to exit.
Aug 19 00:24:10.990538 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:40970.service - OpenSSH per-connection server daemon (10.0.0.1:40970).
Aug 19 00:24:10.992468 systemd-logind[1502]: Removed session 11.
Aug 19 00:24:11.052814 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 40970 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:11.054437 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:11.059891 systemd-logind[1502]: New session 12 of user core.
Aug 19 00:24:11.074610 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 19 00:24:11.231715 sshd[4087]: Connection closed by 10.0.0.1 port 40970
Aug 19 00:24:11.232160 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:11.241694 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:40970.service: Deactivated successfully.
Aug 19 00:24:11.245161 systemd[1]: session-12.scope: Deactivated successfully.
Aug 19 00:24:11.246742 systemd-logind[1502]: Session 12 logged out. Waiting for processes to exit.
Aug 19 00:24:11.254408 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:40984.service - OpenSSH per-connection server daemon (10.0.0.1:40984).
Aug 19 00:24:11.255771 systemd-logind[1502]: Removed session 12.
Aug 19 00:24:11.314819 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 40984 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:11.318059 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:11.322687 systemd-logind[1502]: New session 13 of user core.
Aug 19 00:24:11.333566 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 19 00:24:11.469914 sshd[4102]: Connection closed by 10.0.0.1 port 40984
Aug 19 00:24:11.470298 sshd-session[4099]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:11.475591 systemd-logind[1502]: Session 13 logged out. Waiting for processes to exit.
Aug 19 00:24:11.476711 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:40984.service: Deactivated successfully.
Aug 19 00:24:11.479238 systemd[1]: session-13.scope: Deactivated successfully.
Aug 19 00:24:11.481716 systemd-logind[1502]: Removed session 13.
Aug 19 00:24:16.483228 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:48214.service - OpenSSH per-connection server daemon (10.0.0.1:48214).
Aug 19 00:24:16.534784 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 48214 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:16.536134 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:16.540275 systemd-logind[1502]: New session 14 of user core.
Aug 19 00:24:16.551636 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 19 00:24:16.674231 sshd[4118]: Connection closed by 10.0.0.1 port 48214
Aug 19 00:24:16.674617 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:16.679199 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:48214.service: Deactivated successfully.
Aug 19 00:24:16.681398 systemd[1]: session-14.scope: Deactivated successfully.
Aug 19 00:24:16.682592 systemd-logind[1502]: Session 14 logged out. Waiting for processes to exit.
Aug 19 00:24:16.684650 systemd-logind[1502]: Removed session 14.
Aug 19 00:24:21.697984 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:48220.service - OpenSSH per-connection server daemon (10.0.0.1:48220).
Aug 19 00:24:21.754757 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 48220 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:21.755490 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:21.760635 systemd-logind[1502]: New session 15 of user core.
Aug 19 00:24:21.768576 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 19 00:24:21.906244 sshd[4134]: Connection closed by 10.0.0.1 port 48220
Aug 19 00:24:21.907803 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:21.918712 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:48220.service: Deactivated successfully.
Aug 19 00:24:21.921512 systemd[1]: session-15.scope: Deactivated successfully.
Aug 19 00:24:21.922830 systemd-logind[1502]: Session 15 logged out. Waiting for processes to exit.
Aug 19 00:24:21.927374 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:48236.service - OpenSSH per-connection server daemon (10.0.0.1:48236).
Aug 19 00:24:21.928172 systemd-logind[1502]: Removed session 15.
Aug 19 00:24:21.997928 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 48236 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:21.999560 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:22.006082 systemd-logind[1502]: New session 16 of user core.
Aug 19 00:24:22.017597 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 19 00:24:22.269044 sshd[4151]: Connection closed by 10.0.0.1 port 48236
Aug 19 00:24:22.270761 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:22.283266 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:48236.service: Deactivated successfully.
Aug 19 00:24:22.285512 systemd[1]: session-16.scope: Deactivated successfully.
Aug 19 00:24:22.292943 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:48246.service - OpenSSH per-connection server daemon (10.0.0.1:48246).
Aug 19 00:24:22.294451 systemd-logind[1502]: Session 16 logged out. Waiting for processes to exit.
Aug 19 00:24:22.296460 systemd-logind[1502]: Removed session 16.
Aug 19 00:24:22.372854 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 48246 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:22.374577 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:22.379875 systemd-logind[1502]: New session 17 of user core.
Aug 19 00:24:22.387585 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 19 00:24:23.118588 sshd[4165]: Connection closed by 10.0.0.1 port 48246
Aug 19 00:24:23.118896 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:23.139644 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:48246.service: Deactivated successfully.
Aug 19 00:24:23.146298 systemd[1]: session-17.scope: Deactivated successfully.
Aug 19 00:24:23.148762 systemd-logind[1502]: Session 17 logged out. Waiting for processes to exit.
Aug 19 00:24:23.154853 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:55248.service - OpenSSH per-connection server daemon (10.0.0.1:55248).
Aug 19 00:24:23.156919 systemd-logind[1502]: Removed session 17.
Aug 19 00:24:23.218058 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 55248 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:23.219485 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:23.225455 systemd-logind[1502]: New session 18 of user core.
Aug 19 00:24:23.245661 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 19 00:24:23.511626 sshd[4187]: Connection closed by 10.0.0.1 port 55248
Aug 19 00:24:23.513628 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:23.532276 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:55248.service: Deactivated successfully.
Aug 19 00:24:23.537273 systemd[1]: session-18.scope: Deactivated successfully.
Aug 19 00:24:23.539887 systemd-logind[1502]: Session 18 logged out. Waiting for processes to exit.
Aug 19 00:24:23.544572 systemd-logind[1502]: Removed session 18.
Aug 19 00:24:23.547564 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:55254.service - OpenSSH per-connection server daemon (10.0.0.1:55254).
Aug 19 00:24:23.630143 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 55254 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:23.631884 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:23.637366 systemd-logind[1502]: New session 19 of user core.
Aug 19 00:24:23.647633 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 19 00:24:23.798799 sshd[4202]: Connection closed by 10.0.0.1 port 55254
Aug 19 00:24:23.797836 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:23.803180 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:55254.service: Deactivated successfully.
Aug 19 00:24:23.806245 systemd[1]: session-19.scope: Deactivated successfully.
Aug 19 00:24:23.808741 systemd-logind[1502]: Session 19 logged out. Waiting for processes to exit.
Aug 19 00:24:23.809857 systemd-logind[1502]: Removed session 19.
Aug 19 00:24:28.813732 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:55266.service - OpenSSH per-connection server daemon (10.0.0.1:55266).
Aug 19 00:24:28.883613 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 55266 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:28.884728 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:28.893048 systemd-logind[1502]: New session 20 of user core.
Aug 19 00:24:28.903581 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 19 00:24:29.019540 sshd[4222]: Connection closed by 10.0.0.1 port 55266
Aug 19 00:24:29.020022 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:29.023147 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:55266.service: Deactivated successfully.
Aug 19 00:24:29.024730 systemd[1]: session-20.scope: Deactivated successfully.
Aug 19 00:24:29.025569 systemd-logind[1502]: Session 20 logged out. Waiting for processes to exit.
Aug 19 00:24:29.026984 systemd-logind[1502]: Removed session 20.
Aug 19 00:24:34.041617 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032).
Aug 19 00:24:34.101767 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:34.103212 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:34.110885 systemd-logind[1502]: New session 21 of user core.
Aug 19 00:24:34.121595 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 19 00:24:34.256104 sshd[4241]: Connection closed by 10.0.0.1 port 42032
Aug 19 00:24:34.256710 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:34.261716 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:42032.service: Deactivated successfully.
Aug 19 00:24:34.265643 systemd[1]: session-21.scope: Deactivated successfully.
Aug 19 00:24:34.267496 systemd-logind[1502]: Session 21 logged out. Waiting for processes to exit.
Aug 19 00:24:34.268587 systemd-logind[1502]: Removed session 21.
Aug 19 00:24:34.602271 kubelet[2667]: E0819 00:24:34.602212 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:39.278314 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:42046.service - OpenSSH per-connection server daemon (10.0.0.1:42046).
Aug 19 00:24:39.342806 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 42046 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:39.344212 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:39.348914 systemd-logind[1502]: New session 22 of user core.
Aug 19 00:24:39.360875 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 19 00:24:39.499251 sshd[4257]: Connection closed by 10.0.0.1 port 42046
Aug 19 00:24:39.499927 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:39.503652 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:42046.service: Deactivated successfully.
Aug 19 00:24:39.505837 systemd[1]: session-22.scope: Deactivated successfully.
Aug 19 00:24:39.507498 systemd-logind[1502]: Session 22 logged out. Waiting for processes to exit.
Aug 19 00:24:39.510030 systemd-logind[1502]: Removed session 22.
Aug 19 00:24:39.590050 kubelet[2667]: E0819 00:24:39.589918 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:44.517196 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:42798.service - OpenSSH per-connection server daemon (10.0.0.1:42798).
Aug 19 00:24:44.598987 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 42798 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:44.600546 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:44.606054 systemd-logind[1502]: New session 23 of user core.
Aug 19 00:24:44.612498 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 19 00:24:44.738071 sshd[4274]: Connection closed by 10.0.0.1 port 42798
Aug 19 00:24:44.738678 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:44.752410 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:42798.service: Deactivated successfully.
Aug 19 00:24:44.754110 systemd[1]: session-23.scope: Deactivated successfully.
Aug 19 00:24:44.757003 systemd-logind[1502]: Session 23 logged out. Waiting for processes to exit.
Aug 19 00:24:44.764092 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:42804.service - OpenSSH per-connection server daemon (10.0.0.1:42804).
Aug 19 00:24:44.764838 systemd-logind[1502]: Removed session 23.
Aug 19 00:24:44.816420 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 42804 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:44.817694 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:44.827709 systemd-logind[1502]: New session 24 of user core.
Aug 19 00:24:44.834630 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 19 00:24:46.593236 kubelet[2667]: E0819 00:24:46.592703 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:46.741835 containerd[1531]: time="2025-08-19T00:24:46.741780860Z" level=info msg="StopContainer for \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" with timeout 30 (s)"
Aug 19 00:24:46.745965 containerd[1531]: time="2025-08-19T00:24:46.745921332Z" level=info msg="Stop container \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" with signal terminated"
Aug 19 00:24:46.767100 systemd[1]: cri-containerd-dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500.scope: Deactivated successfully.
Aug 19 00:24:46.772183 containerd[1531]: time="2025-08-19T00:24:46.772110549Z" level=info msg="received exit event container_id:\"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" id:\"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" pid:3257 exited_at:{seconds:1755563086 nanos:771784642}"
Aug 19 00:24:46.772304 containerd[1531]: time="2025-08-19T00:24:46.772192225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" id:\"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" pid:3257 exited_at:{seconds:1755563086 nanos:771784642}"
Aug 19 00:24:46.793748 containerd[1531]: time="2025-08-19T00:24:46.793700632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" id:\"58ba251cd8bc0204a92216379566612b2ee84be9012971938bd8fa45c29dfbba\" pid:4317 exited_at:{seconds:1755563086 nanos:793328328}"
Aug 19 00:24:46.796051 containerd[1531]: time="2025-08-19T00:24:46.795989060Z" level=info msg="StopContainer for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" with timeout 2 (s)"
Aug 19 00:24:46.797396 containerd[1531]: time="2025-08-19T00:24:46.797327005Z" level=info msg="Stop container \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" with signal terminated"
Aug 19 00:24:46.798722 containerd[1531]: time="2025-08-19T00:24:46.798650431Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 19 00:24:46.804601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500-rootfs.mount: Deactivated successfully.
Aug 19 00:24:46.810471 systemd-networkd[1424]: lxc_health: Link DOWN
Aug 19 00:24:46.810478 systemd-networkd[1424]: lxc_health: Lost carrier
Aug 19 00:24:46.819548 containerd[1531]: time="2025-08-19T00:24:46.819500745Z" level=info msg="StopContainer for \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" returns successfully"
Aug 19 00:24:46.826864 containerd[1531]: time="2025-08-19T00:24:46.826804529Z" level=info msg="StopPodSandbox for \"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\""
Aug 19 00:24:46.829010 systemd[1]: cri-containerd-67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da.scope: Deactivated successfully.
Aug 19 00:24:46.829782 systemd[1]: cri-containerd-67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da.scope: Consumed 6.809s CPU time, 124.6M memory peak, 144K read from disk, 12.9M written to disk.
Aug 19 00:24:46.830782 containerd[1531]: time="2025-08-19T00:24:46.830729929Z" level=info msg="received exit event container_id:\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" id:\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" pid:3329 exited_at:{seconds:1755563086 nanos:830409582}"
Aug 19 00:24:46.830908 containerd[1531]: time="2025-08-19T00:24:46.830857924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" id:\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" pid:3329 exited_at:{seconds:1755563086 nanos:830409582}"
Aug 19 00:24:46.836167 containerd[1531]: time="2025-08-19T00:24:46.836041394Z" level=info msg="Container to stop \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 19 00:24:46.846833 systemd[1]: cri-containerd-ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417.scope: Deactivated successfully.
Aug 19 00:24:46.850944 containerd[1531]: time="2025-08-19T00:24:46.850864312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" id:\"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" pid:2893 exit_status:137 exited_at:{seconds:1755563086 nanos:848780997}"
Aug 19 00:24:46.858222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da-rootfs.mount: Deactivated successfully.
Aug 19 00:24:46.873693 containerd[1531]: time="2025-08-19T00:24:46.873649627Z" level=info msg="StopContainer for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" returns successfully"
Aug 19 00:24:46.874198 containerd[1531]: time="2025-08-19T00:24:46.874169926Z" level=info msg="StopPodSandbox for \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\""
Aug 19 00:24:46.874348 containerd[1531]: time="2025-08-19T00:24:46.874327800Z" level=info msg="Container to stop \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 19 00:24:46.874451 containerd[1531]: time="2025-08-19T00:24:46.874437035Z" level=info msg="Container to stop \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 19 00:24:46.874515 containerd[1531]: time="2025-08-19T00:24:46.874498393Z" level=info msg="Container to stop \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 19 00:24:46.874605 containerd[1531]: time="2025-08-19T00:24:46.874586789Z" level=info msg="Container to stop \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 19 00:24:46.874671 containerd[1531]: time="2025-08-19T00:24:46.874656186Z" level=info msg="Container to stop \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 19 00:24:46.881520 systemd[1]: cri-containerd-705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb.scope: Deactivated successfully.
Aug 19 00:24:46.887477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417-rootfs.mount: Deactivated successfully. Aug 19 00:24:46.910738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb-rootfs.mount: Deactivated successfully. Aug 19 00:24:46.924336 containerd[1531]: time="2025-08-19T00:24:46.924272693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" id:\"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" pid:2825 exit_status:137 exited_at:{seconds:1755563086 nanos:883648102}" Aug 19 00:24:46.926514 containerd[1531]: time="2025-08-19T00:24:46.925242173Z" level=info msg="shim disconnected" id=705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb namespace=k8s.io Aug 19 00:24:46.926415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417-shm.mount: Deactivated successfully. Aug 19 00:24:46.926523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb-shm.mount: Deactivated successfully. 
Aug 19 00:24:46.927375 containerd[1531]: time="2025-08-19T00:24:46.927131457Z" level=warning msg="cleaning up after shim disconnected" id=705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb namespace=k8s.io Aug 19 00:24:46.927375 containerd[1531]: time="2025-08-19T00:24:46.927187814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 00:24:46.928855 containerd[1531]: time="2025-08-19T00:24:46.928566038Z" level=info msg="TearDown network for sandbox \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" successfully" Aug 19 00:24:46.928855 containerd[1531]: time="2025-08-19T00:24:46.928624396Z" level=info msg="StopPodSandbox for \"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" returns successfully" Aug 19 00:24:46.928855 containerd[1531]: time="2025-08-19T00:24:46.928829388Z" level=info msg="shim disconnected" id=ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417 namespace=k8s.io Aug 19 00:24:46.929103 containerd[1531]: time="2025-08-19T00:24:46.928844227Z" level=warning msg="cleaning up after shim disconnected" id=ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417 namespace=k8s.io Aug 19 00:24:46.929103 containerd[1531]: time="2025-08-19T00:24:46.928878266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 00:24:46.929804 containerd[1531]: time="2025-08-19T00:24:46.929758390Z" level=info msg="TearDown network for sandbox \"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" successfully" Aug 19 00:24:46.929804 containerd[1531]: time="2025-08-19T00:24:46.929796228Z" level=info msg="StopPodSandbox for \"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" returns successfully" Aug 19 00:24:46.944634 containerd[1531]: time="2025-08-19T00:24:46.944573389Z" level=info msg="received exit event sandbox_id:\"705d7fc9dcabf90d31d82803173f494b1691829ae70b6a659b4f33d27307dacb\" exit_status:137 exited_at:{seconds:1755563086 nanos:883648102}" Aug 19 
00:24:46.944795 containerd[1531]: time="2025-08-19T00:24:46.944658505Z" level=info msg="received exit event sandbox_id:\"ab2fc9b62e57d5180bab25b3406573b9a94a714dda9fadb06efab5f0dad8a417\" exit_status:137 exited_at:{seconds:1755563086 nanos:848780997}" Aug 19 00:24:47.039721 kubelet[2667]: I0819 00:24:47.039665 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hubble-tls\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039887 kubelet[2667]: I0819 00:24:47.039721 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sw58\" (UniqueName: \"kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-kube-api-access-6sw58\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039887 kubelet[2667]: I0819 00:24:47.039821 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/115eb11b-db07-43a6-ab0b-f6525ceb2c72-clustermesh-secrets\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039887 kubelet[2667]: I0819 00:24:47.039847 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-xtables-lock\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039887 kubelet[2667]: I0819 00:24:47.039884 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlchl\" (UniqueName: \"kubernetes.io/projected/0020ed66-0db3-4260-952a-343926b4ee57-kube-api-access-tlchl\") pod \"0020ed66-0db3-4260-952a-343926b4ee57\" (UID: 
\"0020ed66-0db3-4260-952a-343926b4ee57\") " Aug 19 00:24:47.039986 kubelet[2667]: I0819 00:24:47.039901 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-etc-cni-netd\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039986 kubelet[2667]: I0819 00:24:47.039921 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-bpf-maps\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039986 kubelet[2667]: I0819 00:24:47.039938 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-kernel\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039986 kubelet[2667]: I0819 00:24:47.039965 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-config-path\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.039986 kubelet[2667]: I0819 00:24:47.039981 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hostproc\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.040092 kubelet[2667]: I0819 00:24:47.039999 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cni-path\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.040092 kubelet[2667]: I0819 00:24:47.040014 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-net\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.040092 kubelet[2667]: I0819 00:24:47.040038 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-run\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.040092 kubelet[2667]: I0819 00:24:47.040060 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-cgroup\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.040092 kubelet[2667]: I0819 00:24:47.040077 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-lib-modules\") pod \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\" (UID: \"115eb11b-db07-43a6-ab0b-f6525ceb2c72\") " Aug 19 00:24:47.040193 kubelet[2667]: I0819 00:24:47.040119 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0020ed66-0db3-4260-952a-343926b4ee57-cilium-config-path\") pod \"0020ed66-0db3-4260-952a-343926b4ee57\" (UID: \"0020ed66-0db3-4260-952a-343926b4ee57\") " Aug 19 00:24:47.045699 kubelet[2667]: I0819 00:24:47.045649 2667 
operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0020ed66-0db3-4260-952a-343926b4ee57-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0020ed66-0db3-4260-952a-343926b4ee57" (UID: "0020ed66-0db3-4260-952a-343926b4ee57"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 00:24:47.045831 kubelet[2667]: I0819 00:24:47.045737 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055424 kubelet[2667]: I0819 00:24:47.053560 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055424 kubelet[2667]: I0819 00:24:47.053601 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055424 kubelet[2667]: I0819 00:24:47.053576 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cni-path" (OuterVolumeSpecName: "cni-path") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055424 kubelet[2667]: I0819 00:24:47.053628 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055424 kubelet[2667]: I0819 00:24:47.053650 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055717 kubelet[2667]: I0819 00:24:47.053637 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055717 kubelet[2667]: I0819 00:24:47.053666 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055717 kubelet[2667]: I0819 00:24:47.053678 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055717 kubelet[2667]: I0819 00:24:47.053702 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hostproc" (OuterVolumeSpecName: "hostproc") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:24:47.055717 kubelet[2667]: I0819 00:24:47.055653 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 00:24:47.056188 kubelet[2667]: I0819 00:24:47.056155 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:24:47.056506 kubelet[2667]: I0819 00:24:47.056321 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115eb11b-db07-43a6-ab0b-f6525ceb2c72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 00:24:47.056567 kubelet[2667]: I0819 00:24:47.056517 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-kube-api-access-6sw58" (OuterVolumeSpecName: "kube-api-access-6sw58") pod "115eb11b-db07-43a6-ab0b-f6525ceb2c72" (UID: "115eb11b-db07-43a6-ab0b-f6525ceb2c72"). InnerVolumeSpecName "kube-api-access-6sw58". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:24:47.057120 kubelet[2667]: I0819 00:24:47.057066 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0020ed66-0db3-4260-952a-343926b4ee57-kube-api-access-tlchl" (OuterVolumeSpecName: "kube-api-access-tlchl") pod "0020ed66-0db3-4260-952a-343926b4ee57" (UID: "0020ed66-0db3-4260-952a-343926b4ee57"). InnerVolumeSpecName "kube-api-access-tlchl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141028 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141069 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141081 2667 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141091 2667 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141100 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141109 2667 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 00:24:47.141117 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141150 kubelet[2667]: I0819 
00:24:47.141125 2667 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141134 2667 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0020ed66-0db3-4260-952a-343926b4ee57-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141152 2667 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141161 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sw58\" (UniqueName: \"kubernetes.io/projected/115eb11b-db07-43a6-ab0b-f6525ceb2c72-kube-api-access-6sw58\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141176 2667 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/115eb11b-db07-43a6-ab0b-f6525ceb2c72-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141184 2667 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141193 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tlchl\" (UniqueName: \"kubernetes.io/projected/0020ed66-0db3-4260-952a-343926b4ee57-kube-api-access-tlchl\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141208 2667 reconciler_common.go:299] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.141461 kubelet[2667]: I0819 00:24:47.141240 2667 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/115eb11b-db07-43a6-ab0b-f6525ceb2c72-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 19 00:24:47.803514 systemd[1]: var-lib-kubelet-pods-0020ed66\x2d0db3\x2d4260\x2d952a\x2d343926b4ee57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlchl.mount: Deactivated successfully. Aug 19 00:24:47.803623 systemd[1]: var-lib-kubelet-pods-115eb11b\x2ddb07\x2d43a6\x2dab0b\x2df6525ceb2c72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6sw58.mount: Deactivated successfully. Aug 19 00:24:47.803676 systemd[1]: var-lib-kubelet-pods-115eb11b\x2ddb07\x2d43a6\x2dab0b\x2df6525ceb2c72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 19 00:24:47.803734 systemd[1]: var-lib-kubelet-pods-115eb11b\x2ddb07\x2d43a6\x2dab0b\x2df6525ceb2c72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 00:24:47.865413 kubelet[2667]: I0819 00:24:47.864639 2667 scope.go:117] "RemoveContainer" containerID="dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500" Aug 19 00:24:47.869148 containerd[1531]: time="2025-08-19T00:24:47.869112141Z" level=info msg="RemoveContainer for \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\"" Aug 19 00:24:47.872610 systemd[1]: Removed slice kubepods-besteffort-pod0020ed66_0db3_4260_952a_343926b4ee57.slice - libcontainer container kubepods-besteffort-pod0020ed66_0db3_4260_952a_343926b4ee57.slice. Aug 19 00:24:47.883471 systemd[1]: Removed slice kubepods-burstable-pod115eb11b_db07_43a6_ab0b_f6525ceb2c72.slice - libcontainer container kubepods-burstable-pod115eb11b_db07_43a6_ab0b_f6525ceb2c72.slice. 
Aug 19 00:24:47.883572 systemd[1]: kubepods-burstable-pod115eb11b_db07_43a6_ab0b_f6525ceb2c72.slice: Consumed 7.040s CPU time, 124.9M memory peak, 156K read from disk, 12.9M written to disk. Aug 19 00:24:47.903650 containerd[1531]: time="2025-08-19T00:24:47.903505427Z" level=info msg="RemoveContainer for \"dd7fc2f5604574e0f0d0ba887e66f2e31239c8f98a3597e5f193466b2ba2a500\" returns successfully" Aug 19 00:24:47.903915 kubelet[2667]: I0819 00:24:47.903862 2667 scope.go:117] "RemoveContainer" containerID="67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da" Aug 19 00:24:47.905773 containerd[1531]: time="2025-08-19T00:24:47.905730505Z" level=info msg="RemoveContainer for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\"" Aug 19 00:24:47.950092 containerd[1531]: time="2025-08-19T00:24:47.949709996Z" level=info msg="RemoveContainer for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" returns successfully" Aug 19 00:24:47.950245 kubelet[2667]: I0819 00:24:47.949975 2667 scope.go:117] "RemoveContainer" containerID="edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766" Aug 19 00:24:47.952459 containerd[1531]: time="2025-08-19T00:24:47.952413815Z" level=info msg="RemoveContainer for \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\"" Aug 19 00:24:47.969350 containerd[1531]: time="2025-08-19T00:24:47.968739971Z" level=info msg="RemoveContainer for \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" returns successfully" Aug 19 00:24:47.969533 kubelet[2667]: I0819 00:24:47.969078 2667 scope.go:117] "RemoveContainer" containerID="5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4" Aug 19 00:24:47.971961 containerd[1531]: time="2025-08-19T00:24:47.971923453Z" level=info msg="RemoveContainer for \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\"" Aug 19 00:24:48.004892 containerd[1531]: time="2025-08-19T00:24:48.004792410Z" level=info 
msg="RemoveContainer for \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" returns successfully" Aug 19 00:24:48.005098 kubelet[2667]: I0819 00:24:48.005066 2667 scope.go:117] "RemoveContainer" containerID="0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5" Aug 19 00:24:48.006679 containerd[1531]: time="2025-08-19T00:24:48.006650747Z" level=info msg="RemoveContainer for \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\"" Aug 19 00:24:48.151960 containerd[1531]: time="2025-08-19T00:24:48.151827468Z" level=info msg="RemoveContainer for \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" returns successfully" Aug 19 00:24:48.153563 kubelet[2667]: I0819 00:24:48.153531 2667 scope.go:117] "RemoveContainer" containerID="9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778" Aug 19 00:24:48.155414 containerd[1531]: time="2025-08-19T00:24:48.155362429Z" level=info msg="RemoveContainer for \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\"" Aug 19 00:24:48.265587 containerd[1531]: time="2025-08-19T00:24:48.265469689Z" level=info msg="RemoveContainer for \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" returns successfully" Aug 19 00:24:48.265768 kubelet[2667]: I0819 00:24:48.265737 2667 scope.go:117] "RemoveContainer" containerID="67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da" Aug 19 00:24:48.266076 containerd[1531]: time="2025-08-19T00:24:48.266044629Z" level=error msg="ContainerStatus for \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\": not found" Aug 19 00:24:48.266242 kubelet[2667]: E0819 00:24:48.266213 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\": not found" containerID="67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da" Aug 19 00:24:48.275439 kubelet[2667]: I0819 00:24:48.275289 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da"} err="failed to get container status \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\": rpc error: code = NotFound desc = an error occurred when try to find container \"67fbe5f8a7864ffbb8977c89c2f1571323a32a6cec7bcc6b4df2c588a76662da\": not found" Aug 19 00:24:48.275439 kubelet[2667]: I0819 00:24:48.275444 2667 scope.go:117] "RemoveContainer" containerID="edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766" Aug 19 00:24:48.275813 containerd[1531]: time="2025-08-19T00:24:48.275763423Z" level=error msg="ContainerStatus for \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\": not found" Aug 19 00:24:48.275944 kubelet[2667]: E0819 00:24:48.275921 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\": not found" containerID="edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766" Aug 19 00:24:48.275978 kubelet[2667]: I0819 00:24:48.275958 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766"} err="failed to get container status \"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"edd3287540009b78daa196807dc54f87a5e7a6bbc1b0084292c28f28edc95766\": not found" Aug 19 00:24:48.275978 kubelet[2667]: I0819 00:24:48.275974 2667 scope.go:117] "RemoveContainer" containerID="5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4" Aug 19 00:24:48.276203 containerd[1531]: time="2025-08-19T00:24:48.276149050Z" level=error msg="ContainerStatus for \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\": not found" Aug 19 00:24:48.276339 kubelet[2667]: E0819 00:24:48.276314 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\": not found" containerID="5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4" Aug 19 00:24:48.276408 kubelet[2667]: I0819 00:24:48.276373 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4"} err="failed to get container status \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d6f0169bd752549b491e3be775393a7470d8399f348ca7473ed6f5aa6eb99f4\": not found" Aug 19 00:24:48.276445 kubelet[2667]: I0819 00:24:48.276411 2667 scope.go:117] "RemoveContainer" containerID="0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5" Aug 19 00:24:48.276589 containerd[1531]: time="2025-08-19T00:24:48.276563436Z" level=error msg="ContainerStatus for \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\": not found" Aug 19 00:24:48.276683 kubelet[2667]: E0819 00:24:48.276665 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\": not found" containerID="0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5" Aug 19 00:24:48.276710 kubelet[2667]: I0819 00:24:48.276688 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5"} err="failed to get container status \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ca57adf36c0bc9380c41c60380d67c36f508e22cbaf578f8aa2d774e74bb0a5\": not found" Aug 19 00:24:48.276710 kubelet[2667]: I0819 00:24:48.276702 2667 scope.go:117] "RemoveContainer" containerID="9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778" Aug 19 00:24:48.276912 containerd[1531]: time="2025-08-19T00:24:48.276872305Z" level=error msg="ContainerStatus for \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\": not found" Aug 19 00:24:48.277040 kubelet[2667]: E0819 00:24:48.277015 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\": not found" containerID="9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778" Aug 19 00:24:48.277072 kubelet[2667]: I0819 00:24:48.277046 2667 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778"} err="failed to get container status \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\": rpc error: code = NotFound desc = an error occurred when try to find container \"9385a5b827b70d8f871d1be61713defecf3bda1218093577be5c696acb676778\": not found" Aug 19 00:24:48.593058 kubelet[2667]: I0819 00:24:48.592998 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0020ed66-0db3-4260-952a-343926b4ee57" path="/var/lib/kubelet/pods/0020ed66-0db3-4260-952a-343926b4ee57/volumes" Aug 19 00:24:48.593641 kubelet[2667]: I0819 00:24:48.593618 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115eb11b-db07-43a6-ab0b-f6525ceb2c72" path="/var/lib/kubelet/pods/115eb11b-db07-43a6-ab0b-f6525ceb2c72/volumes" Aug 19 00:24:48.697406 sshd[4290]: Connection closed by 10.0.0.1 port 42804 Aug 19 00:24:48.700441 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Aug 19 00:24:48.706509 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:42804.service: Deactivated successfully. Aug 19 00:24:48.712710 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 00:24:48.713003 systemd[1]: session-24.scope: Consumed 1.187s CPU time, 24.6M memory peak. Aug 19 00:24:48.715686 systemd-logind[1502]: Session 24 logged out. Waiting for processes to exit. Aug 19 00:24:48.723023 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:42810.service - OpenSSH per-connection server daemon (10.0.0.1:42810). Aug 19 00:24:48.724071 systemd-logind[1502]: Removed session 24. Aug 19 00:24:48.808959 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 42810 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM Aug 19 00:24:48.810345 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:24:48.816177 systemd-logind[1502]: New session 25 of user core. 
Aug 19 00:24:48.834646 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 19 00:24:49.667264 kubelet[2667]: E0819 00:24:49.667173 2667 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 19 00:24:50.800878 sshd[4442]: Connection closed by 10.0.0.1 port 42810
Aug 19 00:24:50.802305 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:50.822662 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:42810.service: Deactivated successfully.
Aug 19 00:24:50.828708 systemd[1]: session-25.scope: Deactivated successfully.
Aug 19 00:24:50.831102 systemd[1]: session-25.scope: Consumed 1.773s CPU time, 26.1M memory peak.
Aug 19 00:24:50.832564 systemd-logind[1502]: Session 25 logged out. Waiting for processes to exit.
Aug 19 00:24:50.840680 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:42818.service - OpenSSH per-connection server daemon (10.0.0.1:42818).
Aug 19 00:24:50.848514 systemd-logind[1502]: Removed session 25.
Aug 19 00:24:50.861752 kubelet[2667]: I0819 00:24:50.861716 2667 memory_manager.go:355] "RemoveStaleState removing state" podUID="115eb11b-db07-43a6-ab0b-f6525ceb2c72" containerName="cilium-agent"
Aug 19 00:24:50.862536 kubelet[2667]: I0819 00:24:50.862087 2667 memory_manager.go:355] "RemoveStaleState removing state" podUID="0020ed66-0db3-4260-952a-343926b4ee57" containerName="cilium-operator"
Aug 19 00:24:50.873580 systemd[1]: Created slice kubepods-burstable-podecb57eeb_2819_487d_ae95_0f31958fdc8a.slice - libcontainer container kubepods-burstable-podecb57eeb_2819_487d_ae95_0f31958fdc8a.slice.
Aug 19 00:24:50.917968 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 42818 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:50.919438 sshd-session[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:50.924847 systemd-logind[1502]: New session 26 of user core.
Aug 19 00:24:50.939661 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 19 00:24:50.993609 sshd[4458]: Connection closed by 10.0.0.1 port 42818
Aug 19 00:24:50.994687 sshd-session[4454]: pam_unix(sshd:session): session closed for user core
Aug 19 00:24:51.007328 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:42818.service: Deactivated successfully.
Aug 19 00:24:51.011167 systemd[1]: session-26.scope: Deactivated successfully.
Aug 19 00:24:51.012433 systemd-logind[1502]: Session 26 logged out. Waiting for processes to exit.
Aug 19 00:24:51.017553 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:42822.service - OpenSSH per-connection server daemon (10.0.0.1:42822).
Aug 19 00:24:51.018319 systemd-logind[1502]: Removed session 26.
Aug 19 00:24:51.063159 kubelet[2667]: I0819 00:24:51.063023 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecb57eeb-2819-487d-ae95-0f31958fdc8a-clustermesh-secrets\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063159 kubelet[2667]: I0819 00:24:51.063073 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecb57eeb-2819-487d-ae95-0f31958fdc8a-hubble-tls\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063159 kubelet[2667]: I0819 00:24:51.063096 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-xtables-lock\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063159 kubelet[2667]: I0819 00:24:51.063113 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecb57eeb-2819-487d-ae95-0f31958fdc8a-cilium-config-path\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063159 kubelet[2667]: I0819 00:24:51.063143 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-host-proc-sys-net\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063159 kubelet[2667]: I0819 00:24:51.063164 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-lib-modules\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063505 kubelet[2667]: I0819 00:24:51.063179 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-bpf-maps\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063505 kubelet[2667]: I0819 00:24:51.063200 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-cilium-cgroup\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063505 kubelet[2667]: I0819 00:24:51.063260 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-host-proc-sys-kernel\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063505 kubelet[2667]: I0819 00:24:51.063288 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-cilium-run\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063505 kubelet[2667]: I0819 00:24:51.063307 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-cni-path\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063505 kubelet[2667]: I0819 00:24:51.063322 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr26h\" (UniqueName: \"kubernetes.io/projected/ecb57eeb-2819-487d-ae95-0f31958fdc8a-kube-api-access-gr26h\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063636 kubelet[2667]: I0819 00:24:51.063340 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-hostproc\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063636 kubelet[2667]: I0819 00:24:51.063355 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecb57eeb-2819-487d-ae95-0f31958fdc8a-etc-cni-netd\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.063636 kubelet[2667]: I0819 00:24:51.063376 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ecb57eeb-2819-487d-ae95-0f31958fdc8a-cilium-ipsec-secrets\") pod \"cilium-t6jb8\" (UID: \"ecb57eeb-2819-487d-ae95-0f31958fdc8a\") " pod="kube-system/cilium-t6jb8"
Aug 19 00:24:51.085049 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 42822 ssh2: RSA SHA256:MuzZtQhRnNVq1rVZP5vx2TeC98TmfU3V7QIECoaqFtM
Aug 19 00:24:51.086542 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:24:51.091934 systemd-logind[1502]: New session 27 of user core.
Aug 19 00:24:51.104742 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 19 00:24:51.477328 kubelet[2667]: E0819 00:24:51.477276 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:51.478091 containerd[1531]: time="2025-08-19T00:24:51.477808453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6jb8,Uid:ecb57eeb-2819-487d-ae95-0f31958fdc8a,Namespace:kube-system,Attempt:0,}"
Aug 19 00:24:51.500962 containerd[1531]: time="2025-08-19T00:24:51.500917500Z" level=info msg="connecting to shim f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b" address="unix:///run/containerd/s/e3d8bc1507e1105d14b0a606425af537d342384a0236f5496012ee402661df9b" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:24:51.541627 systemd[1]: Started cri-containerd-f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b.scope - libcontainer container f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b.
Aug 19 00:24:51.563544 containerd[1531]: time="2025-08-19T00:24:51.563505922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t6jb8,Uid:ecb57eeb-2819-487d-ae95-0f31958fdc8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\""
Aug 19 00:24:51.564937 kubelet[2667]: E0819 00:24:51.564571 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:51.568019 containerd[1531]: time="2025-08-19T00:24:51.567965295Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 19 00:24:51.575829 containerd[1531]: time="2025-08-19T00:24:51.575771669Z" level=info msg="Container 3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:24:51.581672 containerd[1531]: time="2025-08-19T00:24:51.581624089Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\""
Aug 19 00:24:51.582428 containerd[1531]: time="2025-08-19T00:24:51.582394430Z" level=info msg="StartContainer for \"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\""
Aug 19 00:24:51.583423 containerd[1531]: time="2025-08-19T00:24:51.583367527Z" level=info msg="connecting to shim 3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae" address="unix:///run/containerd/s/e3d8bc1507e1105d14b0a606425af537d342384a0236f5496012ee402661df9b" protocol=ttrpc version=3
Aug 19 00:24:51.607599 systemd[1]: Started cri-containerd-3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae.scope - libcontainer container 3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae.
Aug 19 00:24:51.654896 containerd[1531]: time="2025-08-19T00:24:51.654800297Z" level=info msg="StartContainer for \"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\" returns successfully"
Aug 19 00:24:51.683232 systemd[1]: cri-containerd-3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae.scope: Deactivated successfully.
Aug 19 00:24:51.686234 containerd[1531]: time="2025-08-19T00:24:51.686075829Z" level=info msg="received exit event container_id:\"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\" id:\"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\" pid:4540 exited_at:{seconds:1755563091 nanos:685691238}"
Aug 19 00:24:51.686333 containerd[1531]: time="2025-08-19T00:24:51.686303783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\" id:\"3babeda7d291de1a0c5af50a3f4965f4fa79562f490f4ee9ea8b4757929424ae\" pid:4540 exited_at:{seconds:1755563091 nanos:685691238}"
Aug 19 00:24:51.888534 kubelet[2667]: E0819 00:24:51.888418 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:51.896856 containerd[1531]: time="2025-08-19T00:24:51.896812025Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 19 00:24:51.905575 containerd[1531]: time="2025-08-19T00:24:51.905516137Z" level=info msg="Container 81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:24:51.913978 containerd[1531]: time="2025-08-19T00:24:51.913934175Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\""
Aug 19 00:24:51.915573 containerd[1531]: time="2025-08-19T00:24:51.915548457Z" level=info msg="StartContainer for \"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\""
Aug 19 00:24:51.916472 containerd[1531]: time="2025-08-19T00:24:51.916447075Z" level=info msg="connecting to shim 81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6" address="unix:///run/containerd/s/e3d8bc1507e1105d14b0a606425af537d342384a0236f5496012ee402661df9b" protocol=ttrpc version=3
Aug 19 00:24:51.950628 systemd[1]: Started cri-containerd-81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6.scope - libcontainer container 81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6.
Aug 19 00:24:51.982963 containerd[1531]: time="2025-08-19T00:24:51.982898925Z" level=info msg="StartContainer for \"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\" returns successfully"
Aug 19 00:24:52.013635 systemd[1]: cri-containerd-81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6.scope: Deactivated successfully.
Aug 19 00:24:52.014702 containerd[1531]: time="2025-08-19T00:24:52.014621003Z" level=info msg="received exit event container_id:\"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\" id:\"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\" pid:4583 exited_at:{seconds:1755563092 nanos:13758901}"
Aug 19 00:24:52.014761 containerd[1531]: time="2025-08-19T00:24:52.014702162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\" id:\"81f131d4256cb2b7d0378c965aa3ef8900f94bc264f628db2aa1d753ed9babc6\" pid:4583 exited_at:{seconds:1755563092 nanos:13758901}"
Aug 19 00:24:52.892232 kubelet[2667]: E0819 00:24:52.892179 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:52.896626 containerd[1531]: time="2025-08-19T00:24:52.896585323Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 19 00:24:52.914602 containerd[1531]: time="2025-08-19T00:24:52.914558787Z" level=info msg="Container 25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:24:52.922365 containerd[1531]: time="2025-08-19T00:24:52.922307345Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\""
Aug 19 00:24:52.924515 containerd[1531]: time="2025-08-19T00:24:52.923077169Z" level=info msg="StartContainer for \"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\""
Aug 19 00:24:52.924597 containerd[1531]: time="2025-08-19T00:24:52.924565258Z" level=info msg="connecting to shim 25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd" address="unix:///run/containerd/s/e3d8bc1507e1105d14b0a606425af537d342384a0236f5496012ee402661df9b" protocol=ttrpc version=3
Aug 19 00:24:52.946586 systemd[1]: Started cri-containerd-25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd.scope - libcontainer container 25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd.
Aug 19 00:24:52.980968 containerd[1531]: time="2025-08-19T00:24:52.980912800Z" level=info msg="StartContainer for \"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\" returns successfully"
Aug 19 00:24:52.982923 systemd[1]: cri-containerd-25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd.scope: Deactivated successfully.
Aug 19 00:24:52.984664 containerd[1531]: time="2025-08-19T00:24:52.984540684Z" level=info msg="received exit event container_id:\"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\" id:\"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\" pid:4626 exited_at:{seconds:1755563092 nanos:984024735}"
Aug 19 00:24:52.984941 containerd[1531]: time="2025-08-19T00:24:52.984609683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\" id:\"25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd\" pid:4626 exited_at:{seconds:1755563092 nanos:984024735}"
Aug 19 00:24:53.171760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25732cfdec915128f0c51b47930b59214aaa75bd14f7c51dad9890217746c0dd-rootfs.mount: Deactivated successfully.
Aug 19 00:24:53.899755 kubelet[2667]: E0819 00:24:53.899705 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:53.905878 containerd[1531]: time="2025-08-19T00:24:53.905829913Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 19 00:24:53.929031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862335842.mount: Deactivated successfully.
Aug 19 00:24:53.943281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914916732.mount: Deactivated successfully.
Aug 19 00:24:53.943793 containerd[1531]: time="2025-08-19T00:24:53.943736231Z" level=info msg="Container 543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:24:53.959762 containerd[1531]: time="2025-08-19T00:24:53.959711704Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\""
Aug 19 00:24:53.960581 containerd[1531]: time="2025-08-19T00:24:53.960419411Z" level=info msg="StartContainer for \"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\""
Aug 19 00:24:53.961902 containerd[1531]: time="2025-08-19T00:24:53.961868145Z" level=info msg="connecting to shim 543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44" address="unix:///run/containerd/s/e3d8bc1507e1105d14b0a606425af537d342384a0236f5496012ee402661df9b" protocol=ttrpc version=3
Aug 19 00:24:53.988627 systemd[1]: Started cri-containerd-543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44.scope - libcontainer container 543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44.
Aug 19 00:24:54.016238 systemd[1]: cri-containerd-543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44.scope: Deactivated successfully.
Aug 19 00:24:54.017907 containerd[1531]: time="2025-08-19T00:24:54.017284114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\" id:\"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\" pid:4665 exited_at:{seconds:1755563094 nanos:16836761}"
Aug 19 00:24:54.027422 containerd[1531]: time="2025-08-19T00:24:54.026679972Z" level=info msg="received exit event container_id:\"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\" id:\"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\" pid:4665 exited_at:{seconds:1755563094 nanos:16836761}"
Aug 19 00:24:54.028053 containerd[1531]: time="2025-08-19T00:24:54.028023392Z" level=info msg="StartContainer for \"543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44\" returns successfully"
Aug 19 00:24:54.032398 containerd[1531]: time="2025-08-19T00:24:54.022046762Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecb57eeb_2819_487d_ae95_0f31958fdc8a.slice/cri-containerd-543bd79bee0cdffae8c38c5a9546148d3458b3f1929a729e33f9ef6666bbcf44.scope/cgroup.events\": no such file or directory"
Aug 19 00:24:54.589364 kubelet[2667]: E0819 00:24:54.589290 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:54.668809 kubelet[2667]: E0819 00:24:54.668758 2667 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 19 00:24:54.907141 kubelet[2667]: E0819 00:24:54.907020 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:54.916114 containerd[1531]: time="2025-08-19T00:24:54.911548337Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 19 00:24:54.993051 containerd[1531]: time="2025-08-19T00:24:54.992998784Z" level=info msg="Container 4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:24:55.054130 containerd[1531]: time="2025-08-19T00:24:55.054069204Z" level=info msg="CreateContainer within sandbox \"f1ed2936691a7b2b7a274af0c1e33f2ee836869614af9881668bdd539fefc61b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\""
Aug 19 00:24:55.054666 containerd[1531]: time="2025-08-19T00:24:55.054631597Z" level=info msg="StartContainer for \"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\""
Aug 19 00:24:55.057067 containerd[1531]: time="2025-08-19T00:24:55.057023168Z" level=info msg="connecting to shim 4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1" address="unix:///run/containerd/s/e3d8bc1507e1105d14b0a606425af537d342384a0236f5496012ee402661df9b" protocol=ttrpc version=3
Aug 19 00:24:55.101616 systemd[1]: Started cri-containerd-4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1.scope - libcontainer container 4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1.
Aug 19 00:24:55.160361 containerd[1531]: time="2025-08-19T00:24:55.160225689Z" level=info msg="StartContainer for \"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\" returns successfully"
Aug 19 00:24:55.228763 containerd[1531]: time="2025-08-19T00:24:55.228722881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\" id:\"0d9cce3f737b7bc277ae1fff3b01e8b0e357f5d173c431180fa15e163e8d88a5\" pid:4735 exited_at:{seconds:1755563095 nanos:228088688}"
Aug 19 00:24:55.441412 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Aug 19 00:24:55.914533 kubelet[2667]: E0819 00:24:55.914374 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:55.939344 kubelet[2667]: I0819 00:24:55.939270 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t6jb8" podStartSLOduration=5.939251999 podStartE2EDuration="5.939251999s" podCreationTimestamp="2025-08-19 00:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:24:55.938815084 +0000 UTC m=+91.453490091" watchObservedRunningTime="2025-08-19 00:24:55.939251999 +0000 UTC m=+91.453927006"
Aug 19 00:24:56.589402 kubelet[2667]: E0819 00:24:56.589295 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k8j77" podUID="160b9868-26cb-43ed-9fdc-c0e749c2d60d"
Aug 19 00:24:56.811242 kubelet[2667]: I0819 00:24:56.811180 2667 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-19T00:24:56Z","lastTransitionTime":"2025-08-19T00:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 19 00:24:57.479609 kubelet[2667]: E0819 00:24:57.478518 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:57.539211 containerd[1531]: time="2025-08-19T00:24:57.539151689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\" id:\"dbf3c42f218f0c1502e59c65c426b4cbb41909b18eb55e09752aba5a9fb5ad4d\" pid:4948 exit_status:1 exited_at:{seconds:1755563097 nanos:537279183}"
Aug 19 00:24:58.561012 systemd-networkd[1424]: lxc_health: Link UP
Aug 19 00:24:58.575789 systemd-networkd[1424]: lxc_health: Gained carrier
Aug 19 00:24:58.589823 kubelet[2667]: E0819 00:24:58.589774 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k8j77" podUID="160b9868-26cb-43ed-9fdc-c0e749c2d60d"
Aug 19 00:24:59.479721 kubelet[2667]: E0819 00:24:59.479685 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:24:59.685875 containerd[1531]: time="2025-08-19T00:24:59.685823908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\" id:\"c96957212a9cd79c759ed7d72f224114f8ba4d5c1bbd766b28d2a5d0f9fe570f\" pid:5268 exited_at:{seconds:1755563099 nanos:685305749}"
Aug 19 00:24:59.823569 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Aug 19 00:24:59.921814 kubelet[2667]: E0819 00:24:59.921454 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:25:00.590983 kubelet[2667]: E0819 00:25:00.590555 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:25:00.923448 kubelet[2667]: E0819 00:25:00.923306 2667 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:25:01.857507 containerd[1531]: time="2025-08-19T00:25:01.857462112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\" id:\"d3b683baf16746d815807802860b099b6f553fb6130cd26944ec40fb7849c306\" pid:5299 exited_at:{seconds:1755563101 nanos:857028190}"
Aug 19 00:25:03.979568 containerd[1531]: time="2025-08-19T00:25:03.979319570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4407d1145b8b6e69a6f69b7488221e772414b49b32425609cfa62d2537ffefa1\" id:\"b77482760513d6276812e00c9e525cdc9ebcf12ed55a712f5d41f9757ad7843d\" pid:5331 exited_at:{seconds:1755563103 nanos:978990328}"
Aug 19 00:25:03.987562 sshd[4468]: Connection closed by 10.0.0.1 port 42822
Aug 19 00:25:03.988337 sshd-session[4465]: pam_unix(sshd:session): session closed for user core
Aug 19 00:25:03.992776 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:42822.service: Deactivated successfully.
Aug 19 00:25:03.995224 systemd[1]: session-27.scope: Deactivated successfully.
Aug 19 00:25:03.998192 systemd-logind[1502]: Session 27 logged out. Waiting for processes to exit.
Aug 19 00:25:03.999991 systemd-logind[1502]: Removed session 27.