Sep 12 10:09:19.050686 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025
Sep 12 10:09:19.050713 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:09:19.050727 kernel: BIOS-provided physical RAM map:
Sep 12 10:09:19.050736 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 10:09:19.050744 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 10:09:19.050751 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 10:09:19.050761 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 10:09:19.050769 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 10:09:19.050777 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 10:09:19.050797 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 10:09:19.050805 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 10:09:19.050813 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 10:09:19.050825 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 10:09:19.050833 kernel: NX (Execute Disable) protection: active
Sep 12 10:09:19.050843 kernel: APIC: Static calls initialized
Sep 12 10:09:19.050858 kernel: SMBIOS 2.8 present.
Sep 12 10:09:19.050867 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 10:09:19.050876 kernel: Hypervisor detected: KVM
Sep 12 10:09:19.050884 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 10:09:19.050893 kernel: kvm-clock: using sched offset of 4370880891 cycles
Sep 12 10:09:19.050902 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 10:09:19.050911 kernel: tsc: Detected 2794.750 MHz processor
Sep 12 10:09:19.050920 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 10:09:19.050929 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 10:09:19.050938 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 10:09:19.050950 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 10:09:19.050959 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 10:09:19.050968 kernel: Using GB pages for direct mapping
Sep 12 10:09:19.050977 kernel: ACPI: Early table checksum verification disabled
Sep 12 10:09:19.050986 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 10:09:19.050995 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051003 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051012 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051021 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 10:09:19.051033 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051042 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051051 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051060 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 10:09:19.051069 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 10:09:19.051078 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 10:09:19.051091 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 10:09:19.051103 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 10:09:19.051112 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 10:09:19.051121 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 10:09:19.051131 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 10:09:19.051142 kernel: No NUMA configuration found
Sep 12 10:09:19.051151 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 10:09:19.051161 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 12 10:09:19.051172 kernel: Zone ranges:
Sep 12 10:09:19.051182 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 10:09:19.051191 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 10:09:19.051200 kernel: Normal empty
Sep 12 10:09:19.051209 kernel: Movable zone start for each node
Sep 12 10:09:19.051218 kernel: Early memory node ranges
Sep 12 10:09:19.051227 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 10:09:19.051236 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 10:09:19.051246 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 10:09:19.051258 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 10:09:19.051270 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 10:09:19.051279 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 10:09:19.051288 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 10:09:19.051297 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 10:09:19.051307 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 10:09:19.051316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 10:09:19.051325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 10:09:19.051334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 10:09:19.051347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 10:09:19.051356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 10:09:19.051378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 10:09:19.051405 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 10:09:19.051416 kernel: TSC deadline timer available
Sep 12 10:09:19.051425 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 10:09:19.051435 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 10:09:19.051444 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 10:09:19.051456 kernel: kvm-guest: setup PV sched yield
Sep 12 10:09:19.051465 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 10:09:19.051478 kernel: Booting paravirtualized kernel on KVM
Sep 12 10:09:19.051488 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 10:09:19.051497 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 10:09:19.051507 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 10:09:19.051516 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 10:09:19.051525 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 10:09:19.051534 kernel: kvm-guest: PV spinlocks enabled
Sep 12 10:09:19.051544 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 10:09:19.051555 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2
Sep 12 10:09:19.051568 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 10:09:19.051577 kernel: random: crng init done
Sep 12 10:09:19.051587 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 10:09:19.051596 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 10:09:19.051606 kernel: Fallback order for Node 0: 0
Sep 12 10:09:19.051615 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 12 10:09:19.051727 kernel: Policy zone: DMA32
Sep 12 10:09:19.051736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 10:09:19.051751 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 138948K reserved, 0K cma-reserved)
Sep 12 10:09:19.051760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 10:09:19.051769 kernel: ftrace: allocating 37946 entries in 149 pages
Sep 12 10:09:19.051779 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 10:09:19.051795 kernel: Dynamic Preempt: voluntary
Sep 12 10:09:19.051805 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 10:09:19.051815 kernel: rcu: RCU event tracing is enabled.
Sep 12 10:09:19.051824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 10:09:19.051834 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 10:09:19.051847 kernel: Rude variant of Tasks RCU enabled.
Sep 12 10:09:19.051856 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 10:09:19.051866 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 10:09:19.051878 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 10:09:19.051887 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 10:09:19.051897 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 10:09:19.051906 kernel: Console: colour VGA+ 80x25
Sep 12 10:09:19.051915 kernel: printk: console [ttyS0] enabled
Sep 12 10:09:19.051924 kernel: ACPI: Core revision 20230628
Sep 12 10:09:19.051937 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 10:09:19.051946 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 10:09:19.051955 kernel: x2apic enabled
Sep 12 10:09:19.051964 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 10:09:19.051974 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 10:09:19.051983 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 10:09:19.051993 kernel: kvm-guest: setup PV IPIs
Sep 12 10:09:19.052014 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 10:09:19.052023 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 10:09:19.052033 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 12 10:09:19.052043 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 10:09:19.052052 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 10:09:19.052064 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 10:09:19.052074 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 10:09:19.052084 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 10:09:19.052094 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 10:09:19.052106 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 10:09:19.052116 kernel: active return thunk: retbleed_return_thunk
Sep 12 10:09:19.052128 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 10:09:19.052138 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 10:09:19.052147 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 10:09:19.052157 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 10:09:19.052185 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 10:09:19.052208 kernel: active return thunk: srso_return_thunk
Sep 12 10:09:19.052218 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 10:09:19.052232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 10:09:19.052241 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 10:09:19.052254 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 10:09:19.052264 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 10:09:19.052273 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 10:09:19.052283 kernel: Freeing SMP alternatives memory: 32K
Sep 12 10:09:19.052293 kernel: pid_max: default: 32768 minimum: 301
Sep 12 10:09:19.052302 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 10:09:19.052312 kernel: landlock: Up and running.
Sep 12 10:09:19.052324 kernel: SELinux: Initializing.
Sep 12 10:09:19.052335 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 10:09:19.052344 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 10:09:19.052354 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 10:09:19.052364 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 10:09:19.052374 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 10:09:19.052384 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 10:09:19.052396 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 10:09:19.052408 kernel: ... version: 0
Sep 12 10:09:19.052418 kernel: ... bit width: 48
Sep 12 10:09:19.052428 kernel: ... generic registers: 6
Sep 12 10:09:19.052437 kernel: ... value mask: 0000ffffffffffff
Sep 12 10:09:19.052447 kernel: ... max period: 00007fffffffffff
Sep 12 10:09:19.052456 kernel: ... fixed-purpose events: 0
Sep 12 10:09:19.052466 kernel: ... event mask: 000000000000003f
Sep 12 10:09:19.052475 kernel: signal: max sigframe size: 1776
Sep 12 10:09:19.052485 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 10:09:19.052495 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 10:09:19.052507 kernel: smp: Bringing up secondary CPUs ...
Sep 12 10:09:19.052517 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 10:09:19.052526 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 10:09:19.052536 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 10:09:19.052545 kernel: smpboot: Max logical packages: 1
Sep 12 10:09:19.052555 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 12 10:09:19.052564 kernel: devtmpfs: initialized
Sep 12 10:09:19.052574 kernel: x86/mm: Memory block size: 128MB
Sep 12 10:09:19.052584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 10:09:19.052596 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 10:09:19.052606 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 10:09:19.052615 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 10:09:19.052637 kernel: audit: initializing netlink subsys (disabled)
Sep 12 10:09:19.052647 kernel: audit: type=2000 audit(1757671758.027:1): state=initialized audit_enabled=0 res=1
Sep 12 10:09:19.052656 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 10:09:19.052666 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 10:09:19.052676 kernel: cpuidle: using governor menu
Sep 12 10:09:19.052685 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 10:09:19.052698 kernel: dca service started, version 1.12.1
Sep 12 10:09:19.052708 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 10:09:19.052717 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 10:09:19.052727 kernel: PCI: Using configuration type 1 for base access
Sep 12 10:09:19.052737 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 10:09:19.052746 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 10:09:19.052756 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 10:09:19.052766 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 10:09:19.052776 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 10:09:19.052795 kernel: ACPI: Added _OSI(Module Device)
Sep 12 10:09:19.052805 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 10:09:19.052814 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 10:09:19.052824 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 10:09:19.052834 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 10:09:19.052843 kernel: ACPI: Interpreter enabled
Sep 12 10:09:19.052852 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 10:09:19.052862 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 10:09:19.052872 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 10:09:19.052884 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 10:09:19.052894 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 10:09:19.052904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 10:09:19.053183 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 10:09:19.053342 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 10:09:19.053490 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 10:09:19.053503 kernel: PCI host bridge to bus 0000:00
Sep 12 10:09:19.053689 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 10:09:19.053856 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 10:09:19.053993 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 10:09:19.054125 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 10:09:19.054258 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 10:09:19.054392 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 10:09:19.054532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 10:09:19.054753 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 10:09:19.054927 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 10:09:19.055075 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 12 10:09:19.055220 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 12 10:09:19.055364 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 12 10:09:19.055507 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 10:09:19.055696 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 10:09:19.055859 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 10:09:19.056005 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 12 10:09:19.056155 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 10:09:19.056320 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 10:09:19.056468 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 10:09:19.056614 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 12 10:09:19.056794 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 10:09:19.056967 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 10:09:19.057117 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 12 10:09:19.057268 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 12 10:09:19.057471 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 10:09:19.057651 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 12 10:09:19.057844 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 10:09:19.058000 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 10:09:19.058165 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 10:09:19.058311 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 12 10:09:19.058456 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 12 10:09:19.058685 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 10:09:19.058850 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 10:09:19.058864 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 10:09:19.058879 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 10:09:19.058889 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 10:09:19.058898 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 10:09:19.058908 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 10:09:19.058917 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 10:09:19.058927 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 10:09:19.058937 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 10:09:19.058947 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 10:09:19.058956 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 10:09:19.058969 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 10:09:19.058979 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 10:09:19.058988 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 10:09:19.058998 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 10:09:19.059007 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 10:09:19.059017 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 10:09:19.059027 kernel: iommu: Default domain type: Translated
Sep 12 10:09:19.059036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 10:09:19.059046 kernel: PCI: Using ACPI for IRQ routing
Sep 12 10:09:19.059059 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 10:09:19.059069 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 10:09:19.059079 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 10:09:19.059226 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 10:09:19.059369 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 10:09:19.059511 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 10:09:19.059524 kernel: vgaarb: loaded
Sep 12 10:09:19.059534 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 10:09:19.059548 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 10:09:19.059558 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 10:09:19.059567 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 10:09:19.059578 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 10:09:19.059587 kernel: pnp: PnP ACPI init
Sep 12 10:09:19.059773 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 10:09:19.059798 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 10:09:19.059808 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 10:09:19.059822 kernel: NET: Registered PF_INET protocol family
Sep 12 10:09:19.059831 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 10:09:19.059841 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 10:09:19.059851 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 10:09:19.059861 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 10:09:19.059871 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 10:09:19.059881 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 10:09:19.059890 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 10:09:19.059900 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 10:09:19.059913 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 10:09:19.059923 kernel: NET: Registered PF_XDP protocol family
Sep 12 10:09:19.060060 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 10:09:19.060191 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 10:09:19.060321 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 10:09:19.060451 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 10:09:19.060585 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 10:09:19.060732 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 10:09:19.060750 kernel: PCI: CLS 0 bytes, default 64
Sep 12 10:09:19.060760 kernel: Initialise system trusted keyrings
Sep 12 10:09:19.060770 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 10:09:19.060779 kernel: Key type asymmetric registered
Sep 12 10:09:19.060798 kernel: Asymmetric key parser 'x509' registered
Sep 12 10:09:19.060808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 10:09:19.060817 kernel: io scheduler mq-deadline registered
Sep 12 10:09:19.060827 kernel: io scheduler kyber registered
Sep 12 10:09:19.060837 kernel: io scheduler bfq registered
Sep 12 10:09:19.060850 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 10:09:19.060861 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 10:09:19.060870 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 10:09:19.060880 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 10:09:19.060890 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 10:09:19.060899 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 10:09:19.060909 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 10:09:19.060919 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 10:09:19.060929 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 10:09:19.061090 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 10:09:19.061105 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 10:09:19.061240 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 10:09:19.061375 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T10:09:18 UTC (1757671758)
Sep 12 10:09:19.061511 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 10:09:19.061523 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 10:09:19.061533 kernel: NET: Registered PF_INET6 protocol family
Sep 12 10:09:19.061543 kernel: Segment Routing with IPv6
Sep 12 10:09:19.061557 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 10:09:19.061566 kernel: NET: Registered PF_PACKET protocol family
Sep 12 10:09:19.061576 kernel: Key type dns_resolver registered
Sep 12 10:09:19.061585 kernel: IPI shorthand broadcast: enabled
Sep 12 10:09:19.061595 kernel: sched_clock: Marking stable (923002705, 198134502)->(1216476473, -95339266)
Sep 12 10:09:19.061605 kernel: registered taskstats version 1
Sep 12 10:09:19.061614 kernel: Loading compiled-in X.509 certificates
Sep 12 10:09:19.061638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9'
Sep 12 10:09:19.061647 kernel: Key type .fscrypt registered
Sep 12 10:09:19.061660 kernel: Key type fscrypt-provisioning registered
Sep 12 10:09:19.061670 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 10:09:19.061680 kernel: ima: Allocated hash algorithm: sha1
Sep 12 10:09:19.061689 kernel: ima: No architecture policies found
Sep 12 10:09:19.061699 kernel: clk: Disabling unused clocks
Sep 12 10:09:19.061708 kernel: Freeing unused kernel image (initmem) memory: 43508K
Sep 12 10:09:19.061718 kernel: Write protecting the kernel read-only data: 38912k
Sep 12 10:09:19.061728 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K
Sep 12 10:09:19.061737 kernel: Run /init as init process
Sep 12 10:09:19.061750 kernel: with arguments:
Sep 12 10:09:19.061759 kernel: /init
Sep 12 10:09:19.061769 kernel: with environment:
Sep 12 10:09:19.061778 kernel: HOME=/
Sep 12 10:09:19.061795 kernel: TERM=linux
Sep 12 10:09:19.061809 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 10:09:19.061828 systemd[1]: Successfully made /usr/ read-only.
Sep 12 10:09:19.061860 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:09:19.061889 systemd[1]: Detected virtualization kvm.
Sep 12 10:09:19.061916 systemd[1]: Detected architecture x86-64.
Sep 12 10:09:19.061952 systemd[1]: Running in initrd.
Sep 12 10:09:19.061974 systemd[1]: No hostname configured, using default hostname.
Sep 12 10:09:19.062003 systemd[1]: Hostname set to .
Sep 12 10:09:19.062024 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 10:09:19.062051 systemd[1]: Queued start job for default target initrd.target.
Sep 12 10:09:19.062077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:09:19.062110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:09:19.062171 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 10:09:19.062202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:09:19.062224 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 10:09:19.062255 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 10:09:19.062292 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 10:09:19.062322 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 10:09:19.062343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:09:19.062372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:09:19.062398 systemd[1]: Reached target paths.target - Path Units.
Sep 12 10:09:19.062424 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:09:19.062449 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:09:19.062474 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 10:09:19.062506 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:09:19.062528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:09:19.062557 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 10:09:19.062583 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 10:09:19.062609 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:09:19.062647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:09:19.062669 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:09:19.062694 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:09:19.062716 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 10:09:19.062730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:09:19.062741 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 10:09:19.062752 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 10:09:19.062765 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:09:19.062776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:09:19.062794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:09:19.062805 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 10:09:19.062815 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:09:19.062830 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 10:09:19.062841 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 10:09:19.062890 systemd-journald[193]: Collecting audit messages is disabled. Sep 12 10:09:19.062915 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 10:09:19.062926 systemd-journald[193]: Journal started Sep 12 10:09:19.062953 systemd-journald[193]: Runtime Journal (/run/log/journal/fc74856014574e9591dd42ae9839f012) is 6M, max 48.4M, 42.3M free. Sep 12 10:09:19.036803 systemd-modules-load[194]: Inserted module 'overlay' Sep 12 10:09:19.084981 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 12 10:09:19.085012 kernel: Bridge firewalling registered Sep 12 10:09:19.065401 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 12 10:09:19.086977 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:09:19.088557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:09:19.091386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:09:19.110112 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:09:19.113704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:09:19.116708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:09:19.122896 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:09:19.130316 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:09:19.134792 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 10:09:19.135256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:09:19.137618 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:09:19.149788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:09:19.158831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 10:09:19.163475 dracut-cmdline[228]: dracut-dracut-053 Sep 12 10:09:19.167725 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:09:19.208738 systemd-resolved[236]: Positive Trust Anchors: Sep 12 10:09:19.208761 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:09:19.208813 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:09:19.212333 systemd-resolved[236]: Defaulting to hostname 'linux'. Sep 12 10:09:19.213991 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:09:19.219088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:09:19.288678 kernel: SCSI subsystem initialized Sep 12 10:09:19.298668 kernel: Loading iSCSI transport class v2.0-870. Sep 12 10:09:19.311678 kernel: iscsi: registered transport (tcp) Sep 12 10:09:19.336923 kernel: iscsi: registered transport (qla4xxx) Sep 12 10:09:19.337038 kernel: QLogic iSCSI HBA Driver Sep 12 10:09:19.395656 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Sep 12 10:09:19.404769 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 10:09:19.433688 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 10:09:19.433790 kernel: device-mapper: uevent: version 1.0.3 Sep 12 10:09:19.433808 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 10:09:19.480685 kernel: raid6: avx2x4 gen() 22447 MB/s Sep 12 10:09:19.514683 kernel: raid6: avx2x2 gen() 20170 MB/s Sep 12 10:09:19.544919 kernel: raid6: avx2x1 gen() 16932 MB/s Sep 12 10:09:19.544989 kernel: raid6: using algorithm avx2x4 gen() 22447 MB/s Sep 12 10:09:19.576684 kernel: raid6: .... xor() 6889 MB/s, rmw enabled Sep 12 10:09:19.576768 kernel: raid6: using avx2x2 recovery algorithm Sep 12 10:09:19.598670 kernel: xor: automatically using best checksumming function avx Sep 12 10:09:19.761674 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 10:09:19.776840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:09:19.786783 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:09:19.803670 systemd-udevd[416]: Using default interface naming scheme 'v255'. Sep 12 10:09:19.809719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:09:19.816985 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 10:09:19.834977 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Sep 12 10:09:19.880384 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:09:19.907023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:09:19.986742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:09:20.000200 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 10:09:20.014383 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 10:09:20.020376 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:09:20.023770 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:09:20.026710 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:09:20.035849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 10:09:20.041660 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 10:09:20.060783 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 10:09:20.066838 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 10:09:20.066886 kernel: AES CTR mode by8 optimization enabled Sep 12 10:09:20.068619 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:09:20.069580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:09:20.075089 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:09:20.085325 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 10:09:20.089819 kernel: libata version 3.00 loaded. Sep 12 10:09:20.089836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 10:09:20.089848 kernel: GPT:9289727 != 19775487 Sep 12 10:09:20.089859 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 10:09:20.089869 kernel: GPT:9289727 != 19775487 Sep 12 10:09:20.089879 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 10:09:20.089896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:09:20.078103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 10:09:20.095074 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 10:09:20.095313 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 10:09:20.078301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:09:20.081372 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:09:20.093915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:09:20.099669 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 10:09:20.099904 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 10:09:20.104663 kernel: scsi host0: ahci Sep 12 10:09:20.105027 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:09:20.107816 kernel: scsi host1: ahci Sep 12 10:09:20.108040 kernel: scsi host2: ahci Sep 12 10:09:20.110668 kernel: scsi host3: ahci Sep 12 10:09:20.143654 kernel: scsi host4: ahci Sep 12 10:09:20.145640 kernel: scsi host5: ahci Sep 12 10:09:20.145870 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 12 10:09:20.145885 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 12 10:09:20.145898 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 12 10:09:20.145911 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 12 10:09:20.145923 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 12 10:09:20.145936 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 12 10:09:20.201553 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (458) Sep 12 10:09:20.201664 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (475) Sep 12 10:09:20.161233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Sep 12 10:09:20.230359 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:09:20.245529 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 10:09:20.246015 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 10:09:20.258735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 10:09:20.268728 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 10:09:20.288809 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 10:09:20.290219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:09:20.317928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:09:20.424128 disk-uuid[559]: Primary Header is updated. Sep 12 10:09:20.424128 disk-uuid[559]: Secondary Entries is updated. Sep 12 10:09:20.424128 disk-uuid[559]: Secondary Header is updated. 
Sep 12 10:09:20.428664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:09:20.435662 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:09:20.511447 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 10:09:20.511563 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 10:09:20.511603 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 10:09:20.512769 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 10:09:20.516647 kernel: ata3.00: applying bridge limits Sep 12 10:09:20.516682 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 10:09:20.516694 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 10:09:20.516705 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 10:09:20.517648 kernel: ata3.00: configured for UDMA/100 Sep 12 10:09:20.519650 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 10:09:20.581705 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 10:09:20.582140 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 10:09:20.598669 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 10:09:21.434691 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:09:21.435309 disk-uuid[568]: The operation has completed successfully. Sep 12 10:09:21.489996 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 10:09:21.490125 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 10:09:21.518845 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 10:09:21.525298 sh[593]: Success Sep 12 10:09:21.540658 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 10:09:21.586211 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 10:09:21.610156 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 10:09:21.615701 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 10:09:21.651679 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb Sep 12 10:09:21.651748 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:09:21.651760 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 10:09:21.652797 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 10:09:21.654266 kernel: BTRFS info (device dm-0): using free space tree Sep 12 10:09:21.660872 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 10:09:21.662328 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 10:09:21.674950 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 10:09:21.677459 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 10:09:21.699383 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:09:21.699461 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:09:21.699475 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:09:21.702671 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:09:21.708687 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:09:21.716034 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 10:09:21.721975 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 10:09:21.871575 ignition[677]: Ignition 2.20.0 Sep 12 10:09:21.871588 ignition[677]: Stage: fetch-offline Sep 12 10:09:21.871679 ignition[677]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:09:21.871692 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:09:21.871887 ignition[677]: parsed url from cmdline: "" Sep 12 10:09:21.871891 ignition[677]: no config URL provided Sep 12 10:09:21.871896 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:09:21.871906 ignition[677]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:09:21.871934 ignition[677]: op(1): [started] loading QEMU firmware config module Sep 12 10:09:21.871940 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 10:09:21.884589 ignition[677]: op(1): [finished] loading QEMU firmware config module Sep 12 10:09:21.933652 ignition[677]: parsing config with SHA512: 57fcf81d08828690d2e582da5e938c7d5c64a7e19c215e4aa23841fb94e6b1e3da3d30845c9f724483bd5540d4a2675585ef5f343ddd3e437707c79f22e77087 Sep 12 10:09:21.949307 unknown[677]: fetched base config from "system" Sep 12 10:09:21.949927 ignition[677]: fetch-offline: fetch-offline passed Sep 12 10:09:21.949324 unknown[677]: fetched user config from "qemu" Sep 12 10:09:21.950056 ignition[677]: Ignition finished successfully Sep 12 10:09:21.952748 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:09:21.958063 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:09:21.970833 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:09:22.004202 systemd-networkd[779]: lo: Link UP Sep 12 10:09:22.004215 systemd-networkd[779]: lo: Gained carrier Sep 12 10:09:22.007093 systemd-networkd[779]: Enumeration completed Sep 12 10:09:22.007386 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 10:09:22.007687 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:09:22.007694 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:09:22.008876 systemd-networkd[779]: eth0: Link UP Sep 12 10:09:22.008881 systemd-networkd[779]: eth0: Gained carrier Sep 12 10:09:22.008890 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:09:22.011169 systemd[1]: Reached target network.target - Network. Sep 12 10:09:22.014342 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 10:09:22.021871 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 10:09:22.023701 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 10:09:22.102256 ignition[782]: Ignition 2.20.0 Sep 12 10:09:22.102276 ignition[782]: Stage: kargs Sep 12 10:09:22.102517 ignition[782]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:09:22.102534 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:09:22.104259 ignition[782]: kargs: kargs passed Sep 12 10:09:22.104315 ignition[782]: Ignition finished successfully Sep 12 10:09:22.107433 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 10:09:22.127135 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 10:09:22.185253 ignition[792]: Ignition 2.20.0 Sep 12 10:09:22.185267 ignition[792]: Stage: disks Sep 12 10:09:22.185519 ignition[792]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:09:22.185535 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:09:22.189503 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 12 10:09:22.186675 ignition[792]: disks: disks passed Sep 12 10:09:22.191068 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 10:09:22.186755 ignition[792]: Ignition finished successfully Sep 12 10:09:22.193984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 10:09:22.195664 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:09:22.197309 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:09:22.198413 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:09:22.214820 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 10:09:22.230499 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.72 Sep 12 10:09:22.230522 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Sep 12 10:09:22.235087 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 10:09:22.243210 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 10:09:22.248906 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 10:09:22.372649 kernel: EXT4-fs (vda9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none. Sep 12 10:09:22.373439 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 10:09:22.376008 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 10:09:22.390735 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:09:22.393389 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 10:09:22.395765 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 12 10:09:22.395818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 10:09:22.395844 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:09:22.404237 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (810) Sep 12 10:09:22.404284 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:09:22.404298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:09:22.405746 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:09:22.406354 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 10:09:22.409168 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:09:22.411057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:09:22.423846 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 10:09:22.469452 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 10:09:22.476691 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Sep 12 10:09:22.482378 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 10:09:22.488132 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 10:09:22.559085 systemd-resolved[236]: Detected conflict on linux8 IN A 10.0.0.72 Sep 12 10:09:22.559106 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux8' to 'linux16'. Sep 12 10:09:22.682809 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 10:09:22.688741 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 10:09:22.691254 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 10:09:22.700735 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 12 10:09:22.702008 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:09:22.728592 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 10:09:22.740524 ignition[923]: INFO : Ignition 2.20.0 Sep 12 10:09:22.740524 ignition[923]: INFO : Stage: mount Sep 12 10:09:22.742715 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:09:22.742715 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:09:22.742715 ignition[923]: INFO : mount: mount passed Sep 12 10:09:22.742715 ignition[923]: INFO : Ignition finished successfully Sep 12 10:09:22.749353 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 10:09:22.755009 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 10:09:22.763904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:09:22.781608 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (936) Sep 12 10:09:22.781685 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:09:22.781713 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:09:22.782492 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:09:22.786665 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:09:22.788657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 10:09:22.824154 ignition[953]: INFO : Ignition 2.20.0 Sep 12 10:09:22.824154 ignition[953]: INFO : Stage: files Sep 12 10:09:22.826250 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:09:22.826250 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:09:22.826250 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Sep 12 10:09:22.830457 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 10:09:22.830457 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 10:09:22.833587 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 10:09:22.833587 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 10:09:22.836757 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 10:09:22.836757 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 10:09:22.836757 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 10:09:22.834193 unknown[953]: wrote ssh authorized keys file for user: core Sep 12 10:09:22.886383 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 10:09:23.009274 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 10:09:23.009274 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:09:23.013532 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 10:09:23.309334 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 10:09:23.560821 systemd-networkd[779]: eth0: Gained IPv6LL Sep 12 10:09:23.568162 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:09:23.570131 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 10:09:23.587920 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:09:23.587920 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:09:23.587920 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:09:23.587920 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 10:09:23.970122 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 10:09:24.972678 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 10:09:24.972678 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 10:09:24.977273 ignition[953]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 10:09:25.011015 ignition[953]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 10:09:25.018951 ignition[953]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 10:09:25.020557 ignition[953]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 10:09:25.020557 ignition[953]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 10:09:25.020557 ignition[953]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 10:09:25.020557 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:09:25.020557 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:09:25.020557 ignition[953]: INFO : files: files passed Sep 12 10:09:25.020557 ignition[953]: INFO : Ignition finished successfully Sep 12 10:09:25.032453 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 10:09:25.038099 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 10:09:25.041325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 10:09:25.044587 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 10:09:25.045744 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 10:09:25.052867 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 10:09:25.057558 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:09:25.057558 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:09:25.063905 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 10:09:25.059286 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:09:25.061250 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 10:09:25.073798 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 10:09:25.098846 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 10:09:25.098976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 10:09:25.101497 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 10:09:25.103872 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 10:09:25.105093 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 10:09:25.106002 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 10:09:25.128217 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:09:25.147152 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 10:09:25.159610 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:09:25.162283 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:09:25.164859 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 10:09:25.165336 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 10:09:25.165544 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 10:09:25.169610 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 10:09:25.170217 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 10:09:25.170611 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 10:09:25.171213 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 10:09:25.171667 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 10:09:25.180816 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 10:09:25.181320 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 10:09:25.181733 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 10:09:25.182152 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 10:09:25.182573 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 10:09:25.183168 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 10:09:25.183321 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 10:09:25.194282 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:09:25.195050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:09:25.195393 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 10:09:25.195578 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:09:25.200477 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 10:09:25.200707 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 10:09:25.205260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 10:09:25.205481 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 10:09:25.206079 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 10:09:25.209345 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 10:09:25.213951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:09:25.217009 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 10:09:25.219152 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 10:09:25.221479 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 10:09:25.222659 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 10:09:25.224741 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 10:09:25.225768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 10:09:25.228011 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 10:09:25.229240 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 10:09:25.231823 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 10:09:25.232899 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 10:09:25.244813 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 10:09:25.247968 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 10:09:25.250170 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 10:09:25.251609 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:09:25.254603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 10:09:25.256006 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 10:09:25.262096 ignition[1008]: INFO : Ignition 2.20.0
Sep 12 10:09:25.262096 ignition[1008]: INFO : Stage: umount
Sep 12 10:09:25.264870 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 10:09:25.264870 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 10:09:25.264870 ignition[1008]: INFO : umount: umount passed
Sep 12 10:09:25.264870 ignition[1008]: INFO : Ignition finished successfully
Sep 12 10:09:25.270791 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 10:09:25.272034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 10:09:25.275118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 10:09:25.276513 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 10:09:25.282364 systemd[1]: Stopped target network.target - Network.
Sep 12 10:09:25.284533 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 10:09:25.284658 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 10:09:25.288306 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 10:09:25.288384 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 10:09:25.291893 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 10:09:25.291967 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 10:09:25.295304 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 10:09:25.296786 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 10:09:25.299283 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 10:09:25.301717 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 10:09:25.305329 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 10:09:25.307293 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 10:09:25.308491 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 10:09:25.313278 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 10:09:25.313611 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 10:09:25.313884 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 10:09:25.318011 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 10:09:25.320031 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 10:09:25.320086 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:09:25.329769 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 10:09:25.332004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 10:09:25.332079 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 10:09:25.335865 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 10:09:25.335924 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:09:25.338026 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 10:09:25.338080 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:09:25.341907 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 10:09:25.341964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:09:25.345404 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:09:25.349266 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 10:09:25.350583 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:09:25.362647 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 10:09:25.363826 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:09:25.366676 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 10:09:25.367705 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 10:09:25.370804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 10:09:25.371946 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:09:25.374204 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 10:09:25.374251 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:09:25.377253 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 10:09:25.378331 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 10:09:25.380778 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 10:09:25.380855 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 10:09:25.384096 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 10:09:25.385076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 10:09:25.399807 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 10:09:25.400050 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 10:09:25.400110 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:09:25.403210 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 10:09:25.403264 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:09:25.403928 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 10:09:25.403990 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:09:25.407597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 10:09:25.407675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:09:25.412460 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 10:09:25.412529 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 10:09:25.412985 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 10:09:25.413111 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 10:09:25.484669 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 10:09:25.484824 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 10:09:25.485694 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 10:09:25.488214 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 10:09:25.488274 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 10:09:25.500774 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 10:09:25.511440 systemd[1]: Switching root.
Sep 12 10:09:25.547127 systemd-journald[193]: Journal stopped
Sep 12 10:09:27.530615 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 12 10:09:27.530709 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 10:09:27.530729 kernel: SELinux: policy capability open_perms=1
Sep 12 10:09:27.530744 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 10:09:27.530756 kernel: SELinux: policy capability always_check_network=0
Sep 12 10:09:27.530778 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 10:09:27.530802 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 10:09:27.530814 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 10:09:27.530826 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 10:09:27.530838 kernel: audit: type=1403 audit(1757671766.221:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 10:09:27.530865 systemd[1]: Successfully loaded SELinux policy in 41.383ms.
Sep 12 10:09:27.530880 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.313ms.
Sep 12 10:09:27.530897 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 10:09:27.530911 systemd[1]: Detected virtualization kvm.
Sep 12 10:09:27.530923 systemd[1]: Detected architecture x86-64.
Sep 12 10:09:27.530941 systemd[1]: Detected first boot.
Sep 12 10:09:27.530954 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 10:09:27.530967 zram_generator::config[1054]: No configuration found.
Sep 12 10:09:27.530980 kernel: Guest personality initialized and is inactive
Sep 12 10:09:27.530993 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 10:09:27.531007 kernel: Initialized host personality
Sep 12 10:09:27.531019 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 10:09:27.531031 systemd[1]: Populated /etc with preset unit settings.
Sep 12 10:09:27.531045 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 10:09:27.531058 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 10:09:27.531070 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 10:09:27.531092 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:09:27.531105 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 10:09:27.531118 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 10:09:27.531133 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 10:09:27.531147 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 10:09:27.531160 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 10:09:27.531173 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 10:09:27.531186 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 10:09:27.531198 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 10:09:27.531211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 10:09:27.531224 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 10:09:27.531237 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 10:09:27.531252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 10:09:27.531265 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 10:09:27.531284 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 10:09:27.531297 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 10:09:27.531310 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 10:09:27.531323 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 10:09:27.531335 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 10:09:27.531351 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 10:09:27.531363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 10:09:27.531384 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 10:09:27.531398 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 10:09:27.531410 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 10:09:27.531423 systemd[1]: Reached target swap.target - Swaps.
Sep 12 10:09:27.531438 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 10:09:27.531454 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 10:09:27.531470 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 10:09:27.531491 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 10:09:27.531508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 10:09:27.531521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 10:09:27.531533 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 10:09:27.531545 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 10:09:27.531558 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 10:09:27.531571 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 10:09:27.531593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:27.531607 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 10:09:27.531686 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 10:09:27.531704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 10:09:27.531718 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 10:09:27.531730 systemd[1]: Reached target machines.target - Containers.
Sep 12 10:09:27.531743 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 10:09:27.531757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:09:27.531807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 10:09:27.531833 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 10:09:27.531846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:09:27.531863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:09:27.531876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:09:27.531889 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 10:09:27.531902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:09:27.531915 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 10:09:27.531928 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 10:09:27.531941 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 10:09:27.531954 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 10:09:27.531970 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 10:09:27.531983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:09:27.531996 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 10:09:27.532009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 10:09:27.532022 kernel: loop: module loaded
Sep 12 10:09:27.532036 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 10:09:27.532048 kernel: fuse: init (API version 7.39)
Sep 12 10:09:27.532060 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 10:09:27.532095 systemd-journald[1118]: Collecting audit messages is disabled.
Sep 12 10:09:27.532123 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 10:09:27.532137 systemd-journald[1118]: Journal started
Sep 12 10:09:27.532169 systemd-journald[1118]: Runtime Journal (/run/log/journal/fc74856014574e9591dd42ae9839f012) is 6M, max 48.4M, 42.3M free.
Sep 12 10:09:27.119918 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 10:09:27.136892 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 10:09:27.137615 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 10:09:27.557696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 10:09:27.560258 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 10:09:27.560304 systemd[1]: Stopped verity-setup.service.
Sep 12 10:09:27.563678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:27.569480 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 10:09:27.571653 kernel: ACPI: bus type drm_connector registered
Sep 12 10:09:27.571657 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 10:09:27.580530 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 10:09:27.582050 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 10:09:27.583337 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 10:09:27.584788 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 10:09:27.586224 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 10:09:27.587842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 10:09:27.591296 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 10:09:27.591677 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 10:09:27.593486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:09:27.593872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:09:27.595582 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:09:27.595983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:09:27.597800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:09:27.598147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:09:27.600053 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 10:09:27.600354 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 10:09:27.602546 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:09:27.602927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:09:27.604991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 10:09:27.606957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 10:09:27.608936 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 10:09:27.618695 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 10:09:27.631182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 10:09:27.643735 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 10:09:27.658000 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 10:09:27.659382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 10:09:27.659480 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 10:09:27.661754 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 10:09:27.664587 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 10:09:27.672864 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 10:09:27.674401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:09:27.697867 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 10:09:27.705939 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 10:09:27.707744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:09:27.712818 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 10:09:27.714151 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:09:27.724558 systemd-journald[1118]: Time spent on flushing to /var/log/journal/fc74856014574e9591dd42ae9839f012 is 13.542ms for 970 entries.
Sep 12 10:09:27.724558 systemd-journald[1118]: System Journal (/var/log/journal/fc74856014574e9591dd42ae9839f012) is 8M, max 195.6M, 187.6M free.
Sep 12 10:09:28.058031 systemd-journald[1118]: Received client request to flush runtime journal.
Sep 12 10:09:28.058104 kernel: loop0: detected capacity change from 0 to 229808
Sep 12 10:09:28.058134 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 10:09:28.058153 kernel: loop1: detected capacity change from 0 to 138176
Sep 12 10:09:28.058173 kernel: loop2: detected capacity change from 0 to 147912
Sep 12 10:09:28.058197 kernel: loop3: detected capacity change from 0 to 229808
Sep 12 10:09:28.058217 kernel: loop4: detected capacity change from 0 to 138176
Sep 12 10:09:27.726315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 10:09:27.734728 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 10:09:27.737928 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 10:09:27.741887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 10:09:27.743527 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 10:09:27.745031 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 10:09:27.746727 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 10:09:27.765908 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 10:09:27.784558 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 10:09:27.826881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 10:09:27.835647 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Sep 12 10:09:27.835662 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Sep 12 10:09:27.842776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 10:09:27.905717 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 10:09:27.913838 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 10:09:27.963233 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 10:09:27.976925 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 10:09:27.994673 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Sep 12 10:09:27.994687 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Sep 12 10:09:28.000324 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 10:09:28.025262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 10:09:28.026962 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 10:09:28.035836 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 10:09:28.062204 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 10:09:28.084948 kernel: loop5: detected capacity change from 0 to 147912
Sep 12 10:09:28.092517 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 10:09:28.104242 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 10:09:28.105034 (sd-merge)[1194]: Merged extensions into '/usr'.
Sep 12 10:09:28.162957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 10:09:28.169989 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 10:09:28.170014 systemd[1]: Reloading...
Sep 12 10:09:28.293664 zram_generator::config[1226]: No configuration found.
Sep 12 10:09:28.453937 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 10:09:28.549275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:09:28.628082 systemd[1]: Reloading finished in 457 ms.
Sep 12 10:09:28.653970 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 10:09:28.656038 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 10:09:28.676104 systemd[1]: Starting ensure-sysext.service...
Sep 12 10:09:28.678915 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 10:09:28.696453 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Sep 12 10:09:28.696472 systemd[1]: Reloading...
Sep 12 10:09:28.719073 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 10:09:28.719456 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 10:09:28.720819 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 10:09:28.721319 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 12 10:09:28.721426 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 12 10:09:28.727176 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:09:28.727191 systemd-tmpfiles[1268]: Skipping /boot
Sep 12 10:09:28.767422 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 10:09:28.767581 systemd-tmpfiles[1268]: Skipping /boot
Sep 12 10:09:28.813695 zram_generator::config[1303]: No configuration found.
Sep 12 10:09:28.956660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:09:29.041260 systemd[1]: Reloading finished in 344 ms.
Sep 12 10:09:29.060452 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 10:09:29.077132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 10:09:29.092960 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:09:29.096281 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 10:09:29.099210 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 10:09:29.107432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 10:09:29.110807 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 10:09:29.113412 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 10:09:29.118569 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:29.118826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:09:29.120307 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:09:29.122968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:09:29.130499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:09:29.131808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:09:29.133792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:09:29.136751 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 10:09:29.137940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:29.139379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:09:29.139663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:09:29.141775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:09:29.142363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:09:29.148791 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 10:09:29.154558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:09:29.155386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:09:29.166518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:29.168741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:09:29.168742 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
Sep 12 10:09:29.177375 augenrules[1370]: No rules
Sep 12 10:09:29.178447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:09:29.182277 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:09:29.185912 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:09:29.187695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:09:29.187864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:09:29.190162 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 10:09:29.191590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:29.195154 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:09:29.195686 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:09:29.198324 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 10:09:29.201182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:09:29.201656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:09:29.203563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:09:29.204000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:09:29.206049 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:09:29.206317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:09:29.212739 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 10:09:29.216486 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 10:09:29.227648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 10:09:29.238649 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 10:09:29.252721 systemd[1]: Finished ensure-sysext.service.
Sep 12 10:09:29.259750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:29.268881 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:09:29.270345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 10:09:29.272814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 10:09:29.278563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 10:09:29.282885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 10:09:29.286848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 10:09:29.288240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 10:09:29.288391 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 10:09:29.293327 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 10:09:29.308875 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 10:09:29.310744 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 10:09:29.310796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 10:09:29.329849 systemd-resolved[1341]: Positive Trust Anchors:
Sep 12 10:09:29.330308 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 10:09:29.330946 augenrules[1402]: /sbin/augenrules: No change
Sep 12 10:09:29.330354 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 10:09:29.358950 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1389)
Sep 12 10:09:29.357786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 10:09:29.358127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 10:09:29.359983 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 10:09:29.366073 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Sep 12 10:09:29.368799 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 10:09:29.370266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 10:09:29.389538 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 10:09:29.390955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 10:09:29.394276 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 10:09:29.396220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 10:09:29.402652 augenrules[1439]: No rules
Sep 12 10:09:29.409731 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:09:29.410374 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:09:29.412257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 10:09:29.412616 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 10:09:29.505663 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 10:09:29.512083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 10:09:29.512299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 10:09:29.520209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 10:09:29.526664 kernel: ACPI: button: Power Button [PWRF]
Sep 12 10:09:29.530818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 10:09:29.544049 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 10:09:29.553300 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 10:09:29.553681 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 10:09:29.556436 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 10:09:29.561436 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 10:09:29.565471 systemd-networkd[1421]: lo: Link UP
Sep 12 10:09:29.565489 systemd-networkd[1421]: lo: Gained carrier
Sep 12 10:09:29.573501 systemd-networkd[1421]: Enumeration completed
Sep 12 10:09:29.574047 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:09:29.574054 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 10:09:29.574743 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 10:09:29.575981 systemd-networkd[1421]: eth0: Link UP
Sep 12 10:09:29.575987 systemd-networkd[1421]: eth0: Gained carrier
Sep 12 10:09:29.576004 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 10:09:29.576337 systemd[1]: Reached target network.target - Network.
Sep 12 10:09:29.585821 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 10:09:29.588986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 10:09:29.596691 systemd-networkd[1421]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 10:09:29.603193 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 10:09:30.137624 systemd-resolved[1341]: Clock change detected. Flushing caches.
Sep 12 10:09:30.137723 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 10:09:30.137785 systemd-timesyncd[1422]: Initial clock synchronization to Fri 2025-09-12 10:09:30.137576 UTC.
Sep 12 10:09:30.138931 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 10:09:30.207906 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 10:09:30.318017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 10:09:30.331656 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 10:09:30.365800 kernel: kvm_amd: TSC scaling supported
Sep 12 10:09:30.365902 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 10:09:30.365927 kernel: kvm_amd: Nested Paging enabled
Sep 12 10:09:30.366886 kernel: kvm_amd: LBR virtualization supported
Sep 12 10:09:30.366921 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 10:09:30.368068 kernel: kvm_amd: Virtual GIF supported
Sep 12 10:09:30.401559 kernel: EDAC MC: Ver: 3.0.0
Sep 12 10:09:30.445417 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 10:09:30.453955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 10:09:30.469796 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 10:09:30.480672 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 10:09:30.514280 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 10:09:30.516181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 10:09:30.517539 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 10:09:30.518893 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 10:09:30.520337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 10:09:30.522042 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 10:09:30.523509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 10:09:30.525418 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 10:09:30.526852 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 10:09:30.526904 systemd[1]: Reached target paths.target - Path Units.
Sep 12 10:09:30.527890 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 10:09:30.530672 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 10:09:30.534450 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 10:09:30.540291 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 10:09:30.541968 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 10:09:30.543385 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 10:09:30.549610 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 10:09:30.557865 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 10:09:30.561052 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 10:09:30.563713 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 10:09:30.565035 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 10:09:30.566084 systemd[1]: Reached target basic.target - Basic System.
Sep 12 10:09:30.567160 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 10:09:30.567208 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 10:09:30.568692 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 10:09:30.573953 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 10:09:30.578666 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 10:09:30.579438 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 10:09:30.581830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 10:09:30.583187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 10:09:30.587754 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 10:09:30.592380 jq[1478]: false
Sep 12 10:09:30.594087 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 10:09:30.603760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 10:09:30.607286 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 10:09:30.614599 extend-filesystems[1479]: Found loop3
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found loop4
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found loop5
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found sr0
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda1
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda2
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda3
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found usr
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda4
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda6
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda7
Sep 12 10:09:30.619719 extend-filesystems[1479]: Found vda9
Sep 12 10:09:30.619719 extend-filesystems[1479]: Checking size of /dev/vda9
Sep 12 10:09:30.616698 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 10:09:30.624520 dbus-daemon[1477]: [system] SELinux support is enabled
Sep 12 10:09:30.621293 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 10:09:30.622544 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 10:09:30.624759 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 10:09:30.633366 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 10:09:30.657239 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 10:09:30.659899 jq[1496]: true
Sep 12 10:09:30.663842 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 10:09:30.666930 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 10:09:30.667216 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 10:09:30.670651 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 10:09:30.670954 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 10:09:30.676031 extend-filesystems[1479]: Resized partition /dev/vda9
Sep 12 10:09:30.677352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 10:09:30.677644 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 10:09:30.680603 update_engine[1494]: I20250912 10:09:30.680529 1494 main.cc:92] Flatcar Update Engine starting
Sep 12 10:09:30.682218 update_engine[1494]: I20250912 10:09:30.682190 1494 update_check_scheduler.cc:74] Next update check in 2m50s
Sep 12 10:09:30.683872 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024)
Sep 12 10:09:30.693883 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1391)
Sep 12 10:09:30.699289 jq[1502]: true
Sep 12 10:09:30.707995 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 10:09:30.712339 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 10:09:30.722294 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 10:09:30.722326 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 10:09:30.723600 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 10:09:30.723619 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 10:09:30.734766 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 10:09:30.760692 tar[1501]: linux-amd64/LICENSE
Sep 12 10:09:30.760692 tar[1501]: linux-amd64/helm
Sep 12 10:09:30.810531 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 10:09:30.887306 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 10:09:30.918316 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 10:09:30.929854 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 10:09:30.938789 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 10:09:30.939224 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 10:09:30.943296 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 10:09:31.060219 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 10:09:31.101030 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 10:09:31.102576 systemd-logind[1490]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 10:09:31.102607 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 10:09:31.103881 systemd-logind[1490]: New seat seat0.
Sep 12 10:09:31.116425 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 10:09:31.117948 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 10:09:31.119228 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 10:09:31.198796 systemd-networkd[1421]: eth0: Gained IPv6LL
Sep 12 10:09:31.203986 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 10:09:31.206121 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 10:09:31.237277 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 10:09:31.243708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:09:31.250401 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 10:09:31.300518 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 10:09:31.325689 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 12 10:09:31.326189 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 12 10:09:31.330797 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 10:09:31.517532 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 10:09:32.095176 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 10:09:32.095176 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 10:09:32.095176 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 10:09:32.102169 extend-filesystems[1479]: Resized filesystem in /dev/vda9
Sep 12 10:09:32.097116 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 10:09:32.097427 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 10:09:32.112458 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 10:09:32.232310 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 10:09:32.233757 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 10:09:32.237485 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 10:09:32.345572 tar[1501]: linux-amd64/README.md
Sep 12 10:09:32.363432 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 10:09:32.374882 containerd[1504]: time="2025-09-12T10:09:32.374763180Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 12 10:09:32.398516 containerd[1504]: time="2025-09-12T10:09:32.398442609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401008 containerd[1504]: time="2025-09-12T10:09:32.400938258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401008 containerd[1504]: time="2025-09-12T10:09:32.400970268Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 10:09:32.401008 containerd[1504]: time="2025-09-12T10:09:32.400986178Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 10:09:32.401229 containerd[1504]: time="2025-09-12T10:09:32.401177377Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 10:09:32.401229 containerd[1504]: time="2025-09-12T10:09:32.401194489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401333 containerd[1504]: time="2025-09-12T10:09:32.401264330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401333 containerd[1504]: time="2025-09-12T10:09:32.401277484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401592 containerd[1504]: time="2025-09-12T10:09:32.401568740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401592 containerd[1504]: time="2025-09-12T10:09:32.401587025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401652 containerd[1504]: time="2025-09-12T10:09:32.401601091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401652 containerd[1504]: time="2025-09-12T10:09:32.401610900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.401731 containerd[1504]: time="2025-09-12T10:09:32.401711628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.402011 containerd[1504]: time="2025-09-12T10:09:32.401978068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 10:09:32.402176 containerd[1504]: time="2025-09-12T10:09:32.402147776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 10:09:32.402176 containerd[1504]: time="2025-09-12T10:09:32.402166271Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 10:09:32.402305 containerd[1504]: time="2025-09-12T10:09:32.402271057Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 10:09:32.402360 containerd[1504]: time="2025-09-12T10:09:32.402328315Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 10:09:32.559273 containerd[1504]: time="2025-09-12T10:09:32.559190120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 10:09:32.559455 containerd[1504]: time="2025-09-12T10:09:32.559306970Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 10:09:32.559455 containerd[1504]: time="2025-09-12T10:09:32.559336635Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 10:09:32.559455 containerd[1504]: time="2025-09-12T10:09:32.559372923Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 10:09:32.559455 containerd[1504]: time="2025-09-12T10:09:32.559396117Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 10:09:32.559713 containerd[1504]: time="2025-09-12T10:09:32.559678957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 10:09:32.560057 containerd[1504]: time="2025-09-12T10:09:32.560009467Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 10:09:32.560232 containerd[1504]: time="2025-09-12T10:09:32.560164738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 10:09:32.560232 containerd[1504]: time="2025-09-12T10:09:32.560194003Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 10:09:32.560232 containerd[1504]: time="2025-09-12T10:09:32.560211335Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 10:09:32.560232 containerd[1504]: time="2025-09-12T10:09:32.560230191Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560246872Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560261559Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560277479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560294371Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560312014Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560324197Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560337963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 10:09:32.560371 containerd[1504]: time="2025-09-12T10:09:32.560372077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560388858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560405810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560420197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560455132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560472936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560486151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560522919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560544550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560572312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560590045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560606967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..."
type=io.containerd.grpc.v1 Sep 12 10:09:32.560619 containerd[1504]: time="2025-09-12T10:09:32.560624851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560643976Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560677279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560694782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560706644Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560772377Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560792234Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560810358Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560827210Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560840174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560860092Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560878576Z" level=info msg="NRI interface is disabled by configuration." Sep 12 10:09:32.560947 containerd[1504]: time="2025-09-12T10:09:32.560893234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 10:09:32.561429 containerd[1504]: time="2025-09-12T10:09:32.561331486Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 10:09:32.561429 containerd[1504]: time="2025-09-12T10:09:32.561415313Z" level=info msg="Connect containerd service" Sep 12 10:09:32.561692 containerd[1504]: time="2025-09-12T10:09:32.561470176Z" level=info msg="using legacy CRI server" Sep 12 10:09:32.561692 containerd[1504]: time="2025-09-12T10:09:32.561481256Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 10:09:32.561692 containerd[1504]: time="2025-09-12T10:09:32.561669279Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 10:09:32.562593 containerd[1504]: time="2025-09-12T10:09:32.562543718Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Sep 12 10:09:32.562968 containerd[1504]: time="2025-09-12T10:09:32.562816791Z" level=info msg="Start subscribing containerd event" Sep 12 10:09:32.563045 containerd[1504]: time="2025-09-12T10:09:32.563010945Z" level=info msg="Start recovering state" Sep 12 10:09:32.563174 containerd[1504]: time="2025-09-12T10:09:32.563149194Z" level=info msg="Start event monitor" Sep 12 10:09:32.563208 containerd[1504]: time="2025-09-12T10:09:32.563173920Z" level=info msg="Start snapshots syncer" Sep 12 10:09:32.563208 containerd[1504]: time="2025-09-12T10:09:32.563190371Z" level=info msg="Start cni network conf syncer for default" Sep 12 10:09:32.563208 containerd[1504]: time="2025-09-12T10:09:32.563200791Z" level=info msg="Start streaming server" Sep 12 10:09:32.563482 containerd[1504]: time="2025-09-12T10:09:32.562941785Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 10:09:32.563619 containerd[1504]: time="2025-09-12T10:09:32.563591994Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 10:09:32.564545 containerd[1504]: time="2025-09-12T10:09:32.563691120Z" level=info msg="containerd successfully booted in 0.190968s" Sep 12 10:09:32.563803 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 10:09:32.743225 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 10:09:32.766870 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:49322.service - OpenSSH per-connection server daemon (10.0.0.1:49322). Sep 12 10:09:32.826706 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 49322 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:09:32.828365 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:09:32.836066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 12 10:09:32.849759 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 10:09:32.858671 systemd-logind[1490]: New session 1 of user core.
Sep 12 10:09:32.870118 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 10:09:32.882905 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 10:09:32.887886 (systemd)[1591]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 10:09:32.890734 systemd-logind[1490]: New session c1 of user core.
Sep 12 10:09:33.060991 systemd[1591]: Queued start job for default target default.target.
Sep 12 10:09:33.120968 systemd[1591]: Created slice app.slice - User Application Slice.
Sep 12 10:09:33.121007 systemd[1591]: Reached target paths.target - Paths.
Sep 12 10:09:33.121075 systemd[1591]: Reached target timers.target - Timers.
Sep 12 10:09:33.123069 systemd[1591]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 10:09:33.139411 systemd[1591]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 10:09:33.139649 systemd[1591]: Reached target sockets.target - Sockets.
Sep 12 10:09:33.139700 systemd[1591]: Reached target basic.target - Basic System.
Sep 12 10:09:33.139748 systemd[1591]: Reached target default.target - Main User Target.
Sep 12 10:09:33.139789 systemd[1591]: Startup finished in 239ms.
Sep 12 10:09:33.140517 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 10:09:33.143484 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 10:09:33.210828 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:49326.service - OpenSSH per-connection server daemon (10.0.0.1:49326).
Sep 12 10:09:33.312901 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 49326 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:33.315038 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:33.319911 systemd-logind[1490]: New session 2 of user core.
Sep 12 10:09:33.350787 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 10:09:33.464977 sshd[1604]: Connection closed by 10.0.0.1 port 49326
Sep 12 10:09:33.465716 sshd-session[1602]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:33.479240 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:49326.service: Deactivated successfully.
Sep 12 10:09:33.481110 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 10:09:33.481899 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
Sep 12 10:09:33.500862 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:49340.service - OpenSSH per-connection server daemon (10.0.0.1:49340).
Sep 12 10:09:33.503396 systemd-logind[1490]: Removed session 2.
Sep 12 10:09:33.548469 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 49340 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:33.550160 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:33.556386 systemd-logind[1490]: New session 3 of user core.
Sep 12 10:09:33.611806 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 10:09:33.669646 sshd[1612]: Connection closed by 10.0.0.1 port 49340
Sep 12 10:09:33.670259 sshd-session[1609]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:33.675858 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:49340.service: Deactivated successfully.
Sep 12 10:09:33.678685 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 10:09:33.679393 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
Sep 12 10:09:33.680259 systemd-logind[1490]: Removed session 3.
Sep 12 10:09:33.846912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:09:33.856175 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 10:09:33.857788 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:09:33.858576 systemd[1]: Startup finished in 1.071s (kernel) + 7.434s (initrd) + 7.143s (userspace) = 15.650s.
Sep 12 10:09:34.766693 kubelet[1622]: E0912 10:09:34.766577 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:09:34.771616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:09:34.771879 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:09:34.772369 systemd[1]: kubelet.service: Consumed 2.307s CPU time, 270M memory peak.
Sep 12 10:09:43.692918 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:33492.service - OpenSSH per-connection server daemon (10.0.0.1:33492).
Sep 12 10:09:43.731266 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 33492 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:43.733289 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:43.738933 systemd-logind[1490]: New session 4 of user core.
Sep 12 10:09:43.748953 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 10:09:43.809454 sshd[1637]: Connection closed by 10.0.0.1 port 33492
Sep 12 10:09:43.809950 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:43.836637 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:33492.service: Deactivated successfully.
Sep 12 10:09:43.839673 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 10:09:43.842414 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit.
Sep 12 10:09:43.853254 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:33496.service - OpenSSH per-connection server daemon (10.0.0.1:33496).
Sep 12 10:09:43.855179 systemd-logind[1490]: Removed session 4.
Sep 12 10:09:43.898874 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 33496 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:43.901307 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:43.906818 systemd-logind[1490]: New session 5 of user core.
Sep 12 10:09:43.929832 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 10:09:43.983948 sshd[1645]: Connection closed by 10.0.0.1 port 33496
Sep 12 10:09:43.984403 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:44.000642 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:33496.service: Deactivated successfully.
Sep 12 10:09:44.003055 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 10:09:44.004929 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit.
Sep 12 10:09:44.016892 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:33502.service - OpenSSH per-connection server daemon (10.0.0.1:33502).
Sep 12 10:09:44.018217 systemd-logind[1490]: Removed session 5.
Sep 12 10:09:44.063333 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 33502 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:44.065495 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:44.071624 systemd-logind[1490]: New session 6 of user core.
Sep 12 10:09:44.082709 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 10:09:44.141533 sshd[1653]: Connection closed by 10.0.0.1 port 33502
Sep 12 10:09:44.142049 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:44.156011 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:33502.service: Deactivated successfully.
Sep 12 10:09:44.158269 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 10:09:44.159862 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Sep 12 10:09:44.170776 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:33510.service - OpenSSH per-connection server daemon (10.0.0.1:33510).
Sep 12 10:09:44.172164 systemd-logind[1490]: Removed session 6.
Sep 12 10:09:44.215458 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 33510 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:44.217153 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:44.222899 systemd-logind[1490]: New session 7 of user core.
Sep 12 10:09:44.235877 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 10:09:44.299223 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 10:09:44.299711 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:09:44.323468 sudo[1662]: pam_unix(sudo:session): session closed for user root
Sep 12 10:09:44.325522 sshd[1661]: Connection closed by 10.0.0.1 port 33510
Sep 12 10:09:44.326017 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:44.342938 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:33510.service: Deactivated successfully.
Sep 12 10:09:44.346259 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 10:09:44.349192 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit.
Sep 12 10:09:44.367186 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:33518.service - OpenSSH per-connection server daemon (10.0.0.1:33518).
Sep 12 10:09:44.368767 systemd-logind[1490]: Removed session 7.
Sep 12 10:09:44.411180 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 33518 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:44.413901 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:44.419703 systemd-logind[1490]: New session 8 of user core.
Sep 12 10:09:44.429646 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 10:09:44.488018 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 10:09:44.488368 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:09:44.493550 sudo[1673]: pam_unix(sudo:session): session closed for user root
Sep 12 10:09:44.503329 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 10:09:44.503879 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:09:44.535096 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 10:09:44.572527 augenrules[1695]: No rules
Sep 12 10:09:44.574975 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 10:09:44.575355 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 10:09:44.576879 sudo[1672]: pam_unix(sudo:session): session closed for user root
Sep 12 10:09:44.578692 sshd[1671]: Connection closed by 10.0.0.1 port 33518
Sep 12 10:09:44.579097 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Sep 12 10:09:44.589197 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:33518.service: Deactivated successfully.
Sep 12 10:09:44.591740 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 10:09:44.593441 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
Sep 12 10:09:44.598891 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:33528.service - OpenSSH per-connection server daemon (10.0.0.1:33528).
Sep 12 10:09:44.600007 systemd-logind[1490]: Removed session 8.
Sep 12 10:09:44.646684 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 33528 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY
Sep 12 10:09:44.648720 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 10:09:44.653915 systemd-logind[1490]: New session 9 of user core.
Sep 12 10:09:44.668723 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 10:09:44.727065 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 10:09:44.727543 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 10:09:45.022346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 10:09:45.035950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:09:45.387872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:09:45.393408 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:09:45.453347 kubelet[1733]: E0912 10:09:45.453228 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:09:45.461543 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:09:45.461783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:09:45.462268 systemd[1]: kubelet.service: Consumed 367ms CPU time, 110.1M memory peak.
Sep 12 10:09:45.514785 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 10:09:45.514947 (dockerd)[1743]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 10:09:46.080338 dockerd[1743]: time="2025-09-12T10:09:46.080232602Z" level=info msg="Starting up"
Sep 12 10:09:46.580428 dockerd[1743]: time="2025-09-12T10:09:46.580239655Z" level=info msg="Loading containers: start."
Sep 12 10:09:46.776527 kernel: Initializing XFRM netlink socket
Sep 12 10:09:46.878126 systemd-networkd[1421]: docker0: Link UP
Sep 12 10:09:46.917803 dockerd[1743]: time="2025-09-12T10:09:46.917743604Z" level=info msg="Loading containers: done."
Sep 12 10:09:46.942607 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2157180401-merged.mount: Deactivated successfully.
Sep 12 10:09:46.949468 dockerd[1743]: time="2025-09-12T10:09:46.949404108Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 10:09:46.949624 dockerd[1743]: time="2025-09-12T10:09:46.949594886Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 12 10:09:46.949805 dockerd[1743]: time="2025-09-12T10:09:46.949781646Z" level=info msg="Daemon has completed initialization"
Sep 12 10:09:46.995763 dockerd[1743]: time="2025-09-12T10:09:46.995074350Z" level=info msg="API listen on /run/docker.sock"
Sep 12 10:09:46.995347 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 10:09:48.228590 containerd[1504]: time="2025-09-12T10:09:48.228514650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 10:09:48.950434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542106983.mount: Deactivated successfully.
Sep 12 10:09:50.379858 containerd[1504]: time="2025-09-12T10:09:50.379791163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:50.380579 containerd[1504]: time="2025-09-12T10:09:50.380485284Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 10:09:50.381951 containerd[1504]: time="2025-09-12T10:09:50.381903243Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:50.385431 containerd[1504]: time="2025-09-12T10:09:50.385349955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:50.386599 containerd[1504]: time="2025-09-12T10:09:50.386541870Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.157954324s" Sep 12 10:09:50.386599 containerd[1504]: time="2025-09-12T10:09:50.386592605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 10:09:50.387593 containerd[1504]: time="2025-09-12T10:09:50.387393277Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 10:09:51.891705 containerd[1504]: time="2025-09-12T10:09:51.891564057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:51.892601 containerd[1504]: time="2025-09-12T10:09:51.892531180Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 10:09:51.894240 containerd[1504]: time="2025-09-12T10:09:51.894035340Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:51.897439 containerd[1504]: time="2025-09-12T10:09:51.897388107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:51.898897 containerd[1504]: time="2025-09-12T10:09:51.898831673Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.511407929s" Sep 12 10:09:51.898897 containerd[1504]: time="2025-09-12T10:09:51.898893189Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 10:09:51.899815 containerd[1504]: time="2025-09-12T10:09:51.899748432Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 10:09:53.379258 containerd[1504]: time="2025-09-12T10:09:53.379175317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:53.379912 containerd[1504]: time="2025-09-12T10:09:53.379872294Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 10:09:53.381088 containerd[1504]: time="2025-09-12T10:09:53.381050734Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:53.384240 containerd[1504]: time="2025-09-12T10:09:53.384160765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:09:53.385406 containerd[1504]: time="2025-09-12T10:09:53.385369742Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.485583899s" Sep 12 10:09:53.385406 containerd[1504]: time="2025-09-12T10:09:53.385404858Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 10:09:53.386056 containerd[1504]: time="2025-09-12T10:09:53.386011896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 10:09:54.470124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968898279.mount: Deactivated successfully. 
Sep 12 10:09:55.112836 containerd[1504]: time="2025-09-12T10:09:55.112750620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:55.113595 containerd[1504]: time="2025-09-12T10:09:55.113494375Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 12 10:09:55.114770 containerd[1504]: time="2025-09-12T10:09:55.114731625Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:55.116878 containerd[1504]: time="2025-09-12T10:09:55.116844667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:55.117700 containerd[1504]: time="2025-09-12T10:09:55.117668041Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.731621931s"
Sep 12 10:09:55.117700 containerd[1504]: time="2025-09-12T10:09:55.117704689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 12 10:09:55.118415 containerd[1504]: time="2025-09-12T10:09:55.118376729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 10:09:55.712240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 10:09:55.722768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:09:55.912477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:09:55.918460 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 10:09:55.968963 kubelet[2023]: E0912 10:09:55.968718 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 10:09:55.974188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 10:09:55.974472 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 10:09:55.974963 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.3M memory peak.
Sep 12 10:09:56.028333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339512017.mount: Deactivated successfully.
Sep 12 10:09:57.151439 containerd[1504]: time="2025-09-12T10:09:57.151383988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:57.152186 containerd[1504]: time="2025-09-12T10:09:57.152143762Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 12 10:09:57.153358 containerd[1504]: time="2025-09-12T10:09:57.153297004Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:57.156785 containerd[1504]: time="2025-09-12T10:09:57.156719582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:57.158141 containerd[1504]: time="2025-09-12T10:09:57.158099679Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.039680731s"
Sep 12 10:09:57.158188 containerd[1504]: time="2025-09-12T10:09:57.158140756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 12 10:09:57.158703 containerd[1504]: time="2025-09-12T10:09:57.158675890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 10:09:57.711040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154556423.mount: Deactivated successfully.
Sep 12 10:09:57.716237 containerd[1504]: time="2025-09-12T10:09:57.716180270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:57.717066 containerd[1504]: time="2025-09-12T10:09:57.716998475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 10:09:57.718392 containerd[1504]: time="2025-09-12T10:09:57.718344298Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:57.721514 containerd[1504]: time="2025-09-12T10:09:57.721454560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:09:57.722536 containerd[1504]: time="2025-09-12T10:09:57.722476626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 563.764659ms"
Sep 12 10:09:57.722620 containerd[1504]: time="2025-09-12T10:09:57.722535096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 10:09:57.723163 containerd[1504]: time="2025-09-12T10:09:57.723085137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 10:09:58.293964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747278549.mount: Deactivated successfully.
Sep 12 10:10:00.957918 containerd[1504]: time="2025-09-12T10:10:00.957799706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:10:00.958596 containerd[1504]: time="2025-09-12T10:10:00.958518133Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 12 10:10:00.959749 containerd[1504]: time="2025-09-12T10:10:00.959715548Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:10:00.964368 containerd[1504]: time="2025-09-12T10:10:00.964313169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:10:00.965874 containerd[1504]: time="2025-09-12T10:10:00.965817359Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.242695794s"
Sep 12 10:10:00.965874 containerd[1504]: time="2025-09-12T10:10:00.965864007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 12 10:10:03.996357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:10:03.996615 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.3M memory peak.
Sep 12 10:10:04.009795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:10:04.040790 systemd[1]: Reload requested from client PID 2174 ('systemctl') (unit session-9.scope)...
Sep 12 10:10:04.040811 systemd[1]: Reloading...
Sep 12 10:10:04.160546 zram_generator::config[2221]: No configuration found.
Sep 12 10:10:04.383576 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 10:10:04.490582 systemd[1]: Reloading finished in 449 ms.
Sep 12 10:10:04.539457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:10:04.543445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:10:04.544274 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 10:10:04.544627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:10:04.544676 systemd[1]: kubelet.service: Consumed 181ms CPU time, 98.2M memory peak.
Sep 12 10:10:04.546313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 10:10:04.717948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 10:10:04.722428 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 10:10:04.771700 kubelet[2268]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 10:10:04.771700 kubelet[2268]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 10:10:04.771700 kubelet[2268]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 10:10:04.772181 kubelet[2268]: I0912 10:10:04.771714 2268 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 10:10:05.400976 kubelet[2268]: I0912 10:10:05.400207 2268 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 10:10:05.400976 kubelet[2268]: I0912 10:10:05.400247 2268 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 10:10:05.400976 kubelet[2268]: I0912 10:10:05.400618 2268 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 10:10:05.518018 kubelet[2268]: I0912 10:10:05.517912 2268 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 10:10:05.519401 kubelet[2268]: E0912 10:10:05.519361 2268 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 10:10:05.524171 kubelet[2268]: E0912 10:10:05.524130 2268 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 10:10:05.524171 kubelet[2268]: I0912 10:10:05.524164 2268 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 10:10:05.530827 kubelet[2268]: I0912 10:10:05.530795 2268 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 10:10:05.531129 kubelet[2268]: I0912 10:10:05.531095 2268 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 10:10:05.531322 kubelet[2268]: I0912 10:10:05.531120 2268 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 10:10:05.531574 kubelet[2268]: I0912 10:10:05.531338 2268 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 10:10:05.531574 kubelet[2268]: I0912 10:10:05.531352 2268 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 10:10:05.531574 kubelet[2268]: I0912 10:10:05.531543 2268 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 10:10:05.534585 kubelet[2268]: I0912 10:10:05.534546 2268 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 10:10:05.534753 kubelet[2268]: I0912 10:10:05.534637 2268 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 10:10:05.534753 kubelet[2268]: I0912 10:10:05.534681 2268 kubelet.go:386] "Adding apiserver pod source"
Sep 12 10:10:05.534753 kubelet[2268]: I0912 10:10:05.534705 2268 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 10:10:05.543921 kubelet[2268]: E0912 10:10:05.543865 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 10:10:05.544087 kubelet[2268]: E0912 10:10:05.543986 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 10:10:05.544889 kubelet[2268]: I0912 10:10:05.544162 2268 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 12 10:10:05.544889 kubelet[2268]: I0912 10:10:05.544852 2268 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 10:10:05.545544 kubelet[2268]: W0912 10:10:05.545515 2268 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 10:10:05.549672 kubelet[2268]: I0912 10:10:05.549634 2268 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 10:10:05.549762 kubelet[2268]: I0912 10:10:05.549705 2268 server.go:1289] "Started kubelet"
Sep 12 10:10:05.555288 kubelet[2268]: I0912 10:10:05.554771 2268 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 10:10:05.555288 kubelet[2268]: I0912 10:10:05.554979 2268 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 10:10:05.555288 kubelet[2268]: I0912 10:10:05.555133 2268 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 10:10:05.555288 kubelet[2268]: I0912 10:10:05.555253 2268 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 10:10:05.557787 kubelet[2268]: I0912 10:10:05.557752 2268 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 10:10:05.558968 kubelet[2268]: I0912 10:10:05.558936 2268 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 10:10:05.560024 kubelet[2268]: E0912 10:10:05.558941 2268 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18648131d181f350 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 10:10:05.549671248 +0000 UTC m=+0.819469191,LastTimestamp:2025-09-12 10:10:05.549671248 +0000 UTC m=+0.819469191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 10:10:05.562529 kubelet[2268]: E0912 10:10:05.562227 2268 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:10:05.562529 kubelet[2268]: I0912 10:10:05.562257 2268 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 10:10:05.562529 kubelet[2268]: E0912 10:10:05.562345 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms"
Sep 12 10:10:05.562529 kubelet[2268]: I0912 10:10:05.562400 2268 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 10:10:05.562861 kubelet[2268]: E0912 10:10:05.562839 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 10:10:05.563896 kubelet[2268]: I0912 10:10:05.563873 2268 factory.go:223] Registration of the systemd container factory successfully
Sep 12 10:10:05.564180 kubelet[2268]: I0912 10:10:05.564159 2268 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 10:10:05.564520 kubelet[2268]: E0912 10:10:05.564476 2268 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 10:10:05.565209 kubelet[2268]: I0912 10:10:05.565143 2268 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 10:10:05.565746 kubelet[2268]: I0912 10:10:05.565723 2268 factory.go:223] Registration of the containerd container factory successfully
Sep 12 10:10:05.584532 kubelet[2268]: I0912 10:10:05.584429 2268 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.586916 2268 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.586934 2268 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.586951 2268 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.586963 2268 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.586994 2268 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.587018 2268 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 10:10:05.587199 kubelet[2268]: I0912 10:10:05.587033 2268 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 10:10:05.587199 kubelet[2268]: E0912 10:10:05.587089 2268 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 10:10:05.663529 kubelet[2268]: E0912 10:10:05.663228 2268 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:10:05.687822 kubelet[2268]: E0912 10:10:05.687710 2268 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 10:10:05.763380 kubelet[2268]: E0912 10:10:05.763315 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms"
Sep 12 10:10:05.763380 kubelet[2268]: E0912 10:10:05.763333 2268 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:10:05.836008 kubelet[2268]: E0912 10:10:05.835932 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 12 10:10:05.838926 kubelet[2268]: I0912 10:10:05.838144 2268 policy_none.go:49] "None policy: Start"
Sep 12 10:10:05.838926 kubelet[2268]: I0912 10:10:05.838339 2268 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 10:10:05.838926 kubelet[2268]: I0912 10:10:05.838896 2268 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 10:10:05.852017 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 10:10:05.864244 kubelet[2268]: E0912 10:10:05.864199 2268 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 10:10:05.866807 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 10:10:05.871140 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 10:10:05.888797 kubelet[2268]: E0912 10:10:05.888709 2268 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 10:10:05.893313 kubelet[2268]: E0912 10:10:05.893090 2268 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 10:10:05.893448 kubelet[2268]: I0912 10:10:05.893418 2268 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 10:10:05.893448 kubelet[2268]: I0912 10:10:05.893438 2268 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 10:10:05.893880 kubelet[2268]: I0912 10:10:05.893858 2268 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 10:10:05.895144 kubelet[2268]: E0912 10:10:05.895120 2268 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 10:10:05.895245 kubelet[2268]: E0912 10:10:05.895179 2268 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 10:10:05.995829 kubelet[2268]: I0912 10:10:05.995671 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:10:05.996291 kubelet[2268]: E0912 10:10:05.996237 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost"
Sep 12 10:10:06.165200 kubelet[2268]: E0912 10:10:06.165103 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms"
Sep 12 10:10:06.198802 kubelet[2268]: I0912 10:10:06.198752 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:10:06.199294 kubelet[2268]: E0912 10:10:06.199250 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost"
Sep 12 10:10:06.368997 kubelet[2268]: I0912 10:10:06.368900 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f253eb53ff5026d2ee0606f6ab10996-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f253eb53ff5026d2ee0606f6ab10996\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:06.368997 kubelet[2268]: I0912 10:10:06.368965 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f253eb53ff5026d2ee0606f6ab10996-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f253eb53ff5026d2ee0606f6ab10996\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:06.368997 kubelet[2268]: I0912 10:10:06.368993 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f253eb53ff5026d2ee0606f6ab10996-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f253eb53ff5026d2ee0606f6ab10996\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:06.475397 kubelet[2268]: E0912 10:10:06.475333 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 10:10:06.600841 kubelet[2268]: I0912 10:10:06.600801 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:10:06.601220 kubelet[2268]: E0912 10:10:06.601183 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost"
Sep 12 10:10:06.753756 kubelet[2268]: E0912 10:10:06.753493 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 10:10:06.864611 systemd[1]: Created slice kubepods-burstable-pod3f253eb53ff5026d2ee0606f6ab10996.slice - libcontainer container kubepods-burstable-pod3f253eb53ff5026d2ee0606f6ab10996.slice.
Sep 12 10:10:06.871515 kubelet[2268]: I0912 10:10:06.871441 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:06.872084 kubelet[2268]: I0912 10:10:06.871522 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:06.872084 kubelet[2268]: I0912 10:10:06.871576 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 10:10:06.872084 kubelet[2268]: I0912 10:10:06.871613 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:06.872084 kubelet[2268]: I0912 10:10:06.871650 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:06.872084 kubelet[2268]: I0912 10:10:06.872034 2268 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:06.883120 kubelet[2268]: E0912 10:10:06.883036 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:10:06.883620 kubelet[2268]: E0912 10:10:06.883591 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:06.884395 containerd[1504]: time="2025-09-12T10:10:06.884339541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f253eb53ff5026d2ee0606f6ab10996,Namespace:kube-system,Attempt:0,}"
Sep 12 10:10:06.889144 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice.
Sep 12 10:10:06.891459 kubelet[2268]: E0912 10:10:06.891428 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 10:10:06.893576 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice.
Sep 12 10:10:06.895236 kubelet[2268]: E0912 10:10:06.895211 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 10:10:06.966258 kubelet[2268]: E0912 10:10:06.966188 2268 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Sep 12 10:10:06.976066 kubelet[2268]: E0912 10:10:06.976013 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 10:10:07.053558 kubelet[2268]: E0912 10:10:07.053263 2268 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 10:10:07.192895 kubelet[2268]: E0912 10:10:07.192840 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:07.193552 containerd[1504]: time="2025-09-12T10:10:07.193489583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:07.195824 kubelet[2268]: E0912 10:10:07.195790 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:07.196166 containerd[1504]: time="2025-09-12T10:10:07.196134694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:07.403385 kubelet[2268]: I0912 10:10:07.403347 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 10:10:07.403721 kubelet[2268]: E0912 10:10:07.403676 2268 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Sep 12 10:10:07.455623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964860808.mount: Deactivated successfully. Sep 12 10:10:07.461580 containerd[1504]: time="2025-09-12T10:10:07.461533163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:07.463387 containerd[1504]: time="2025-09-12T10:10:07.463350861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 10:10:07.466205 containerd[1504]: time="2025-09-12T10:10:07.466161498Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:07.467488 containerd[1504]: time="2025-09-12T10:10:07.467455594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:07.468922 containerd[1504]: time="2025-09-12T10:10:07.468882013Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 
10:10:07.469907 containerd[1504]: time="2025-09-12T10:10:07.469880984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:07.470713 containerd[1504]: time="2025-09-12T10:10:07.470674652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:10:07.471659 containerd[1504]: time="2025-09-12T10:10:07.471637505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:10:07.472532 containerd[1504]: time="2025-09-12T10:10:07.472482030Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 587.972935ms" Sep 12 10:10:07.476215 containerd[1504]: time="2025-09-12T10:10:07.476177821Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 279.95563ms" Sep 12 10:10:07.476985 containerd[1504]: time="2025-09-12T10:10:07.476946321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
283.341528ms" Sep 12 10:10:07.533541 kubelet[2268]: E0912 10:10:07.533479 2268 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 10:10:07.768421 containerd[1504]: time="2025-09-12T10:10:07.768080509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:07.768421 containerd[1504]: time="2025-09-12T10:10:07.768236728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:07.768421 containerd[1504]: time="2025-09-12T10:10:07.768254502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:07.768620 containerd[1504]: time="2025-09-12T10:10:07.768552071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:07.769550 containerd[1504]: time="2025-09-12T10:10:07.767846722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:07.769550 containerd[1504]: time="2025-09-12T10:10:07.769538068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:07.769633 containerd[1504]: time="2025-09-12T10:10:07.769554429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:07.769730 containerd[1504]: time="2025-09-12T10:10:07.769651155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:07.842589 containerd[1504]: time="2025-09-12T10:10:07.840544971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:07.842589 containerd[1504]: time="2025-09-12T10:10:07.840618431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:07.842589 containerd[1504]: time="2025-09-12T10:10:07.840631516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:07.842589 containerd[1504]: time="2025-09-12T10:10:07.840740014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:07.857715 systemd[1]: Started cri-containerd-8aa3d3230d35269cabf44bb88c729cb56cfb5065a33c58aefac2bc4eb31e914a.scope - libcontainer container 8aa3d3230d35269cabf44bb88c729cb56cfb5065a33c58aefac2bc4eb31e914a. Sep 12 10:10:07.861635 systemd[1]: Started cri-containerd-f035a01705cf05d5c2e9661a17039c6cd1667ae1ea1aed32353e825c61e1b83c.scope - libcontainer container f035a01705cf05d5c2e9661a17039c6cd1667ae1ea1aed32353e825c61e1b83c. Sep 12 10:10:07.902838 systemd[1]: Started cri-containerd-f004c8f39f401a9999468e2563a58bbc29fd2b8e746e23f238862d5b10713027.scope - libcontainer container f004c8f39f401a9999468e2563a58bbc29fd2b8e746e23f238862d5b10713027. 
Sep 12 10:10:07.946805 containerd[1504]: time="2025-09-12T10:10:07.946761678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f253eb53ff5026d2ee0606f6ab10996,Namespace:kube-system,Attempt:0,} returns sandbox id \"8aa3d3230d35269cabf44bb88c729cb56cfb5065a33c58aefac2bc4eb31e914a\"" Sep 12 10:10:07.950135 kubelet[2268]: E0912 10:10:07.950029 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:07.956336 containerd[1504]: time="2025-09-12T10:10:07.956273727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f035a01705cf05d5c2e9661a17039c6cd1667ae1ea1aed32353e825c61e1b83c\"" Sep 12 10:10:07.957310 kubelet[2268]: E0912 10:10:07.957104 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:07.964622 containerd[1504]: time="2025-09-12T10:10:07.964578546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f004c8f39f401a9999468e2563a58bbc29fd2b8e746e23f238862d5b10713027\"" Sep 12 10:10:07.966044 kubelet[2268]: E0912 10:10:07.966005 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:08.008931 containerd[1504]: time="2025-09-12T10:10:08.008884441Z" level=info msg="CreateContainer within sandbox \"8aa3d3230d35269cabf44bb88c729cb56cfb5065a33c58aefac2bc4eb31e914a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 10:10:08.090094 containerd[1504]: 
time="2025-09-12T10:10:08.090028053Z" level=info msg="CreateContainer within sandbox \"f035a01705cf05d5c2e9661a17039c6cd1667ae1ea1aed32353e825c61e1b83c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 10:10:08.147551 containerd[1504]: time="2025-09-12T10:10:08.147474655Z" level=info msg="CreateContainer within sandbox \"f004c8f39f401a9999468e2563a58bbc29fd2b8e746e23f238862d5b10713027\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 10:10:08.314521 containerd[1504]: time="2025-09-12T10:10:08.314418839Z" level=info msg="CreateContainer within sandbox \"f035a01705cf05d5c2e9661a17039c6cd1667ae1ea1aed32353e825c61e1b83c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0b7cc229e83a837ac11b39ed999c3d113f4d78a7417c2d6cc661534a6da4dcd2\"" Sep 12 10:10:08.316533 containerd[1504]: time="2025-09-12T10:10:08.315297468Z" level=info msg="StartContainer for \"0b7cc229e83a837ac11b39ed999c3d113f4d78a7417c2d6cc661534a6da4dcd2\"" Sep 12 10:10:08.316977 containerd[1504]: time="2025-09-12T10:10:08.316933664Z" level=info msg="CreateContainer within sandbox \"8aa3d3230d35269cabf44bb88c729cb56cfb5065a33c58aefac2bc4eb31e914a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1a88db02e05d4c9239e38efce6337cc43696299ee80c16a38350f4a2314f712\"" Sep 12 10:10:08.317370 containerd[1504]: time="2025-09-12T10:10:08.317278514Z" level=info msg="StartContainer for \"e1a88db02e05d4c9239e38efce6337cc43696299ee80c16a38350f4a2314f712\"" Sep 12 10:10:08.318695 containerd[1504]: time="2025-09-12T10:10:08.318664542Z" level=info msg="CreateContainer within sandbox \"f004c8f39f401a9999468e2563a58bbc29fd2b8e746e23f238862d5b10713027\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b42527e0e50ce0ac314a3bb18848ad7a8fd762f05ae9680fea46197a7078fe2b\"" Sep 12 10:10:08.318968 containerd[1504]: time="2025-09-12T10:10:08.318949016Z" level=info msg="StartContainer 
for \"b42527e0e50ce0ac314a3bb18848ad7a8fd762f05ae9680fea46197a7078fe2b\"" Sep 12 10:10:08.351934 systemd[1]: Started cri-containerd-b42527e0e50ce0ac314a3bb18848ad7a8fd762f05ae9680fea46197a7078fe2b.scope - libcontainer container b42527e0e50ce0ac314a3bb18848ad7a8fd762f05ae9680fea46197a7078fe2b. Sep 12 10:10:08.362788 systemd[1]: Started cri-containerd-0b7cc229e83a837ac11b39ed999c3d113f4d78a7417c2d6cc661534a6da4dcd2.scope - libcontainer container 0b7cc229e83a837ac11b39ed999c3d113f4d78a7417c2d6cc661534a6da4dcd2. Sep 12 10:10:08.364546 systemd[1]: Started cri-containerd-e1a88db02e05d4c9239e38efce6337cc43696299ee80c16a38350f4a2314f712.scope - libcontainer container e1a88db02e05d4c9239e38efce6337cc43696299ee80c16a38350f4a2314f712. Sep 12 10:10:08.404716 containerd[1504]: time="2025-09-12T10:10:08.404570104Z" level=info msg="StartContainer for \"b42527e0e50ce0ac314a3bb18848ad7a8fd762f05ae9680fea46197a7078fe2b\" returns successfully" Sep 12 10:10:08.480362 containerd[1504]: time="2025-09-12T10:10:08.480205462Z" level=info msg="StartContainer for \"0b7cc229e83a837ac11b39ed999c3d113f4d78a7417c2d6cc661534a6da4dcd2\" returns successfully" Sep 12 10:10:08.480362 containerd[1504]: time="2025-09-12T10:10:08.480240869Z" level=info msg="StartContainer for \"e1a88db02e05d4c9239e38efce6337cc43696299ee80c16a38350f4a2314f712\" returns successfully" Sep 12 10:10:08.598539 kubelet[2268]: E0912 10:10:08.598471 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 10:10:08.598735 kubelet[2268]: E0912 10:10:08.598647 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:08.601177 kubelet[2268]: E0912 10:10:08.601121 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Sep 12 10:10:08.601334 kubelet[2268]: E0912 10:10:08.601287 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:08.604022 kubelet[2268]: E0912 10:10:08.603931 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 10:10:08.604087 kubelet[2268]: E0912 10:10:08.604057 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:09.008811 kubelet[2268]: I0912 10:10:09.008647 2268 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 10:10:09.607409 kubelet[2268]: E0912 10:10:09.607123 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 10:10:09.607409 kubelet[2268]: E0912 10:10:09.607253 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:09.611038 kubelet[2268]: E0912 10:10:09.610838 2268 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 10:10:09.611038 kubelet[2268]: E0912 10:10:09.610967 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:09.701173 kubelet[2268]: E0912 10:10:09.701094 2268 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 10:10:09.794398 kubelet[2268]: I0912 
10:10:09.794327 2268 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 10:10:09.840342 kubelet[2268]: E0912 10:10:09.840202 2268 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18648131d181f350 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 10:10:05.549671248 +0000 UTC m=+0.819469191,LastTimestamp:2025-09-12 10:10:05.549671248 +0000 UTC m=+0.819469191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 10:10:09.863675 kubelet[2268]: I0912 10:10:09.863518 2268 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 10:10:09.870179 kubelet[2268]: E0912 10:10:09.870113 2268 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 10:10:09.870179 kubelet[2268]: I0912 10:10:09.870148 2268 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 10:10:09.872304 kubelet[2268]: E0912 10:10:09.872269 2268 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 10:10:09.872304 kubelet[2268]: I0912 10:10:09.872301 2268 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 10:10:09.874037 kubelet[2268]: E0912 10:10:09.873981 2268 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 10:10:10.544739 kubelet[2268]: I0912 10:10:10.544668 2268 apiserver.go:52] "Watching apiserver" Sep 12 10:10:10.562682 kubelet[2268]: I0912 10:10:10.562640 2268 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 10:10:10.607160 kubelet[2268]: I0912 10:10:10.607129 2268 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 10:10:10.608708 kubelet[2268]: E0912 10:10:10.608682 2268 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 10:10:10.608846 kubelet[2268]: E0912 10:10:10.608825 2268 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:11.680393 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-9.scope)... Sep 12 10:10:11.680408 systemd[1]: Reloading... Sep 12 10:10:11.767539 zram_generator::config[2602]: No configuration found. Sep 12 10:10:11.913030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:10:12.048670 systemd[1]: Reloading finished in 367 ms. Sep 12 10:10:12.080393 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:12.106134 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:10:12.106430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 10:10:12.106484 systemd[1]: kubelet.service: Consumed 1.553s CPU time, 135.8M memory peak. Sep 12 10:10:12.117004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:10:12.304756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:10:12.309446 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:10:12.352229 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:10:12.352229 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 10:10:12.352229 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 10:10:12.352697 kubelet[2647]: I0912 10:10:12.352326 2647 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:10:12.360534 kubelet[2647]: I0912 10:10:12.359684 2647 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 10:10:12.360534 kubelet[2647]: I0912 10:10:12.359725 2647 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:10:12.360534 kubelet[2647]: I0912 10:10:12.360029 2647 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 10:10:12.361621 kubelet[2647]: I0912 10:10:12.361591 2647 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 10:10:12.365346 kubelet[2647]: I0912 10:10:12.365233 2647 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:10:12.377530 kubelet[2647]: E0912 10:10:12.375286 2647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:10:12.377530 kubelet[2647]: I0912 10:10:12.375344 2647 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:10:12.388997 kubelet[2647]: I0912 10:10:12.388946 2647 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:10:12.389245 kubelet[2647]: I0912 10:10:12.389198 2647 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:10:12.389392 kubelet[2647]: I0912 10:10:12.389234 2647 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:10:12.389544 kubelet[2647]: I0912 10:10:12.389397 2647 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:10:12.389544 
kubelet[2647]: I0912 10:10:12.389407 2647 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 10:10:12.389544 kubelet[2647]: I0912 10:10:12.389465 2647 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:10:12.389717 kubelet[2647]: I0912 10:10:12.389690 2647 kubelet.go:480] "Attempting to sync node with API server" Sep 12 10:10:12.389717 kubelet[2647]: I0912 10:10:12.389716 2647 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:10:12.389801 kubelet[2647]: I0912 10:10:12.389749 2647 kubelet.go:386] "Adding apiserver pod source" Sep 12 10:10:12.389801 kubelet[2647]: I0912 10:10:12.389770 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:10:12.392166 kubelet[2647]: I0912 10:10:12.390657 2647 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:10:12.392166 kubelet[2647]: I0912 10:10:12.391240 2647 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 10:10:12.396158 kubelet[2647]: I0912 10:10:12.396134 2647 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 10:10:12.396319 kubelet[2647]: I0912 10:10:12.396303 2647 server.go:1289] "Started kubelet" Sep 12 10:10:12.396640 kubelet[2647]: I0912 10:10:12.396599 2647 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:10:12.397901 kubelet[2647]: I0912 10:10:12.396818 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:10:12.398321 kubelet[2647]: I0912 10:10:12.398303 2647 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:10:12.398442 kubelet[2647]: I0912 10:10:12.397675 2647 server.go:317] "Adding debug handlers to kubelet server" Sep 12 10:10:12.404404 
kubelet[2647]: I0912 10:10:12.404367 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 10:10:12.405723 kubelet[2647]: I0912 10:10:12.405698 2647 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 10:10:12.406217 kubelet[2647]: I0912 10:10:12.406064 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 10:10:12.407162 kubelet[2647]: I0912 10:10:12.407112 2647 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 10:10:12.407424 kubelet[2647]: I0912 10:10:12.407405 2647 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 10:10:12.408958 kubelet[2647]: I0912 10:10:12.408930 2647 factory.go:223] Registration of the systemd container factory successfully
Sep 12 10:10:12.409194 kubelet[2647]: I0912 10:10:12.409166 2647 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 10:10:12.410528 kubelet[2647]: I0912 10:10:12.410492 2647 factory.go:223] Registration of the containerd container factory successfully
Sep 12 10:10:12.412829 kubelet[2647]: E0912 10:10:12.410804 2647 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 10:10:12.421548 kubelet[2647]: I0912 10:10:12.421492 2647 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 10:10:12.423256 kubelet[2647]: I0912 10:10:12.423230 2647 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 10:10:12.423325 kubelet[2647]: I0912 10:10:12.423265 2647 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 10:10:12.423325 kubelet[2647]: I0912 10:10:12.423288 2647 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 10:10:12.423325 kubelet[2647]: I0912 10:10:12.423297 2647 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 10:10:12.423398 kubelet[2647]: E0912 10:10:12.423342 2647 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450151 2647 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450171 2647 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450190 2647 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450322 2647 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450335 2647 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450351 2647 policy_none.go:49] "None policy: Start"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450360 2647 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450370 2647 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 10:10:12.451484 kubelet[2647]: I0912 10:10:12.450448 2647 state_mem.go:75] "Updated machine memory state"
Sep 12 10:10:12.456718 kubelet[2647]: E0912 10:10:12.456690 2647 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 10:10:12.456947 kubelet[2647]: I0912 10:10:12.456925 2647 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 10:10:12.456995 kubelet[2647]: I0912 10:10:12.456948 2647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 10:10:12.457185 kubelet[2647]: I0912 10:10:12.457171 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 10:10:12.458409 kubelet[2647]: E0912 10:10:12.458386 2647 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 10:10:12.524619 kubelet[2647]: I0912 10:10:12.524564 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 10:10:12.524850 kubelet[2647]: I0912 10:10:12.524567 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:12.524850 kubelet[2647]: I0912 10:10:12.524582 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:12.564928 kubelet[2647]: I0912 10:10:12.564753 2647 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 10:10:12.575932 kubelet[2647]: I0912 10:10:12.575875 2647 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 10:10:12.576099 kubelet[2647]: I0912 10:10:12.575991 2647 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 10:10:12.610362 kubelet[2647]: I0912 10:10:12.610301 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:12.610362 kubelet[2647]: I0912 10:10:12.610352 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:12.610567 kubelet[2647]: I0912 10:10:12.610415 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f253eb53ff5026d2ee0606f6ab10996-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f253eb53ff5026d2ee0606f6ab10996\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:12.610567 kubelet[2647]: I0912 10:10:12.610443 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:12.610567 kubelet[2647]: I0912 10:10:12.610466 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:12.610567 kubelet[2647]: I0912 10:10:12.610488 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:12.610567 kubelet[2647]: I0912 10:10:12.610524 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 10:10:12.610684 kubelet[2647]: I0912 10:10:12.610547 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f253eb53ff5026d2ee0606f6ab10996-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f253eb53ff5026d2ee0606f6ab10996\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:12.610684 kubelet[2647]: I0912 10:10:12.610568 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f253eb53ff5026d2ee0606f6ab10996-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f253eb53ff5026d2ee0606f6ab10996\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:12.678185 sudo[2690]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 12 10:10:12.678588 sudo[2690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 12 10:10:12.830908 kubelet[2647]: E0912 10:10:12.830851 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:12.832120 kubelet[2647]: E0912 10:10:12.831978 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:12.832120 kubelet[2647]: E0912 10:10:12.831999 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:13.245409 sudo[2690]: pam_unix(sudo:session): session closed for user root
Sep 12 10:10:13.390946 kubelet[2647]: I0912 10:10:13.390877 2647 apiserver.go:52] "Watching apiserver"
Sep 12 10:10:13.407342 kubelet[2647]: I0912 10:10:13.407293 2647 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 10:10:13.435280 kubelet[2647]: I0912 10:10:13.435250 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 10:10:13.436267 kubelet[2647]: I0912 10:10:13.435633 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:13.436267 kubelet[2647]: I0912 10:10:13.435952 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:13.446665 kubelet[2647]: E0912 10:10:13.446592 2647 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 10:10:13.448227 kubelet[2647]: E0912 10:10:13.446865 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:13.448227 kubelet[2647]: E0912 10:10:13.447836 2647 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 12 10:10:13.448227 kubelet[2647]: E0912 10:10:13.447963 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:13.448227 kubelet[2647]: E0912 10:10:13.448059 2647 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 12 10:10:13.448227 kubelet[2647]: E0912 10:10:13.448189 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:13.464768 kubelet[2647]: I0912 10:10:13.464672 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.46462001 podStartE2EDuration="1.46462001s" podCreationTimestamp="2025-09-12 10:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:13.464421022 +0000 UTC m=+1.149442852" watchObservedRunningTime="2025-09-12 10:10:13.46462001 +0000 UTC m=+1.149641851"
Sep 12 10:10:13.472911 kubelet[2647]: I0912 10:10:13.472827 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4728091779999999 podStartE2EDuration="1.472809178s" podCreationTimestamp="2025-09-12 10:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:13.472585813 +0000 UTC m=+1.157607653" watchObservedRunningTime="2025-09-12 10:10:13.472809178 +0000 UTC m=+1.157831018"
Sep 12 10:10:13.482242 kubelet[2647]: I0912 10:10:13.482145 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.482124165 podStartE2EDuration="1.482124165s" podCreationTimestamp="2025-09-12 10:10:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:13.481988336 +0000 UTC m=+1.167010176" watchObservedRunningTime="2025-09-12 10:10:13.482124165 +0000 UTC m=+1.167146005"
Sep 12 10:10:14.438208 kubelet[2647]: E0912 10:10:14.437863 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:14.438208 kubelet[2647]: E0912 10:10:14.437878 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:14.438208 kubelet[2647]: E0912 10:10:14.438016 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:14.845007 sudo[1707]: pam_unix(sudo:session): session closed for user root
Sep 12 10:10:14.848166 sshd[1706]: Connection closed by 10.0.0.1 port 33528
Sep 12 10:10:14.848936 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
Sep 12 10:10:14.854842 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit.
Sep 12 10:10:14.855401 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:33528.service: Deactivated successfully.
Sep 12 10:10:14.858817 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 10:10:14.859129 systemd[1]: session-9.scope: Consumed 6.453s CPU time, 251.8M memory peak.
Sep 12 10:10:14.862882 systemd-logind[1490]: Removed session 9.
Sep 12 10:10:15.439887 kubelet[2647]: E0912 10:10:15.439840 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:15.440410 kubelet[2647]: E0912 10:10:15.439943 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:16.210811 update_engine[1494]: I20250912 10:10:16.210702 1494 update_attempter.cc:509] Updating boot flags...
Sep 12 10:10:16.245556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2739)
Sep 12 10:10:16.302047 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2742)
Sep 12 10:10:16.337536 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2742)
Sep 12 10:10:16.441558 kubelet[2647]: E0912 10:10:16.441488 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:17.222866 kubelet[2647]: E0912 10:10:17.222816 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:17.423910 kubelet[2647]: I0912 10:10:17.423869 2647 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 10:10:17.424350 containerd[1504]: time="2025-09-12T10:10:17.424308704Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 10:10:17.424774 kubelet[2647]: I0912 10:10:17.424656 2647 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 10:10:18.277998 systemd[1]: Created slice kubepods-besteffort-pod5dac2fc9_c8df_4168_9e7e_ff3e2a98df69.slice - libcontainer container kubepods-besteffort-pod5dac2fc9_c8df_4168_9e7e_ff3e2a98df69.slice.
Sep 12 10:10:18.292228 systemd[1]: Created slice kubepods-burstable-poddfb322bc_be71_4d59_bdeb_f775c4e97943.slice - libcontainer container kubepods-burstable-poddfb322bc_be71_4d59_bdeb_f775c4e97943.slice.
Sep 12 10:10:18.345411 kubelet[2647]: I0912 10:10:18.345304 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-cgroup\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.345411 kubelet[2647]: I0912 10:10:18.345408 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-config-path\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.345981 kubelet[2647]: I0912 10:10:18.345447 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-hubble-tls\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.345981 kubelet[2647]: I0912 10:10:18.345481 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-run\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.345981 kubelet[2647]: I0912 10:10:18.345554 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-bpf-maps\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.345981 kubelet[2647]: I0912 10:10:18.345589 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-hostproc\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.345981 kubelet[2647]: I0912 10:10:18.345662 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5dac2fc9-c8df-4168-9e7e-ff3e2a98df69-kube-proxy\") pod \"kube-proxy-6dtx9\" (UID: \"5dac2fc9-c8df-4168-9e7e-ff3e2a98df69\") " pod="kube-system/kube-proxy-6dtx9"
Sep 12 10:10:18.345981 kubelet[2647]: I0912 10:10:18.345783 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-etc-cni-netd\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346367 kubelet[2647]: I0912 10:10:18.345830 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-lib-modules\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346367 kubelet[2647]: I0912 10:10:18.345902 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-net\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346367 kubelet[2647]: I0912 10:10:18.345959 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-kernel\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346367 kubelet[2647]: I0912 10:10:18.346001 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dac2fc9-c8df-4168-9e7e-ff3e2a98df69-xtables-lock\") pod \"kube-proxy-6dtx9\" (UID: \"5dac2fc9-c8df-4168-9e7e-ff3e2a98df69\") " pod="kube-system/kube-proxy-6dtx9"
Sep 12 10:10:18.346367 kubelet[2647]: I0912 10:10:18.346062 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbvs2\" (UniqueName: \"kubernetes.io/projected/5dac2fc9-c8df-4168-9e7e-ff3e2a98df69-kube-api-access-wbvs2\") pod \"kube-proxy-6dtx9\" (UID: \"5dac2fc9-c8df-4168-9e7e-ff3e2a98df69\") " pod="kube-system/kube-proxy-6dtx9"
Sep 12 10:10:18.346632 kubelet[2647]: I0912 10:10:18.346189 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cni-path\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346632 kubelet[2647]: I0912 10:10:18.346259 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-xtables-lock\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346632 kubelet[2647]: I0912 10:10:18.346292 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfb322bc-be71-4d59-bdeb-f775c4e97943-clustermesh-secrets\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346632 kubelet[2647]: I0912 10:10:18.346335 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqxv9\" (UniqueName: \"kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-kube-api-access-rqxv9\") pod \"cilium-qr6nr\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " pod="kube-system/cilium-qr6nr"
Sep 12 10:10:18.346632 kubelet[2647]: I0912 10:10:18.346367 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dac2fc9-c8df-4168-9e7e-ff3e2a98df69-lib-modules\") pod \"kube-proxy-6dtx9\" (UID: \"5dac2fc9-c8df-4168-9e7e-ff3e2a98df69\") " pod="kube-system/kube-proxy-6dtx9"
Sep 12 10:10:18.889864 kubelet[2647]: E0912 10:10:18.889805 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:18.890656 containerd[1504]: time="2025-09-12T10:10:18.890602151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dtx9,Uid:5dac2fc9-c8df-4168-9e7e-ff3e2a98df69,Namespace:kube-system,Attempt:0,}"
Sep 12 10:10:18.899135 kubelet[2647]: E0912 10:10:18.899089 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:18.900923 containerd[1504]: time="2025-09-12T10:10:18.900886427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qr6nr,Uid:dfb322bc-be71-4d59-bdeb-f775c4e97943,Namespace:kube-system,Attempt:0,}"
Sep 12 10:10:19.328369 systemd[1]: Created slice kubepods-besteffort-pod49277120_1826_4d7c_a0a4_19a41af73ff2.slice - libcontainer container kubepods-besteffort-pod49277120_1826_4d7c_a0a4_19a41af73ff2.slice.
Sep 12 10:10:19.354029 kubelet[2647]: I0912 10:10:19.353962 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjgt4\" (UniqueName: \"kubernetes.io/projected/49277120-1826-4d7c-a0a4-19a41af73ff2-kube-api-access-fjgt4\") pod \"cilium-operator-6c4d7847fc-jw8qr\" (UID: \"49277120-1826-4d7c-a0a4-19a41af73ff2\") " pod="kube-system/cilium-operator-6c4d7847fc-jw8qr"
Sep 12 10:10:19.354029 kubelet[2647]: I0912 10:10:19.354030 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49277120-1826-4d7c-a0a4-19a41af73ff2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jw8qr\" (UID: \"49277120-1826-4d7c-a0a4-19a41af73ff2\") " pod="kube-system/cilium-operator-6c4d7847fc-jw8qr"
Sep 12 10:10:19.632685 kubelet[2647]: E0912 10:10:19.632220 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:19.640545 containerd[1504]: time="2025-09-12T10:10:19.640165512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jw8qr,Uid:49277120-1826-4d7c-a0a4-19a41af73ff2,Namespace:kube-system,Attempt:0,}"
Sep 12 10:10:19.669818 containerd[1504]: time="2025-09-12T10:10:19.669485824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:10:19.670090 containerd[1504]: time="2025-09-12T10:10:19.669712203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:10:19.670090 containerd[1504]: time="2025-09-12T10:10:19.669772607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:10:19.670090 containerd[1504]: time="2025-09-12T10:10:19.669980390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:10:19.681835 containerd[1504]: time="2025-09-12T10:10:19.681363473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:10:19.681835 containerd[1504]: time="2025-09-12T10:10:19.681427304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:10:19.681835 containerd[1504]: time="2025-09-12T10:10:19.681441271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:10:19.681835 containerd[1504]: time="2025-09-12T10:10:19.681604751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:10:19.720805 systemd[1]: Started cri-containerd-4d152d3582a3d034aba7c2211854e9e011af8e37969a493c64f94fce6c1285b6.scope - libcontainer container 4d152d3582a3d034aba7c2211854e9e011af8e37969a493c64f94fce6c1285b6.
Sep 12 10:10:19.724871 systemd[1]: Started cri-containerd-66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a.scope - libcontainer container 66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a.
Sep 12 10:10:19.837710 containerd[1504]: time="2025-09-12T10:10:19.836889667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:10:19.837710 containerd[1504]: time="2025-09-12T10:10:19.836980087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:10:19.837710 containerd[1504]: time="2025-09-12T10:10:19.836995137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:10:19.837710 containerd[1504]: time="2025-09-12T10:10:19.837079506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:10:19.852197 containerd[1504]: time="2025-09-12T10:10:19.852147874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qr6nr,Uid:dfb322bc-be71-4d59-bdeb-f775c4e97943,Namespace:kube-system,Attempt:0,} returns sandbox id \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\""
Sep 12 10:10:19.854874 kubelet[2647]: E0912 10:10:19.854771 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:19.857041 containerd[1504]: time="2025-09-12T10:10:19.856995129Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 12 10:10:19.862435 containerd[1504]: time="2025-09-12T10:10:19.862311139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dtx9,Uid:5dac2fc9-c8df-4168-9e7e-ff3e2a98df69,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d152d3582a3d034aba7c2211854e9e011af8e37969a493c64f94fce6c1285b6\""
Sep 12 10:10:19.863411 kubelet[2647]: E0912 10:10:19.863372 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:19.870348 containerd[1504]: time="2025-09-12T10:10:19.870297806Z" level=info msg="CreateContainer within sandbox \"4d152d3582a3d034aba7c2211854e9e011af8e37969a493c64f94fce6c1285b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 10:10:19.873884 systemd[1]: Started cri-containerd-f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd.scope - libcontainer container f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd.
Sep 12 10:10:19.894165 containerd[1504]: time="2025-09-12T10:10:19.893816899Z" level=info msg="CreateContainer within sandbox \"4d152d3582a3d034aba7c2211854e9e011af8e37969a493c64f94fce6c1285b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2772f5ea6dfe8cb4135f4099abd1efc29cd99214a99216344b88a75320d2846e\""
Sep 12 10:10:19.898420 containerd[1504]: time="2025-09-12T10:10:19.896811897Z" level=info msg="StartContainer for \"2772f5ea6dfe8cb4135f4099abd1efc29cd99214a99216344b88a75320d2846e\""
Sep 12 10:10:19.936915 systemd[1]: Started cri-containerd-2772f5ea6dfe8cb4135f4099abd1efc29cd99214a99216344b88a75320d2846e.scope - libcontainer container 2772f5ea6dfe8cb4135f4099abd1efc29cd99214a99216344b88a75320d2846e.
Sep 12 10:10:19.938708 containerd[1504]: time="2025-09-12T10:10:19.938667183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jw8qr,Uid:49277120-1826-4d7c-a0a4-19a41af73ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd\""
Sep 12 10:10:19.939909 kubelet[2647]: E0912 10:10:19.939878 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:20.103103 containerd[1504]: time="2025-09-12T10:10:20.103049885Z" level=info msg="StartContainer for \"2772f5ea6dfe8cb4135f4099abd1efc29cd99214a99216344b88a75320d2846e\" returns successfully"
Sep 12 10:10:20.456833 kubelet[2647]: E0912 10:10:20.456705 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:24.057938 kubelet[2647]: E0912 10:10:24.057851 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:24.067133 kubelet[2647]: I0912 10:10:24.067024 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6dtx9" podStartSLOduration=6.066986752 podStartE2EDuration="6.066986752s" podCreationTimestamp="2025-09-12 10:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:20.467841056 +0000 UTC m=+8.152862926" watchObservedRunningTime="2025-09-12 10:10:24.066986752 +0000 UTC m=+11.752008592"
Sep 12 10:10:24.463136 kubelet[2647]: E0912 10:10:24.463074 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:25.174389 kubelet[2647]: E0912 10:10:25.174021 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:25.465059 kubelet[2647]: E0912 10:10:25.464925 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:27.228963 kubelet[2647]: E0912 10:10:27.228194 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:10:31.603595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400161510.mount: Deactivated successfully.
Sep 12 10:10:33.248164 containerd[1504]: time="2025-09-12T10:10:33.248094259Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:10:33.249223 containerd[1504]: time="2025-09-12T10:10:33.249160286Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 12 10:10:33.250085 containerd[1504]: time="2025-09-12T10:10:33.250045592Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 10:10:33.251869 containerd[1504]: time="2025-09-12T10:10:33.251830822Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.394795037s"
Sep 12 10:10:33.251929 containerd[1504]: time="2025-09-12T10:10:33.251868022Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 12 10:10:33.253102 containerd[1504]: time="2025-09-12T10:10:33.253045158Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 12 10:10:33.257335 containerd[1504]: time="2025-09-12T10:10:33.257300116Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 10:10:33.274374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116140534.mount: Deactivated successfully.
Sep 12 10:10:33.276124 containerd[1504]: time="2025-09-12T10:10:33.276045477Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\""
Sep 12 10:10:33.276705 containerd[1504]: time="2025-09-12T10:10:33.276675964Z" level=info msg="StartContainer for \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\""
Sep 12 10:10:33.308658 systemd[1]: Started cri-containerd-c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33.scope - libcontainer container c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33.
Sep 12 10:10:33.362083 systemd[1]: cri-containerd-c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33.scope: Deactivated successfully.
Sep 12 10:10:33.425045 containerd[1504]: time="2025-09-12T10:10:33.424988880Z" level=info msg="StartContainer for \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\" returns successfully" Sep 12 10:10:33.484037 kubelet[2647]: E0912 10:10:33.483801 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:33.823664 containerd[1504]: time="2025-09-12T10:10:33.823584262Z" level=info msg="shim disconnected" id=c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33 namespace=k8s.io Sep 12 10:10:33.823664 containerd[1504]: time="2025-09-12T10:10:33.823653844Z" level=warning msg="cleaning up after shim disconnected" id=c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33 namespace=k8s.io Sep 12 10:10:33.823664 containerd[1504]: time="2025-09-12T10:10:33.823666507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:34.271135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33-rootfs.mount: Deactivated successfully. 
Sep 12 10:10:34.484901 kubelet[2647]: E0912 10:10:34.484848 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:34.490162 containerd[1504]: time="2025-09-12T10:10:34.490115089Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:10:34.508699 containerd[1504]: time="2025-09-12T10:10:34.508638534Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\"" Sep 12 10:10:34.509154 containerd[1504]: time="2025-09-12T10:10:34.509130950Z" level=info msg="StartContainer for \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\"" Sep 12 10:10:34.536653 systemd[1]: Started cri-containerd-301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6.scope - libcontainer container 301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6. Sep 12 10:10:34.577937 containerd[1504]: time="2025-09-12T10:10:34.576713971Z" level=info msg="StartContainer for \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\" returns successfully" Sep 12 10:10:34.596595 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:10:34.596970 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:10:34.598405 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:10:34.610041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:10:34.610293 systemd[1]: cri-containerd-301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6.scope: Deactivated successfully. 
Sep 12 10:10:34.627763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:10:34.671092 containerd[1504]: time="2025-09-12T10:10:34.671026989Z" level=info msg="shim disconnected" id=301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6 namespace=k8s.io Sep 12 10:10:34.671092 containerd[1504]: time="2025-09-12T10:10:34.671088866Z" level=warning msg="cleaning up after shim disconnected" id=301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6 namespace=k8s.io Sep 12 10:10:34.671092 containerd[1504]: time="2025-09-12T10:10:34.671100037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:34.991211 containerd[1504]: time="2025-09-12T10:10:34.991148610Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:34.991903 containerd[1504]: time="2025-09-12T10:10:34.991829071Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 10:10:34.993090 containerd[1504]: time="2025-09-12T10:10:34.993034148Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:10:34.994380 containerd[1504]: time="2025-09-12T10:10:34.994348061Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.741256385s" Sep 12 10:10:34.994457 containerd[1504]: 
time="2025-09-12T10:10:34.994381193Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 10:10:34.999526 containerd[1504]: time="2025-09-12T10:10:34.999466031Z" level=info msg="CreateContainer within sandbox \"f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 10:10:35.012576 containerd[1504]: time="2025-09-12T10:10:35.012535358Z" level=info msg="CreateContainer within sandbox \"f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\"" Sep 12 10:10:35.013143 containerd[1504]: time="2025-09-12T10:10:35.012944117Z" level=info msg="StartContainer for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\"" Sep 12 10:10:35.041659 systemd[1]: Started cri-containerd-0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3.scope - libcontainer container 0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3. Sep 12 10:10:35.068620 containerd[1504]: time="2025-09-12T10:10:35.068542794Z" level=info msg="StartContainer for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" returns successfully" Sep 12 10:10:35.272424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6-rootfs.mount: Deactivated successfully. 
Sep 12 10:10:35.489299 kubelet[2647]: E0912 10:10:35.489260 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:35.493543 kubelet[2647]: E0912 10:10:35.492468 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:35.498859 containerd[1504]: time="2025-09-12T10:10:35.498808001Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:10:35.504368 kubelet[2647]: I0912 10:10:35.504293 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jw8qr" podStartSLOduration=1.450462196 podStartE2EDuration="16.504272772s" podCreationTimestamp="2025-09-12 10:10:19 +0000 UTC" firstStartedPulling="2025-09-12 10:10:19.941628488 +0000 UTC m=+7.626650328" lastFinishedPulling="2025-09-12 10:10:34.995439064 +0000 UTC m=+22.680460904" observedRunningTime="2025-09-12 10:10:35.503792779 +0000 UTC m=+23.188814619" watchObservedRunningTime="2025-09-12 10:10:35.504272772 +0000 UTC m=+23.189294632" Sep 12 10:10:35.526072 containerd[1504]: time="2025-09-12T10:10:35.525702347Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\"" Sep 12 10:10:35.526432 containerd[1504]: time="2025-09-12T10:10:35.526356006Z" level=info msg="StartContainer for \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\"" Sep 12 10:10:35.591669 systemd[1]: Started 
cri-containerd-bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9.scope - libcontainer container bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9. Sep 12 10:10:35.660380 systemd[1]: cri-containerd-bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9.scope: Deactivated successfully. Sep 12 10:10:35.661429 containerd[1504]: time="2025-09-12T10:10:35.661238538Z" level=info msg="StartContainer for \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\" returns successfully" Sep 12 10:10:35.705738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9-rootfs.mount: Deactivated successfully. Sep 12 10:10:35.894089 containerd[1504]: time="2025-09-12T10:10:35.893994376Z" level=info msg="shim disconnected" id=bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9 namespace=k8s.io Sep 12 10:10:35.894089 containerd[1504]: time="2025-09-12T10:10:35.894061301Z" level=warning msg="cleaning up after shim disconnected" id=bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9 namespace=k8s.io Sep 12 10:10:35.894089 containerd[1504]: time="2025-09-12T10:10:35.894069547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:35.919525 containerd[1504]: time="2025-09-12T10:10:35.919429897Z" level=warning msg="cleanup warnings time=\"2025-09-12T10:10:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 10:10:36.500423 kubelet[2647]: E0912 10:10:36.500375 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:36.501334 kubelet[2647]: E0912 10:10:36.500521 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:36.508607 containerd[1504]: time="2025-09-12T10:10:36.508529667Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:10:36.529546 containerd[1504]: time="2025-09-12T10:10:36.529471075Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\"" Sep 12 10:10:36.530393 containerd[1504]: time="2025-09-12T10:10:36.530270358Z" level=info msg="StartContainer for \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\"" Sep 12 10:10:36.574665 systemd[1]: Started cri-containerd-cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035.scope - libcontainer container cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035. Sep 12 10:10:36.603199 systemd[1]: cri-containerd-cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035.scope: Deactivated successfully. Sep 12 10:10:36.605581 containerd[1504]: time="2025-09-12T10:10:36.605533119Z" level=info msg="StartContainer for \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\" returns successfully" Sep 12 10:10:36.626628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035-rootfs.mount: Deactivated successfully. 
Sep 12 10:10:36.631683 containerd[1504]: time="2025-09-12T10:10:36.631585680Z" level=info msg="shim disconnected" id=cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035 namespace=k8s.io Sep 12 10:10:36.631683 containerd[1504]: time="2025-09-12T10:10:36.631653348Z" level=warning msg="cleaning up after shim disconnected" id=cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035 namespace=k8s.io Sep 12 10:10:36.631683 containerd[1504]: time="2025-09-12T10:10:36.631662726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:10:37.505368 kubelet[2647]: E0912 10:10:37.505165 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:37.511617 containerd[1504]: time="2025-09-12T10:10:37.511564450Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:10:37.586330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303759426.mount: Deactivated successfully. Sep 12 10:10:37.592719 containerd[1504]: time="2025-09-12T10:10:37.592667325Z" level=info msg="CreateContainer within sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\"" Sep 12 10:10:37.593434 containerd[1504]: time="2025-09-12T10:10:37.593398319Z" level=info msg="StartContainer for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\"" Sep 12 10:10:37.621637 systemd[1]: Started cri-containerd-8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706.scope - libcontainer container 8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706. 
Sep 12 10:10:37.654032 containerd[1504]: time="2025-09-12T10:10:37.653973422Z" level=info msg="StartContainer for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" returns successfully" Sep 12 10:10:37.883155 kubelet[2647]: I0912 10:10:37.883106 2647 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 10:10:37.944918 systemd[1]: Created slice kubepods-burstable-pod67007baa_0b69_4c1e_a962_9845b2827baa.slice - libcontainer container kubepods-burstable-pod67007baa_0b69_4c1e_a962_9845b2827baa.slice. Sep 12 10:10:37.955234 systemd[1]: Created slice kubepods-burstable-pod62264c12_1ef1_48b3_a524_36de9958039e.slice - libcontainer container kubepods-burstable-pod62264c12_1ef1_48b3_a524_36de9958039e.slice. Sep 12 10:10:37.970215 kubelet[2647]: I0912 10:10:37.970184 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxfz\" (UniqueName: \"kubernetes.io/projected/67007baa-0b69-4c1e-a962-9845b2827baa-kube-api-access-tlxfz\") pod \"coredns-674b8bbfcf-6rhxs\" (UID: \"67007baa-0b69-4c1e-a962-9845b2827baa\") " pod="kube-system/coredns-674b8bbfcf-6rhxs" Sep 12 10:10:37.970315 kubelet[2647]: I0912 10:10:37.970235 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62264c12-1ef1-48b3-a524-36de9958039e-config-volume\") pod \"coredns-674b8bbfcf-m2clr\" (UID: \"62264c12-1ef1-48b3-a524-36de9958039e\") " pod="kube-system/coredns-674b8bbfcf-m2clr" Sep 12 10:10:37.970315 kubelet[2647]: I0912 10:10:37.970254 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gn9r\" (UniqueName: \"kubernetes.io/projected/62264c12-1ef1-48b3-a524-36de9958039e-kube-api-access-5gn9r\") pod \"coredns-674b8bbfcf-m2clr\" (UID: \"62264c12-1ef1-48b3-a524-36de9958039e\") " pod="kube-system/coredns-674b8bbfcf-m2clr" Sep 12 
10:10:37.970315 kubelet[2647]: I0912 10:10:37.970275 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67007baa-0b69-4c1e-a962-9845b2827baa-config-volume\") pod \"coredns-674b8bbfcf-6rhxs\" (UID: \"67007baa-0b69-4c1e-a962-9845b2827baa\") " pod="kube-system/coredns-674b8bbfcf-6rhxs" Sep 12 10:10:38.252984 kubelet[2647]: E0912 10:10:38.252824 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:38.253903 containerd[1504]: time="2025-09-12T10:10:38.253854311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rhxs,Uid:67007baa-0b69-4c1e-a962-9845b2827baa,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:38.258626 kubelet[2647]: E0912 10:10:38.258600 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:38.259273 containerd[1504]: time="2025-09-12T10:10:38.258999103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m2clr,Uid:62264c12-1ef1-48b3-a524-36de9958039e,Namespace:kube-system,Attempt:0,}" Sep 12 10:10:38.509438 kubelet[2647]: E0912 10:10:38.509280 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:38.527629 kubelet[2647]: I0912 10:10:38.527027 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qr6nr" podStartSLOduration=7.13010511 podStartE2EDuration="20.527001223s" podCreationTimestamp="2025-09-12 10:10:18 +0000 UTC" firstStartedPulling="2025-09-12 10:10:19.855973926 +0000 UTC m=+7.540995766" lastFinishedPulling="2025-09-12 10:10:33.252870039 +0000 
UTC m=+20.937891879" observedRunningTime="2025-09-12 10:10:38.522487135 +0000 UTC m=+26.207508986" watchObservedRunningTime="2025-09-12 10:10:38.527001223 +0000 UTC m=+26.212023063" Sep 12 10:10:39.283296 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:56834.service - OpenSSH per-connection server daemon (10.0.0.1:56834). Sep 12 10:10:39.334138 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 56834 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:10:39.337049 sshd-session[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:39.342348 systemd-logind[1490]: New session 10 of user core. Sep 12 10:10:39.352834 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 10:10:39.511785 kubelet[2647]: E0912 10:10:39.511731 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:39.610726 sshd[3517]: Connection closed by 10.0.0.1 port 56834 Sep 12 10:10:39.611154 sshd-session[3515]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:39.616112 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:56834.service: Deactivated successfully. Sep 12 10:10:39.618343 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 10:10:39.619084 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Sep 12 10:10:39.620535 systemd-logind[1490]: Removed session 10. 
Sep 12 10:10:39.863483 systemd-networkd[1421]: cilium_host: Link UP Sep 12 10:10:39.863733 systemd-networkd[1421]: cilium_net: Link UP Sep 12 10:10:39.863972 systemd-networkd[1421]: cilium_net: Gained carrier Sep 12 10:10:39.864190 systemd-networkd[1421]: cilium_host: Gained carrier Sep 12 10:10:39.973975 systemd-networkd[1421]: cilium_vxlan: Link UP Sep 12 10:10:39.973986 systemd-networkd[1421]: cilium_vxlan: Gained carrier Sep 12 10:10:40.006660 systemd-networkd[1421]: cilium_host: Gained IPv6LL Sep 12 10:10:40.207533 kernel: NET: Registered PF_ALG protocol family Sep 12 10:10:40.513786 kubelet[2647]: E0912 10:10:40.513746 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:40.830771 systemd-networkd[1421]: cilium_net: Gained IPv6LL Sep 12 10:10:41.039085 systemd-networkd[1421]: lxc_health: Link UP Sep 12 10:10:41.043432 systemd-networkd[1421]: lxc_health: Gained carrier Sep 12 10:10:41.326539 kernel: eth0: renamed from tmp74d68 Sep 12 10:10:41.336563 kernel: eth0: renamed from tmp8f6c1 Sep 12 10:10:41.342283 systemd-networkd[1421]: lxca291993e3200: Link UP Sep 12 10:10:41.343627 systemd-networkd[1421]: lxc11b45a7b256b: Link UP Sep 12 10:10:41.344841 systemd-networkd[1421]: lxc11b45a7b256b: Gained carrier Sep 12 10:10:41.345719 systemd-networkd[1421]: lxca291993e3200: Gained carrier Sep 12 10:10:41.407916 systemd-networkd[1421]: cilium_vxlan: Gained IPv6LL Sep 12 10:10:42.901149 kubelet[2647]: E0912 10:10:42.901098 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:43.073600 systemd-networkd[1421]: lxc_health: Gained IPv6LL Sep 12 10:10:43.198812 systemd-networkd[1421]: lxca291993e3200: Gained IPv6LL Sep 12 10:10:43.390766 systemd-networkd[1421]: lxc11b45a7b256b: Gained IPv6LL Sep 12 
10:10:44.641905 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:37852.service - OpenSSH per-connection server daemon (10.0.0.1:37852). Sep 12 10:10:44.694206 sshd[3915]: Accepted publickey for core from 10.0.0.1 port 37852 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:10:44.696131 sshd-session[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:44.701423 systemd-logind[1490]: New session 11 of user core. Sep 12 10:10:44.708680 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 10:10:44.969228 sshd[3917]: Connection closed by 10.0.0.1 port 37852 Sep 12 10:10:44.971773 sshd-session[3915]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:44.978048 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Sep 12 10:10:44.981174 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:37852.service: Deactivated successfully. Sep 12 10:10:44.984849 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 10:10:44.987063 systemd-logind[1490]: Removed session 11. Sep 12 10:10:45.076959 containerd[1504]: time="2025-09-12T10:10:45.076707143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:45.076959 containerd[1504]: time="2025-09-12T10:10:45.076765553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:45.076959 containerd[1504]: time="2025-09-12T10:10:45.076776353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:45.076959 containerd[1504]: time="2025-09-12T10:10:45.076862846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:45.077734 containerd[1504]: time="2025-09-12T10:10:45.077443757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:10:45.077734 containerd[1504]: time="2025-09-12T10:10:45.077531963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:10:45.077734 containerd[1504]: time="2025-09-12T10:10:45.077549947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:45.077734 containerd[1504]: time="2025-09-12T10:10:45.077640987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:10:45.105609 systemd[1]: run-containerd-runc-k8s.io-8f6c1bf9c94ddf4ac69488120a3e71f5e2086c4424aa28d369969ca284d1d791-runc.QD0PA0.mount: Deactivated successfully. Sep 12 10:10:45.117663 systemd[1]: Started cri-containerd-74d68a486329b19da679b329b6211ce2c355834ab2329617be0c90f4658f0d90.scope - libcontainer container 74d68a486329b19da679b329b6211ce2c355834ab2329617be0c90f4658f0d90. Sep 12 10:10:45.119859 systemd[1]: Started cri-containerd-8f6c1bf9c94ddf4ac69488120a3e71f5e2086c4424aa28d369969ca284d1d791.scope - libcontainer container 8f6c1bf9c94ddf4ac69488120a3e71f5e2086c4424aa28d369969ca284d1d791. 
Sep 12 10:10:45.133570 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 10:10:45.136479 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 10:10:45.161028 containerd[1504]: time="2025-09-12T10:10:45.160838407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rhxs,Uid:67007baa-0b69-4c1e-a962-9845b2827baa,Namespace:kube-system,Attempt:0,} returns sandbox id \"74d68a486329b19da679b329b6211ce2c355834ab2329617be0c90f4658f0d90\"" Sep 12 10:10:45.161859 kubelet[2647]: E0912 10:10:45.161826 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:45.180034 containerd[1504]: time="2025-09-12T10:10:45.179846158Z" level=info msg="CreateContainer within sandbox \"74d68a486329b19da679b329b6211ce2c355834ab2329617be0c90f4658f0d90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:10:45.181923 containerd[1504]: time="2025-09-12T10:10:45.181870771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m2clr,Uid:62264c12-1ef1-48b3-a524-36de9958039e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6c1bf9c94ddf4ac69488120a3e71f5e2086c4424aa28d369969ca284d1d791\"" Sep 12 10:10:45.182562 kubelet[2647]: E0912 10:10:45.182492 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:45.189329 containerd[1504]: time="2025-09-12T10:10:45.189268536Z" level=info msg="CreateContainer within sandbox \"8f6c1bf9c94ddf4ac69488120a3e71f5e2086c4424aa28d369969ca284d1d791\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:10:45.201568 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4165763926.mount: Deactivated successfully. Sep 12 10:10:45.209714 containerd[1504]: time="2025-09-12T10:10:45.209658235Z" level=info msg="CreateContainer within sandbox \"74d68a486329b19da679b329b6211ce2c355834ab2329617be0c90f4658f0d90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7354beb0e673ad2492f2b4b00b829a5af954777290a54cf4fd709bc3abd8842\"" Sep 12 10:10:45.211540 containerd[1504]: time="2025-09-12T10:10:45.210801281Z" level=info msg="StartContainer for \"c7354beb0e673ad2492f2b4b00b829a5af954777290a54cf4fd709bc3abd8842\"" Sep 12 10:10:45.213527 containerd[1504]: time="2025-09-12T10:10:45.213457751Z" level=info msg="CreateContainer within sandbox \"8f6c1bf9c94ddf4ac69488120a3e71f5e2086c4424aa28d369969ca284d1d791\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c4d16e4b5b7631163133f9c614cbd4572582de293c164a95e0365891891592d\"" Sep 12 10:10:45.215801 containerd[1504]: time="2025-09-12T10:10:45.215748554Z" level=info msg="StartContainer for \"3c4d16e4b5b7631163133f9c614cbd4572582de293c164a95e0365891891592d\"" Sep 12 10:10:45.249648 systemd[1]: Started cri-containerd-c7354beb0e673ad2492f2b4b00b829a5af954777290a54cf4fd709bc3abd8842.scope - libcontainer container c7354beb0e673ad2492f2b4b00b829a5af954777290a54cf4fd709bc3abd8842. Sep 12 10:10:45.253545 systemd[1]: Started cri-containerd-3c4d16e4b5b7631163133f9c614cbd4572582de293c164a95e0365891891592d.scope - libcontainer container 3c4d16e4b5b7631163133f9c614cbd4572582de293c164a95e0365891891592d. 
Sep 12 10:10:45.292935 containerd[1504]: time="2025-09-12T10:10:45.292885570Z" level=info msg="StartContainer for \"3c4d16e4b5b7631163133f9c614cbd4572582de293c164a95e0365891891592d\" returns successfully" Sep 12 10:10:45.293109 containerd[1504]: time="2025-09-12T10:10:45.292970048Z" level=info msg="StartContainer for \"c7354beb0e673ad2492f2b4b00b829a5af954777290a54cf4fd709bc3abd8842\" returns successfully" Sep 12 10:10:45.524097 kubelet[2647]: E0912 10:10:45.523927 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:45.525843 kubelet[2647]: E0912 10:10:45.525795 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:45.536237 kubelet[2647]: I0912 10:10:45.535943 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m2clr" podStartSLOduration=26.535919559 podStartE2EDuration="26.535919559s" podCreationTimestamp="2025-09-12 10:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:45.535181071 +0000 UTC m=+33.220202921" watchObservedRunningTime="2025-09-12 10:10:45.535919559 +0000 UTC m=+33.220941399" Sep 12 10:10:45.548813 kubelet[2647]: I0912 10:10:45.548079 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6rhxs" podStartSLOduration=27.548056495 podStartE2EDuration="27.548056495s" podCreationTimestamp="2025-09-12 10:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:10:45.54796889 +0000 UTC m=+33.232990740" watchObservedRunningTime="2025-09-12 10:10:45.548056495 +0000 UTC 
m=+33.233078335" Sep 12 10:10:46.528144 kubelet[2647]: E0912 10:10:46.528080 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:46.528629 kubelet[2647]: E0912 10:10:46.528453 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:47.249556 kubelet[2647]: I0912 10:10:47.247728 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 10:10:47.249556 kubelet[2647]: E0912 10:10:47.248211 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:47.530483 kubelet[2647]: E0912 10:10:47.530329 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:47.530483 kubelet[2647]: E0912 10:10:47.530414 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:47.531064 kubelet[2647]: E0912 10:10:47.530976 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:10:49.998941 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452). 
Sep 12 10:10:50.044378 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:10:50.046483 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:50.051426 systemd-logind[1490]: New session 12 of user core. Sep 12 10:10:50.060805 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 10:10:50.335277 sshd[4104]: Connection closed by 10.0.0.1 port 53452 Sep 12 10:10:50.335741 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:50.340731 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:53452.service: Deactivated successfully. Sep 12 10:10:50.344385 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 10:10:50.345272 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Sep 12 10:10:50.346336 systemd-logind[1490]: Removed session 12. Sep 12 10:10:55.349436 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:53460.service - OpenSSH per-connection server daemon (10.0.0.1:53460). Sep 12 10:10:55.393743 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 53460 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:10:55.395872 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:10:55.401983 systemd-logind[1490]: New session 13 of user core. Sep 12 10:10:55.411881 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 10:10:55.551924 sshd[4124]: Connection closed by 10.0.0.1 port 53460 Sep 12 10:10:55.552394 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Sep 12 10:10:55.556959 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:53460.service: Deactivated successfully. Sep 12 10:10:55.559792 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 10:10:55.560580 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. 
Sep 12 10:10:55.561636 systemd-logind[1490]: Removed session 13. Sep 12 10:11:00.576782 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:40846.service - OpenSSH per-connection server daemon (10.0.0.1:40846). Sep 12 10:11:00.615070 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 40846 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:00.616923 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:00.621368 systemd-logind[1490]: New session 14 of user core. Sep 12 10:11:00.634656 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 10:11:00.762211 sshd[4140]: Connection closed by 10.0.0.1 port 40846 Sep 12 10:11:00.762677 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:00.781602 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:40846.service: Deactivated successfully. Sep 12 10:11:00.783843 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 10:11:00.784908 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. Sep 12 10:11:00.796969 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:40848.service - OpenSSH per-connection server daemon (10.0.0.1:40848). Sep 12 10:11:00.798609 systemd-logind[1490]: Removed session 14. Sep 12 10:11:00.835468 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 40848 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:00.837763 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:00.843722 systemd-logind[1490]: New session 15 of user core. Sep 12 10:11:00.850691 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 12 10:11:01.027815 sshd[4156]: Connection closed by 10.0.0.1 port 40848 Sep 12 10:11:01.028316 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:01.044457 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:40848.service: Deactivated successfully. Sep 12 10:11:01.047874 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 10:11:01.049690 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. Sep 12 10:11:01.058999 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:40864.service - OpenSSH per-connection server daemon (10.0.0.1:40864). Sep 12 10:11:01.060741 systemd-logind[1490]: Removed session 15. Sep 12 10:11:01.101474 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 40864 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:01.103362 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:01.108418 systemd-logind[1490]: New session 16 of user core. Sep 12 10:11:01.120754 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 10:11:01.247522 sshd[4169]: Connection closed by 10.0.0.1 port 40864 Sep 12 10:11:01.247895 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:01.253697 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:40864.service: Deactivated successfully. Sep 12 10:11:01.256709 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 10:11:01.257568 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. Sep 12 10:11:01.258610 systemd-logind[1490]: Removed session 16. Sep 12 10:11:06.269255 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:40878.service - OpenSSH per-connection server daemon (10.0.0.1:40878). 
Sep 12 10:11:06.313958 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 40878 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:06.315909 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:06.320482 systemd-logind[1490]: New session 17 of user core. Sep 12 10:11:06.330721 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 10:11:06.503900 sshd[4186]: Connection closed by 10.0.0.1 port 40878 Sep 12 10:11:06.504331 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:06.507874 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:40878.service: Deactivated successfully. Sep 12 10:11:06.510763 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 10:11:06.513236 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit. Sep 12 10:11:06.514528 systemd-logind[1490]: Removed session 17. Sep 12 10:11:11.519120 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:49790.service - OpenSSH per-connection server daemon (10.0.0.1:49790). Sep 12 10:11:11.562904 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 49790 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:11.564762 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:11.569724 systemd-logind[1490]: New session 18 of user core. Sep 12 10:11:11.577728 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 10:11:11.688380 sshd[4202]: Connection closed by 10.0.0.1 port 49790 Sep 12 10:11:11.688763 sshd-session[4200]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:11.693159 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:49790.service: Deactivated successfully. Sep 12 10:11:11.695570 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 10:11:11.696374 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit. 
Sep 12 10:11:11.697282 systemd-logind[1490]: Removed session 18. Sep 12 10:11:16.702062 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:49806.service - OpenSSH per-connection server daemon (10.0.0.1:49806). Sep 12 10:11:16.745140 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 49806 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:16.746943 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:16.751845 systemd-logind[1490]: New session 19 of user core. Sep 12 10:11:16.762830 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 10:11:16.881858 sshd[4219]: Connection closed by 10.0.0.1 port 49806 Sep 12 10:11:16.882358 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:16.894654 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:49806.service: Deactivated successfully. Sep 12 10:11:16.896913 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 10:11:16.898904 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit. Sep 12 10:11:16.904840 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:49820.service - OpenSSH per-connection server daemon (10.0.0.1:49820). Sep 12 10:11:16.906120 systemd-logind[1490]: Removed session 19. Sep 12 10:11:16.943523 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 49820 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:16.945253 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:16.950287 systemd-logind[1490]: New session 20 of user core. Sep 12 10:11:16.959687 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 10:11:18.163926 sshd[4234]: Connection closed by 10.0.0.1 port 49820 Sep 12 10:11:18.164526 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:18.176446 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:49820.service: Deactivated successfully. Sep 12 10:11:18.179863 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 10:11:18.182040 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit. Sep 12 10:11:18.189946 systemd[1]: Started sshd@20-10.0.0.72:22-10.0.0.1:49830.service - OpenSSH per-connection server daemon (10.0.0.1:49830). Sep 12 10:11:18.191604 systemd-logind[1490]: Removed session 20. Sep 12 10:11:18.236738 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 49830 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:18.238859 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:18.245412 systemd-logind[1490]: New session 21 of user core. Sep 12 10:11:18.253777 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 10:11:18.930148 sshd[4247]: Connection closed by 10.0.0.1 port 49830 Sep 12 10:11:18.931515 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:18.943456 systemd[1]: sshd@20-10.0.0.72:22-10.0.0.1:49830.service: Deactivated successfully. Sep 12 10:11:18.947620 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 10:11:18.949206 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit. Sep 12 10:11:18.957802 systemd[1]: Started sshd@21-10.0.0.72:22-10.0.0.1:49832.service - OpenSSH per-connection server daemon (10.0.0.1:49832). Sep 12 10:11:18.959395 systemd-logind[1490]: Removed session 21. 
Sep 12 10:11:18.996575 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 49832 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:18.998133 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:19.003823 systemd-logind[1490]: New session 22 of user core. Sep 12 10:11:19.011686 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 10:11:19.534805 sshd[4268]: Connection closed by 10.0.0.1 port 49832 Sep 12 10:11:19.535480 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:19.549274 systemd[1]: sshd@21-10.0.0.72:22-10.0.0.1:49832.service: Deactivated successfully. Sep 12 10:11:19.551857 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 10:11:19.552770 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit. Sep 12 10:11:19.563123 systemd[1]: Started sshd@22-10.0.0.72:22-10.0.0.1:49844.service - OpenSSH per-connection server daemon (10.0.0.1:49844). Sep 12 10:11:19.564101 systemd-logind[1490]: Removed session 22. Sep 12 10:11:19.606997 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 49844 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:19.608821 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:19.614177 systemd-logind[1490]: New session 23 of user core. Sep 12 10:11:19.621743 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 10:11:19.747096 sshd[4281]: Connection closed by 10.0.0.1 port 49844 Sep 12 10:11:19.747565 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:19.752298 systemd[1]: sshd@22-10.0.0.72:22-10.0.0.1:49844.service: Deactivated successfully. Sep 12 10:11:19.755344 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 10:11:19.756220 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit. 
Sep 12 10:11:19.757263 systemd-logind[1490]: Removed session 23. Sep 12 10:11:24.762137 systemd[1]: Started sshd@23-10.0.0.72:22-10.0.0.1:38462.service - OpenSSH per-connection server daemon (10.0.0.1:38462). Sep 12 10:11:24.811427 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 38462 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:24.813194 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:24.817946 systemd-logind[1490]: New session 24 of user core. Sep 12 10:11:24.828641 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 10:11:24.941461 sshd[4298]: Connection closed by 10.0.0.1 port 38462 Sep 12 10:11:24.941866 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:24.946834 systemd[1]: sshd@23-10.0.0.72:22-10.0.0.1:38462.service: Deactivated successfully. Sep 12 10:11:24.948999 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 10:11:24.949749 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit. Sep 12 10:11:24.950809 systemd-logind[1490]: Removed session 24. Sep 12 10:11:28.424285 kubelet[2647]: E0912 10:11:28.424225 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:11:29.956364 systemd[1]: Started sshd@24-10.0.0.72:22-10.0.0.1:55222.service - OpenSSH per-connection server daemon (10.0.0.1:55222). Sep 12 10:11:30.005701 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 55222 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:30.007445 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:30.012495 systemd-logind[1490]: New session 25 of user core. Sep 12 10:11:30.022752 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 12 10:11:30.139856 sshd[4316]: Connection closed by 10.0.0.1 port 55222 Sep 12 10:11:30.141960 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:30.146850 systemd[1]: sshd@24-10.0.0.72:22-10.0.0.1:55222.service: Deactivated successfully. Sep 12 10:11:30.149336 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 10:11:30.150195 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. Sep 12 10:11:30.151215 systemd-logind[1490]: Removed session 25. Sep 12 10:11:34.424492 kubelet[2647]: E0912 10:11:34.424376 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:11:35.153355 systemd[1]: Started sshd@25-10.0.0.72:22-10.0.0.1:55228.service - OpenSSH per-connection server daemon (10.0.0.1:55228). Sep 12 10:11:35.198798 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 55228 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:35.200926 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:35.205943 systemd-logind[1490]: New session 26 of user core. Sep 12 10:11:35.214689 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 10:11:35.330355 sshd[4331]: Connection closed by 10.0.0.1 port 55228 Sep 12 10:11:35.330845 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:35.343761 systemd[1]: sshd@25-10.0.0.72:22-10.0.0.1:55228.service: Deactivated successfully. Sep 12 10:11:35.346653 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 10:11:35.348961 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit. Sep 12 10:11:35.357808 systemd[1]: Started sshd@26-10.0.0.72:22-10.0.0.1:55242.service - OpenSSH per-connection server daemon (10.0.0.1:55242). Sep 12 10:11:35.358917 systemd-logind[1490]: Removed session 26. 
Sep 12 10:11:35.399647 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:35.401763 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:35.406885 systemd-logind[1490]: New session 27 of user core. Sep 12 10:11:35.420736 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 10:11:36.965552 containerd[1504]: time="2025-09-12T10:11:36.965441420Z" level=info msg="StopContainer for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" with timeout 30 (s)" Sep 12 10:11:36.972526 containerd[1504]: time="2025-09-12T10:11:36.972449957Z" level=info msg="Stop container \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" with signal terminated" Sep 12 10:11:36.989561 systemd[1]: cri-containerd-0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3.scope: Deactivated successfully. Sep 12 10:11:37.006787 containerd[1504]: time="2025-09-12T10:11:37.006724117Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:11:37.009459 containerd[1504]: time="2025-09-12T10:11:37.009302929Z" level=info msg="StopContainer for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" with timeout 2 (s)" Sep 12 10:11:37.009903 containerd[1504]: time="2025-09-12T10:11:37.009866340Z" level=info msg="Stop container \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" with signal terminated" Sep 12 10:11:37.015811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3-rootfs.mount: Deactivated successfully. 
Sep 12 10:11:37.017862 systemd-networkd[1421]: lxc_health: Link DOWN Sep 12 10:11:37.018438 systemd-networkd[1421]: lxc_health: Lost carrier Sep 12 10:11:37.022290 containerd[1504]: time="2025-09-12T10:11:37.022195978Z" level=info msg="shim disconnected" id=0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3 namespace=k8s.io Sep 12 10:11:37.022433 containerd[1504]: time="2025-09-12T10:11:37.022291659Z" level=warning msg="cleaning up after shim disconnected" id=0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3 namespace=k8s.io Sep 12 10:11:37.022433 containerd[1504]: time="2025-09-12T10:11:37.022306438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:11:37.035165 systemd[1]: cri-containerd-8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706.scope: Deactivated successfully. Sep 12 10:11:37.035750 systemd[1]: cri-containerd-8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706.scope: Consumed 7.546s CPU time, 124.7M memory peak, 204K read from disk, 13.3M written to disk. Sep 12 10:11:37.051637 containerd[1504]: time="2025-09-12T10:11:37.051583781Z" level=info msg="StopContainer for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" returns successfully" Sep 12 10:11:37.052303 containerd[1504]: time="2025-09-12T10:11:37.052263493Z" level=info msg="StopPodSandbox for \"f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd\"" Sep 12 10:11:37.056380 containerd[1504]: time="2025-09-12T10:11:37.052566860Z" level=info msg="Container to stop \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:11:37.058641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd-shm.mount: Deactivated successfully. 
Sep 12 10:11:37.064012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706-rootfs.mount: Deactivated successfully. Sep 12 10:11:37.065823 systemd[1]: cri-containerd-f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd.scope: Deactivated successfully. Sep 12 10:11:37.080123 containerd[1504]: time="2025-09-12T10:11:37.080042809Z" level=info msg="shim disconnected" id=8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706 namespace=k8s.io Sep 12 10:11:37.080123 containerd[1504]: time="2025-09-12T10:11:37.080109245Z" level=warning msg="cleaning up after shim disconnected" id=8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706 namespace=k8s.io Sep 12 10:11:37.080123 containerd[1504]: time="2025-09-12T10:11:37.080121508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:11:37.090811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd-rootfs.mount: Deactivated successfully. 
Sep 12 10:11:37.098466 containerd[1504]: time="2025-09-12T10:11:37.098380277Z" level=info msg="shim disconnected" id=f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd namespace=k8s.io Sep 12 10:11:37.098466 containerd[1504]: time="2025-09-12T10:11:37.098452012Z" level=warning msg="cleaning up after shim disconnected" id=f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd namespace=k8s.io Sep 12 10:11:37.098466 containerd[1504]: time="2025-09-12T10:11:37.098460569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:11:37.102755 containerd[1504]: time="2025-09-12T10:11:37.102628973Z" level=info msg="StopContainer for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" returns successfully" Sep 12 10:11:37.103294 containerd[1504]: time="2025-09-12T10:11:37.103260793Z" level=info msg="StopPodSandbox for \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\"" Sep 12 10:11:37.103361 containerd[1504]: time="2025-09-12T10:11:37.103299006Z" level=info msg="Container to stop \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:11:37.103361 containerd[1504]: time="2025-09-12T10:11:37.103335686Z" level=info msg="Container to stop \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:11:37.103361 containerd[1504]: time="2025-09-12T10:11:37.103344142Z" level=info msg="Container to stop \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:11:37.103361 containerd[1504]: time="2025-09-12T10:11:37.103351837Z" level=info msg="Container to stop \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:11:37.103361 
containerd[1504]: time="2025-09-12T10:11:37.103360684Z" level=info msg="Container to stop \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:11:37.110161 systemd[1]: cri-containerd-66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a.scope: Deactivated successfully. Sep 12 10:11:37.117625 containerd[1504]: time="2025-09-12T10:11:37.117565295Z" level=info msg="TearDown network for sandbox \"f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd\" successfully" Sep 12 10:11:37.117625 containerd[1504]: time="2025-09-12T10:11:37.117605803Z" level=info msg="StopPodSandbox for \"f91617f5968ddea6b0f8d32fd136aed558054fa87929dd373d1b60e9515c7cdd\" returns successfully" Sep 12 10:11:37.139138 containerd[1504]: time="2025-09-12T10:11:37.139049805Z" level=info msg="shim disconnected" id=66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a namespace=k8s.io Sep 12 10:11:37.139138 containerd[1504]: time="2025-09-12T10:11:37.139129176Z" level=warning msg="cleaning up after shim disconnected" id=66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a namespace=k8s.io Sep 12 10:11:37.139138 containerd[1504]: time="2025-09-12T10:11:37.139141559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:11:37.150033 kubelet[2647]: I0912 10:11:37.149972 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjgt4\" (UniqueName: \"kubernetes.io/projected/49277120-1826-4d7c-a0a4-19a41af73ff2-kube-api-access-fjgt4\") pod \"49277120-1826-4d7c-a0a4-19a41af73ff2\" (UID: \"49277120-1826-4d7c-a0a4-19a41af73ff2\") " Sep 12 10:11:37.150033 kubelet[2647]: I0912 10:11:37.150029 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49277120-1826-4d7c-a0a4-19a41af73ff2-cilium-config-path\") pod 
\"49277120-1826-4d7c-a0a4-19a41af73ff2\" (UID: \"49277120-1826-4d7c-a0a4-19a41af73ff2\") " Sep 12 10:11:37.154320 kubelet[2647]: I0912 10:11:37.154265 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49277120-1826-4d7c-a0a4-19a41af73ff2-kube-api-access-fjgt4" (OuterVolumeSpecName: "kube-api-access-fjgt4") pod "49277120-1826-4d7c-a0a4-19a41af73ff2" (UID: "49277120-1826-4d7c-a0a4-19a41af73ff2"). InnerVolumeSpecName "kube-api-access-fjgt4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:11:37.155420 kubelet[2647]: I0912 10:11:37.155372 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49277120-1826-4d7c-a0a4-19a41af73ff2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49277120-1826-4d7c-a0a4-19a41af73ff2" (UID: "49277120-1826-4d7c-a0a4-19a41af73ff2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 10:11:37.156667 containerd[1504]: time="2025-09-12T10:11:37.156616798Z" level=info msg="TearDown network for sandbox \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" successfully" Sep 12 10:11:37.156859 containerd[1504]: time="2025-09-12T10:11:37.156825254Z" level=info msg="StopPodSandbox for \"66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a\" returns successfully" Sep 12 10:11:37.250727 kubelet[2647]: I0912 10:11:37.250552 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-config-path\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.250727 kubelet[2647]: I0912 10:11:37.250619 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-run\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.250727 kubelet[2647]: I0912 10:11:37.250649 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cni-path\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.250727 kubelet[2647]: I0912 10:11:37.250672 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-xtables-lock\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.250727 kubelet[2647]: I0912 10:11:37.250700 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqxv9\" (UniqueName: \"kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-kube-api-access-rqxv9\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252525 kubelet[2647]: I0912 10:11:37.251356 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-lib-modules\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252525 kubelet[2647]: I0912 10:11:37.251458 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfb322bc-be71-4d59-bdeb-f775c4e97943-clustermesh-secrets\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252525 kubelet[2647]: I0912 10:11:37.251487 2647 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-bpf-maps\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252525 kubelet[2647]: I0912 10:11:37.251536 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-etc-cni-netd\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252525 kubelet[2647]: I0912 10:11:37.251562 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-cgroup\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252525 kubelet[2647]: I0912 10:11:37.251582 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-hostproc\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252763 kubelet[2647]: I0912 10:11:37.251609 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-hubble-tls\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252763 kubelet[2647]: I0912 10:11:37.251631 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-net\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: 
\"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252763 kubelet[2647]: I0912 10:11:37.251656 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-kernel\") pod \"dfb322bc-be71-4d59-bdeb-f775c4e97943\" (UID: \"dfb322bc-be71-4d59-bdeb-f775c4e97943\") " Sep 12 10:11:37.252763 kubelet[2647]: I0912 10:11:37.251704 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49277120-1826-4d7c-a0a4-19a41af73ff2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.252763 kubelet[2647]: I0912 10:11:37.251718 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjgt4\" (UniqueName: \"kubernetes.io/projected/49277120-1826-4d7c-a0a4-19a41af73ff2-kube-api-access-fjgt4\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.252763 kubelet[2647]: I0912 10:11:37.250804 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cni-path" (OuterVolumeSpecName: "cni-path") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.252972 kubelet[2647]: I0912 10:11:37.250858 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.252972 kubelet[2647]: I0912 10:11:37.250886 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.252972 kubelet[2647]: I0912 10:11:37.251766 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.252972 kubelet[2647]: I0912 10:11:37.251841 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.252972 kubelet[2647]: I0912 10:11:37.252386 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.253149 kubelet[2647]: I0912 10:11:37.252423 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.253149 kubelet[2647]: I0912 10:11:37.252448 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.253810 kubelet[2647]: I0912 10:11:37.253714 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-hostproc" (OuterVolumeSpecName: "hostproc") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.253810 kubelet[2647]: I0912 10:11:37.253773 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 10:11:37.254937 kubelet[2647]: I0912 10:11:37.254897 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 10:11:37.256130 kubelet[2647]: I0912 10:11:37.256065 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-kube-api-access-rqxv9" (OuterVolumeSpecName: "kube-api-access-rqxv9") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "kube-api-access-rqxv9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:11:37.256200 kubelet[2647]: I0912 10:11:37.256094 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb322bc-be71-4d59-bdeb-f775c4e97943-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 10:11:37.256313 kubelet[2647]: I0912 10:11:37.256253 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dfb322bc-be71-4d59-bdeb-f775c4e97943" (UID: "dfb322bc-be71-4d59-bdeb-f775c4e97943"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 10:11:37.352719 kubelet[2647]: I0912 10:11:37.352656 2647 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352719 kubelet[2647]: I0912 10:11:37.352700 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352719 kubelet[2647]: I0912 10:11:37.352714 2647 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352719 kubelet[2647]: I0912 10:11:37.352730 2647 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352744 2647 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352756 2647 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352767 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 
10:11:37.352781 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352791 2647 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352801 2647 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352813 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rqxv9\" (UniqueName: \"kubernetes.io/projected/dfb322bc-be71-4d59-bdeb-f775c4e97943-kube-api-access-rqxv9\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.352996 kubelet[2647]: I0912 10:11:37.352828 2647 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.353249 kubelet[2647]: I0912 10:11:37.352838 2647 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfb322bc-be71-4d59-bdeb-f775c4e97943-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.353249 kubelet[2647]: I0912 10:11:37.352848 2647 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfb322bc-be71-4d59-bdeb-f775c4e97943-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 10:11:37.477940 kubelet[2647]: E0912 10:11:37.477882 2647 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:11:37.635917 kubelet[2647]: I0912 10:11:37.635869 2647 scope.go:117] "RemoveContainer" containerID="0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3" Sep 12 10:11:37.643614 containerd[1504]: time="2025-09-12T10:11:37.643558536Z" level=info msg="RemoveContainer for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\"" Sep 12 10:11:37.645960 systemd[1]: Removed slice kubepods-besteffort-pod49277120_1826_4d7c_a0a4_19a41af73ff2.slice - libcontainer container kubepods-besteffort-pod49277120_1826_4d7c_a0a4_19a41af73ff2.slice. Sep 12 10:11:37.647602 systemd[1]: Removed slice kubepods-burstable-poddfb322bc_be71_4d59_bdeb_f775c4e97943.slice - libcontainer container kubepods-burstable-poddfb322bc_be71_4d59_bdeb_f775c4e97943.slice. Sep 12 10:11:37.647862 systemd[1]: kubepods-burstable-poddfb322bc_be71_4d59_bdeb_f775c4e97943.slice: Consumed 7.684s CPU time, 125M memory peak, 228K read from disk, 13.3M written to disk. 
Sep 12 10:11:37.650963 containerd[1504]: time="2025-09-12T10:11:37.650837493Z" level=info msg="RemoveContainer for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" returns successfully" Sep 12 10:11:37.651766 kubelet[2647]: I0912 10:11:37.651723 2647 scope.go:117] "RemoveContainer" containerID="0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3" Sep 12 10:11:37.652036 containerd[1504]: time="2025-09-12T10:11:37.651993911Z" level=error msg="ContainerStatus for \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\": not found" Sep 12 10:11:37.652195 kubelet[2647]: E0912 10:11:37.652153 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\": not found" containerID="0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3" Sep 12 10:11:37.652229 kubelet[2647]: I0912 10:11:37.652187 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3"} err="failed to get container status \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0edcb8ffa2dd22249f15594ebcdfa445ad7b3b2c9072a53d688c3a3faa37f5d3\": not found" Sep 12 10:11:37.652269 kubelet[2647]: I0912 10:11:37.652228 2647 scope.go:117] "RemoveContainer" containerID="8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706" Sep 12 10:11:37.653244 containerd[1504]: time="2025-09-12T10:11:37.653206146Z" level=info msg="RemoveContainer for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\"" Sep 12 10:11:37.657984 
containerd[1504]: time="2025-09-12T10:11:37.657891823Z" level=info msg="RemoveContainer for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" returns successfully" Sep 12 10:11:37.658570 kubelet[2647]: I0912 10:11:37.658410 2647 scope.go:117] "RemoveContainer" containerID="cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035" Sep 12 10:11:37.659676 containerd[1504]: time="2025-09-12T10:11:37.659640287Z" level=info msg="RemoveContainer for \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\"" Sep 12 10:11:37.663907 containerd[1504]: time="2025-09-12T10:11:37.663869265Z" level=info msg="RemoveContainer for \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\" returns successfully" Sep 12 10:11:37.664102 kubelet[2647]: I0912 10:11:37.664071 2647 scope.go:117] "RemoveContainer" containerID="bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9" Sep 12 10:11:37.665063 containerd[1504]: time="2025-09-12T10:11:37.665030874Z" level=info msg="RemoveContainer for \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\"" Sep 12 10:11:37.669454 containerd[1504]: time="2025-09-12T10:11:37.669403195Z" level=info msg="RemoveContainer for \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\" returns successfully" Sep 12 10:11:37.669692 kubelet[2647]: I0912 10:11:37.669651 2647 scope.go:117] "RemoveContainer" containerID="301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6" Sep 12 10:11:37.670955 containerd[1504]: time="2025-09-12T10:11:37.670925428Z" level=info msg="RemoveContainer for \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\"" Sep 12 10:11:37.674539 containerd[1504]: time="2025-09-12T10:11:37.674472612Z" level=info msg="RemoveContainer for \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\" returns successfully" Sep 12 10:11:37.674731 kubelet[2647]: I0912 10:11:37.674691 2647 scope.go:117] "RemoveContainer" 
containerID="c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33" Sep 12 10:11:37.676906 containerd[1504]: time="2025-09-12T10:11:37.676871311Z" level=info msg="RemoveContainer for \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\"" Sep 12 10:11:37.680633 containerd[1504]: time="2025-09-12T10:11:37.680603606Z" level=info msg="RemoveContainer for \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\" returns successfully" Sep 12 10:11:37.680812 kubelet[2647]: I0912 10:11:37.680772 2647 scope.go:117] "RemoveContainer" containerID="8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706" Sep 12 10:11:37.681168 containerd[1504]: time="2025-09-12T10:11:37.681111772Z" level=error msg="ContainerStatus for \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\": not found" Sep 12 10:11:37.681398 kubelet[2647]: E0912 10:11:37.681356 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\": not found" containerID="8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706" Sep 12 10:11:37.681471 kubelet[2647]: I0912 10:11:37.681409 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706"} err="failed to get container status \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ee6feaf1c02cede8cb603db965c6c30403ed7eb36429958cb39df4601f3a706\": not found" Sep 12 10:11:37.681471 kubelet[2647]: I0912 10:11:37.681450 2647 scope.go:117] "RemoveContainer" 
containerID="cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035" Sep 12 10:11:37.681738 containerd[1504]: time="2025-09-12T10:11:37.681699258Z" level=error msg="ContainerStatus for \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\": not found" Sep 12 10:11:37.681876 kubelet[2647]: E0912 10:11:37.681849 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\": not found" containerID="cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035" Sep 12 10:11:37.681921 kubelet[2647]: I0912 10:11:37.681884 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035"} err="failed to get container status \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfd2c8973d2c3345182768123091c165da7e3756d12be48ef6195c0858047035\": not found" Sep 12 10:11:37.681921 kubelet[2647]: I0912 10:11:37.681910 2647 scope.go:117] "RemoveContainer" containerID="bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9" Sep 12 10:11:37.682139 containerd[1504]: time="2025-09-12T10:11:37.682099509Z" level=error msg="ContainerStatus for \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\": not found" Sep 12 10:11:37.682342 kubelet[2647]: E0912 10:11:37.682307 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\": not found" containerID="bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9" Sep 12 10:11:37.682385 kubelet[2647]: I0912 10:11:37.682350 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9"} err="failed to get container status \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcfd843237f9ab624d5fd0dd4f27f1fb99064200281c414b45475506c1f98bb9\": not found" Sep 12 10:11:37.682385 kubelet[2647]: I0912 10:11:37.682378 2647 scope.go:117] "RemoveContainer" containerID="301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6" Sep 12 10:11:37.682600 containerd[1504]: time="2025-09-12T10:11:37.682567229Z" level=error msg="ContainerStatus for \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\": not found" Sep 12 10:11:37.682738 kubelet[2647]: E0912 10:11:37.682711 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\": not found" containerID="301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6" Sep 12 10:11:37.682779 kubelet[2647]: I0912 10:11:37.682743 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6"} err="failed to get container status \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"301a7c598c09f9fe86f70c95e92ef672da32ae01dd358a3f37ca8dfddf1902a6\": not found" Sep 12 10:11:37.682779 kubelet[2647]: I0912 10:11:37.682762 2647 scope.go:117] "RemoveContainer" containerID="c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33" Sep 12 10:11:37.682965 containerd[1504]: time="2025-09-12T10:11:37.682932903Z" level=error msg="ContainerStatus for \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\": not found" Sep 12 10:11:37.683065 kubelet[2647]: E0912 10:11:37.683043 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\": not found" containerID="c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33" Sep 12 10:11:37.683104 kubelet[2647]: I0912 10:11:37.683067 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33"} err="failed to get container status \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\": rpc error: code = NotFound desc = an error occurred when try to find container \"c35631bb9abd1e509fcb873ea98ac92397c08ead9920e646d2c0ee81e892bf33\": not found" Sep 12 10:11:37.984191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a-rootfs.mount: Deactivated successfully. Sep 12 10:11:37.984352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66288f776a2f7b3bcbb0eab221f7688674afd8bb9ee50278b48675ee431eb64a-shm.mount: Deactivated successfully. 
Sep 12 10:11:37.984442 systemd[1]: var-lib-kubelet-pods-49277120\x2d1826\x2d4d7c\x2da0a4\x2d19a41af73ff2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjgt4.mount: Deactivated successfully. Sep 12 10:11:37.984558 systemd[1]: var-lib-kubelet-pods-dfb322bc\x2dbe71\x2d4d59\x2dbdeb\x2df775c4e97943-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drqxv9.mount: Deactivated successfully. Sep 12 10:11:37.984648 systemd[1]: var-lib-kubelet-pods-dfb322bc\x2dbe71\x2d4d59\x2dbdeb\x2df775c4e97943-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 10:11:37.984733 systemd[1]: var-lib-kubelet-pods-dfb322bc\x2dbe71\x2d4d59\x2dbdeb\x2df775c4e97943-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 10:11:38.427093 kubelet[2647]: I0912 10:11:38.427017 2647 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49277120-1826-4d7c-a0a4-19a41af73ff2" path="/var/lib/kubelet/pods/49277120-1826-4d7c-a0a4-19a41af73ff2/volumes" Sep 12 10:11:38.427690 kubelet[2647]: I0912 10:11:38.427665 2647 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb322bc-be71-4d59-bdeb-f775c4e97943" path="/var/lib/kubelet/pods/dfb322bc-be71-4d59-bdeb-f775c4e97943/volumes" Sep 12 10:11:38.953490 sshd[4346]: Connection closed by 10.0.0.1 port 55242 Sep 12 10:11:38.954200 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:38.966533 systemd[1]: sshd@26-10.0.0.72:22-10.0.0.1:55242.service: Deactivated successfully. Sep 12 10:11:38.969080 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 10:11:38.971232 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit. Sep 12 10:11:38.978848 systemd[1]: Started sshd@27-10.0.0.72:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256). Sep 12 10:11:38.980250 systemd-logind[1490]: Removed session 27. 
Sep 12 10:11:39.026107 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:39.028000 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:39.036100 systemd-logind[1490]: New session 28 of user core. Sep 12 10:11:39.045893 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 10:11:39.458554 sshd[4511]: Connection closed by 10.0.0.1 port 55256 Sep 12 10:11:39.458522 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:39.476122 systemd[1]: sshd@27-10.0.0.72:22-10.0.0.1:55256.service: Deactivated successfully. Sep 12 10:11:39.479347 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 10:11:39.480363 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit. Sep 12 10:11:39.491845 systemd[1]: Started sshd@28-10.0.0.72:22-10.0.0.1:55260.service - OpenSSH per-connection server daemon (10.0.0.1:55260). Sep 12 10:11:39.492562 systemd-logind[1490]: Removed session 28. Sep 12 10:11:39.531827 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 55260 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:39.533672 sshd-session[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:39.539098 systemd-logind[1490]: New session 29 of user core. Sep 12 10:11:39.556674 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 12 10:11:39.608941 sshd[4527]: Connection closed by 10.0.0.1 port 55260 Sep 12 10:11:39.609396 sshd-session[4523]: pam_unix(sshd:session): session closed for user core Sep 12 10:11:39.622373 systemd[1]: sshd@28-10.0.0.72:22-10.0.0.1:55260.service: Deactivated successfully. Sep 12 10:11:39.624822 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 10:11:39.627047 systemd-logind[1490]: Session 29 logged out. Waiting for processes to exit. 
Sep 12 10:11:39.641381 systemd[1]: Started sshd@29-10.0.0.72:22-10.0.0.1:55274.service - OpenSSH per-connection server daemon (10.0.0.1:55274). Sep 12 10:11:39.642970 systemd-logind[1490]: Removed session 29. Sep 12 10:11:39.679425 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 55274 ssh2: RSA SHA256:TnEZHMsSP7ubTz8ncmkUtKou03xTTOKVKcLGnYmsDtY Sep 12 10:11:39.681096 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:11:39.687105 systemd-logind[1490]: New session 30 of user core. Sep 12 10:11:39.703834 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 12 10:11:39.741965 systemd[1]: Created slice kubepods-burstable-podbfe646f1_e4b6_4c11_ba5e_2796d7dd4d41.slice - libcontainer container kubepods-burstable-podbfe646f1_e4b6_4c11_ba5e_2796d7dd4d41.slice. Sep 12 10:11:39.769407 kubelet[2647]: I0912 10:11:39.769338 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-etc-cni-netd\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf" Sep 12 10:11:39.769948 kubelet[2647]: I0912 10:11:39.769443 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-host-proc-sys-net\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf" Sep 12 10:11:39.769948 kubelet[2647]: I0912 10:11:39.769473 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-host-proc-sys-kernel\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf" Sep 12 
10:11:39.769948 kubelet[2647]: I0912 10:11:39.769494 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-hubble-tls\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.769948 kubelet[2647]: I0912 10:11:39.769536 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-bpf-maps\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.769948 kubelet[2647]: I0912 10:11:39.769555 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-cni-path\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.769948 kubelet[2647]: I0912 10:11:39.769579 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-lib-modules\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770134 kubelet[2647]: I0912 10:11:39.769642 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-cilium-ipsec-secrets\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770134 kubelet[2647]: I0912 10:11:39.769666 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-hostproc\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770134 kubelet[2647]: I0912 10:11:39.769689 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-cilium-cgroup\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770134 kubelet[2647]: I0912 10:11:39.769711 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-xtables-lock\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770134 kubelet[2647]: I0912 10:11:39.769760 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-clustermesh-secrets\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770134 kubelet[2647]: I0912 10:11:39.769787 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-cilium-run\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770263 kubelet[2647]: I0912 10:11:39.769808 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-cilium-config-path\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:39.770263 kubelet[2647]: I0912 10:11:39.769834 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4pp\" (UniqueName: \"kubernetes.io/projected/bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41-kube-api-access-hb4pp\") pod \"cilium-2wmpf\" (UID: \"bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41\") " pod="kube-system/cilium-2wmpf"
Sep 12 10:11:40.045374 kubelet[2647]: E0912 10:11:40.045167 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:40.046125 containerd[1504]: time="2025-09-12T10:11:40.045930655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wmpf,Uid:bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41,Namespace:kube-system,Attempt:0,}"
Sep 12 10:11:40.077842 containerd[1504]: time="2025-09-12T10:11:40.077676915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 10:11:40.077842 containerd[1504]: time="2025-09-12T10:11:40.077754643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 10:11:40.077842 containerd[1504]: time="2025-09-12T10:11:40.077766886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:11:40.078107 containerd[1504]: time="2025-09-12T10:11:40.077857528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 10:11:40.102781 systemd[1]: Started cri-containerd-11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487.scope - libcontainer container 11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487.
Sep 12 10:11:40.132045 containerd[1504]: time="2025-09-12T10:11:40.131978250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wmpf,Uid:bfe646f1-e4b6-4c11-ba5e-2796d7dd4d41,Namespace:kube-system,Attempt:0,} returns sandbox id \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\""
Sep 12 10:11:40.132891 kubelet[2647]: E0912 10:11:40.132855 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:40.140667 containerd[1504]: time="2025-09-12T10:11:40.140612387Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 10:11:40.292408 containerd[1504]: time="2025-09-12T10:11:40.292333154Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507\""
Sep 12 10:11:40.294077 containerd[1504]: time="2025-09-12T10:11:40.292947070Z" level=info msg="StartContainer for \"265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507\""
Sep 12 10:11:40.329757 systemd[1]: Started cri-containerd-265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507.scope - libcontainer container 265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507.
Sep 12 10:11:40.361366 containerd[1504]: time="2025-09-12T10:11:40.361306593Z" level=info msg="StartContainer for \"265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507\" returns successfully"
Sep 12 10:11:40.372585 systemd[1]: cri-containerd-265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507.scope: Deactivated successfully.
Sep 12 10:11:40.414653 containerd[1504]: time="2025-09-12T10:11:40.414564716Z" level=info msg="shim disconnected" id=265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507 namespace=k8s.io
Sep 12 10:11:40.414653 containerd[1504]: time="2025-09-12T10:11:40.414641522Z" level=warning msg="cleaning up after shim disconnected" id=265ea3608f6f89bd809849ce0197167aade4aed4977a5a3da49e8efac7e79507 namespace=k8s.io
Sep 12 10:11:40.414653 containerd[1504]: time="2025-09-12T10:11:40.414653264Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:11:40.653236 kubelet[2647]: E0912 10:11:40.653085 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:40.658679 containerd[1504]: time="2025-09-12T10:11:40.658182436Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 10:11:40.675871 containerd[1504]: time="2025-09-12T10:11:40.675799904Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420\""
Sep 12 10:11:40.676527 containerd[1504]: time="2025-09-12T10:11:40.676463825Z" level=info msg="StartContainer for \"73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420\""
Sep 12 10:11:40.706895 systemd[1]: Started cri-containerd-73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420.scope - libcontainer container 73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420.
Sep 12 10:11:40.740864 containerd[1504]: time="2025-09-12T10:11:40.740808193Z" level=info msg="StartContainer for \"73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420\" returns successfully"
Sep 12 10:11:40.748581 systemd[1]: cri-containerd-73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420.scope: Deactivated successfully.
Sep 12 10:11:40.780392 containerd[1504]: time="2025-09-12T10:11:40.780156881Z" level=info msg="shim disconnected" id=73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420 namespace=k8s.io
Sep 12 10:11:40.780392 containerd[1504]: time="2025-09-12T10:11:40.780220893Z" level=warning msg="cleaning up after shim disconnected" id=73ce666787ba9bd867a2a2f0f475aaed29162c0e22d11f112f6c06f170e08420 namespace=k8s.io
Sep 12 10:11:40.780392 containerd[1504]: time="2025-09-12T10:11:40.780229529Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:11:41.656630 kubelet[2647]: E0912 10:11:41.656582 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:41.734398 containerd[1504]: time="2025-09-12T10:11:41.734326752Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 10:11:41.889687 containerd[1504]: time="2025-09-12T10:11:41.889621038Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90\""
Sep 12 10:11:41.890205 containerd[1504]: time="2025-09-12T10:11:41.890172826Z" level=info msg="StartContainer for \"ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90\""
Sep 12 10:11:41.923675 systemd[1]: Started cri-containerd-ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90.scope - libcontainer container ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90.
Sep 12 10:11:41.964739 systemd[1]: cri-containerd-ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90.scope: Deactivated successfully.
Sep 12 10:11:42.000283 containerd[1504]: time="2025-09-12T10:11:42.000221851Z" level=info msg="StartContainer for \"ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90\" returns successfully"
Sep 12 10:11:42.022571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90-rootfs.mount: Deactivated successfully.
Sep 12 10:11:42.057694 containerd[1504]: time="2025-09-12T10:11:42.057595499Z" level=info msg="shim disconnected" id=ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90 namespace=k8s.io
Sep 12 10:11:42.057694 containerd[1504]: time="2025-09-12T10:11:42.057671764Z" level=warning msg="cleaning up after shim disconnected" id=ba60c3df7dfadd30bb04b7af0a5732e7b51b55c38edb42a467f340822573ca90 namespace=k8s.io
Sep 12 10:11:42.057694 containerd[1504]: time="2025-09-12T10:11:42.057684678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:11:42.478694 kubelet[2647]: E0912 10:11:42.478639 2647 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 10:11:42.661221 kubelet[2647]: E0912 10:11:42.661160 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:42.846421 containerd[1504]: time="2025-09-12T10:11:42.846315172Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 10:11:43.113840 containerd[1504]: time="2025-09-12T10:11:43.113684352Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b\""
Sep 12 10:11:43.114806 containerd[1504]: time="2025-09-12T10:11:43.114732220Z" level=info msg="StartContainer for \"ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b\""
Sep 12 10:11:43.148718 systemd[1]: Started cri-containerd-ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b.scope - libcontainer container ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b.
Sep 12 10:11:43.181068 systemd[1]: cri-containerd-ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b.scope: Deactivated successfully.
Sep 12 10:11:43.183181 containerd[1504]: time="2025-09-12T10:11:43.183142028Z" level=info msg="StartContainer for \"ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b\" returns successfully"
Sep 12 10:11:43.203753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b-rootfs.mount: Deactivated successfully.
Sep 12 10:11:43.207773 containerd[1504]: time="2025-09-12T10:11:43.207692092Z" level=info msg="shim disconnected" id=ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b namespace=k8s.io
Sep 12 10:11:43.207773 containerd[1504]: time="2025-09-12T10:11:43.207764419Z" level=warning msg="cleaning up after shim disconnected" id=ad7f2a238e3242a0371bebadff5929f3460a3ecce1783d484031ff80dc0fca5b namespace=k8s.io
Sep 12 10:11:43.207773 containerd[1504]: time="2025-09-12T10:11:43.207774127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 10:11:43.664867 kubelet[2647]: E0912 10:11:43.664834 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:43.876514 containerd[1504]: time="2025-09-12T10:11:43.876430979Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 10:11:44.019071 containerd[1504]: time="2025-09-12T10:11:44.018935053Z" level=info msg="CreateContainer within sandbox \"11d422482251516fbb9381bec70937d5a09ce6feba3d74f8330bce789cd2b487\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e59fcd77b17d68ae6d5b58b466e856a17581c1f85d841c5f837724b138afddd5\""
Sep 12 10:11:44.019850 containerd[1504]: time="2025-09-12T10:11:44.019817677Z" level=info msg="StartContainer for \"e59fcd77b17d68ae6d5b58b466e856a17581c1f85d841c5f837724b138afddd5\""
Sep 12 10:11:44.049633 systemd[1]: Started cri-containerd-e59fcd77b17d68ae6d5b58b466e856a17581c1f85d841c5f837724b138afddd5.scope - libcontainer container e59fcd77b17d68ae6d5b58b466e856a17581c1f85d841c5f837724b138afddd5.
Sep 12 10:11:44.140661 containerd[1504]: time="2025-09-12T10:11:44.140586166Z" level=info msg="StartContainer for \"e59fcd77b17d68ae6d5b58b466e856a17581c1f85d841c5f837724b138afddd5\" returns successfully"
Sep 12 10:11:44.544541 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 10:11:44.670833 kubelet[2647]: E0912 10:11:44.670738 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:45.279108 kubelet[2647]: I0912 10:11:45.279037 2647 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:11:45Z","lastTransitionTime":"2025-09-12T10:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 10:11:45.424535 kubelet[2647]: E0912 10:11:45.424451 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:46.046264 kubelet[2647]: E0912 10:11:46.046223 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:47.792623 systemd-networkd[1421]: lxc_health: Link UP
Sep 12 10:11:47.797672 systemd-networkd[1421]: lxc_health: Gained carrier
Sep 12 10:11:48.047804 kubelet[2647]: E0912 10:11:48.047642 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:48.068005 kubelet[2647]: I0912 10:11:48.067875 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wmpf" podStartSLOduration=9.067859022 podStartE2EDuration="9.067859022s" podCreationTimestamp="2025-09-12 10:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:11:44.873364609 +0000 UTC m=+92.558386479" watchObservedRunningTime="2025-09-12 10:11:48.067859022 +0000 UTC m=+95.752880862"
Sep 12 10:11:48.458248 systemd[1]: run-containerd-runc-k8s.io-e59fcd77b17d68ae6d5b58b466e856a17581c1f85d841c5f837724b138afddd5-runc.2AKVBU.mount: Deactivated successfully.
Sep 12 10:11:48.534720 kubelet[2647]: E0912 10:11:48.534661 2647 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43796->127.0.0.1:45189: write tcp 127.0.0.1:43796->127.0.0.1:45189: write: broken pipe
Sep 12 10:11:48.678936 kubelet[2647]: E0912 10:11:48.678864 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:49.374838 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Sep 12 10:11:49.681714 kubelet[2647]: E0912 10:11:49.681558 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:54.424786 kubelet[2647]: E0912 10:11:54.424688 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 10:11:54.890106 sshd[4536]: Connection closed by 10.0.0.1 port 55274
Sep 12 10:11:54.890608 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Sep 12 10:11:54.895293 systemd[1]: sshd@29-10.0.0.72:22-10.0.0.1:55274.service: Deactivated successfully.
Sep 12 10:11:54.897850 systemd[1]: session-30.scope: Deactivated successfully.
Sep 12 10:11:54.898626 systemd-logind[1490]: Session 30 logged out. Waiting for processes to exit.
Sep 12 10:11:54.899614 systemd-logind[1490]: Removed session 30.
Sep 12 10:11:55.423938 kubelet[2647]: E0912 10:11:55.423886 2647 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"