Sep 5 00:22:13.933695 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025
Sep 5 00:22:13.933717 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:22:13.933731 kernel: BIOS-provided physical RAM map:
Sep 5 00:22:13.933740 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:22:13.933748 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:22:13.933756 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:22:13.933767 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:22:13.933775 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:22:13.933781 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:22:13.933791 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:22:13.933797 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:22:13.933804 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:22:13.933814 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:22:13.933821 kernel: NX (Execute Disable) protection: active
Sep 5 00:22:13.933829 kernel: APIC: Static calls initialized
Sep 5 00:22:13.933841 kernel: SMBIOS 2.8 present.
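As a quick cross-check of the BIOS-e820 map above, the two `usable` ranges can be summed; this is only a sketch with the values copied verbatim from the log, not kernel code. The total lands just above the 2571752K figure the kernel reports later in its `Memory:` line, the small difference being memory the kernel itself trims (e.g. page 0, which e820 later re-marks reserved).

```python
# Usable RAM ranges copied verbatim from the BIOS-e820 map above.
# e820 ranges are inclusive on both ends, so each size is end - start + 1.
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000009cfdbfff),
]
total = sum(end - start + 1 for start, end in usable)
print(total, "bytes =", total // 1024, "KiB")  # 2633481216 bytes = 2571759 KiB
```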
Sep 5 00:22:13.933848 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:22:13.933855 kernel: Hypervisor detected: KVM
Sep 5 00:22:13.933862 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:22:13.933869 kernel: kvm-clock: using sched offset of 2815413571 cycles
Sep 5 00:22:13.933876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:22:13.933883 kernel: tsc: Detected 2794.750 MHz processor
Sep 5 00:22:13.933890 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:22:13.933898 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:22:13.933908 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:22:13.933915 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:22:13.933922 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:22:13.933929 kernel: Using GB pages for direct mapping
Sep 5 00:22:13.933936 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:22:13.933943 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:22:13.933950 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.933957 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.933964 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.933973 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:22:13.933980 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.933987 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.933994 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.934001 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:22:13.934025 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:22:13.934033 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:22:13.934044 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:22:13.934054 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:22:13.934061 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:22:13.934068 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:22:13.934076 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:22:13.934085 kernel: No NUMA configuration found
Sep 5 00:22:13.934093 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:22:13.934103 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 5 00:22:13.934110 kernel: Zone ranges:
Sep 5 00:22:13.934117 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:22:13.934124 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:22:13.934132 kernel: Normal empty
Sep 5 00:22:13.934139 kernel: Movable zone start for each node
Sep 5 00:22:13.934146 kernel: Early memory node ranges
Sep 5 00:22:13.934153 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:22:13.934160 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:22:13.934168 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:22:13.934178 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:22:13.934187 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:22:13.934194 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:22:13.934201 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:22:13.934209 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:22:13.934216 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:22:13.934223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:22:13.934231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:22:13.934238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:22:13.934248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:22:13.934255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:22:13.934262 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:22:13.934270 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:22:13.934277 kernel: TSC deadline timer available
Sep 5 00:22:13.934292 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 5 00:22:13.934300 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:22:13.934307 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:22:13.934316 kernel: kvm-guest: setup PV sched yield
Sep 5 00:22:13.934327 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:22:13.934334 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:22:13.934342 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:22:13.934349 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:22:13.934357 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 5 00:22:13.934364 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 5 00:22:13.934371 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:22:13.934378 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:22:13.934385 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:22:13.934396 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:22:13.934404 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:22:13.934411 kernel: random: crng init done
Sep 5 00:22:13.934419 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:22:13.934426 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:22:13.934433 kernel: Fallback order for Node 0: 0
Sep 5 00:22:13.934440 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 5 00:22:13.934448 kernel: Policy zone: DMA32
Sep 5 00:22:13.934458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:22:13.934465 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Sep 5 00:22:13.934473 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:22:13.934480 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 5 00:22:13.934487 kernel: ftrace: allocated 149 pages with 4 groups
Sep 5 00:22:13.934494 kernel: Dynamic Preempt: voluntary
Sep 5 00:22:13.934502 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:22:13.934510 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:22:13.934517 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:22:13.934527 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:22:13.934534 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:22:13.934542 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:22:13.934549 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
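The kernel command line printed above splits cleanly into key/value parameters; the snippet below is only a sketch of such parsing, with the string copied verbatim from the log (including the duplicated `rootflags=rw mount.usrflags=ro` tokens, which appear twice in the logged line).

```python
# Kernel command line exactly as printed in the log above.
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected "
    "verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5"
)
params = {}
for token in cmdline.split():
    # Split on the first '=' only, so values like LABEL=ROOT stay intact.
    key, _, value = token.partition("=")
    params[key] = value  # a repeated key keeps its last value
print(params["root"], params["console"])
```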
Sep 5 00:22:13.934559 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:22:13.934566 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:22:13.934573 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:22:13.934580 kernel: Console: colour VGA+ 80x25
Sep 5 00:22:13.934588 kernel: printk: console [ttyS0] enabled
Sep 5 00:22:13.934599 kernel: ACPI: Core revision 20230628
Sep 5 00:22:13.934613 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:22:13.934625 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:22:13.934635 kernel: x2apic enabled
Sep 5 00:22:13.934644 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:22:13.934654 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:22:13.934664 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:22:13.934674 kernel: kvm-guest: setup PV IPIs
Sep 5 00:22:13.934694 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:22:13.934702 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 5 00:22:13.934710 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 5 00:22:13.934717 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:22:13.934727 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:22:13.934735 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:22:13.934742 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:22:13.934750 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:22:13.934758 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:22:13.934767 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:22:13.934775 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:22:13.934787 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:22:13.934795 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:22:13.934803 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:22:13.934811 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:22:13.934819 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:22:13.934827 kernel: active return thunk: srso_return_thunk
Sep 5 00:22:13.934837 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:22:13.934845 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:22:13.934852 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:22:13.934860 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:22:13.934868 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:22:13.934876 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:22:13.934883 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:22:13.934891 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:22:13.934898 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:22:13.934908 kernel: landlock: Up and running.
Sep 5 00:22:13.934916 kernel: SELinux: Initializing.
Sep 5 00:22:13.934923 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:22:13.934931 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:22:13.934939 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:22:13.934947 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:22:13.934954 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:22:13.934962 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:22:13.934972 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:22:13.934982 kernel: ... version: 0
Sep 5 00:22:13.934990 kernel: ... bit width: 48
Sep 5 00:22:13.934998 kernel: ... generic registers: 6
Sep 5 00:22:13.935005 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:22:13.935084 kernel: ... max period: 00007fffffffffff
Sep 5 00:22:13.935092 kernel: ... fixed-purpose events: 0
Sep 5 00:22:13.935099 kernel: ... event mask: 000000000000003f
Sep 5 00:22:13.935107 kernel: signal: max sigframe size: 1776
Sep 5 00:22:13.935114 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:22:13.935126 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:22:13.935133 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:22:13.935141 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:22:13.935149 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:22:13.935156 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:22:13.935164 kernel: smpboot: Max logical packages: 1
Sep 5 00:22:13.935171 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 5 00:22:13.935179 kernel: devtmpfs: initialized
Sep 5 00:22:13.935187 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:22:13.935197 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:22:13.935204 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:22:13.935212 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:22:13.935220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:22:13.935227 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:22:13.935235 kernel: audit: type=2000 audit(1757031733.844:1): state=initialized audit_enabled=0 res=1
Sep 5 00:22:13.935242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:22:13.935250 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:22:13.935258 kernel: cpuidle: using governor menu
Sep 5 00:22:13.935268 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:22:13.935275 kernel: dca service started, version 1.12.1
Sep 5 00:22:13.935290 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 5 00:22:13.935298 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:22:13.935306 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:22:13.935313 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 5 00:22:13.935321 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:22:13.935329 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:22:13.935336 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:22:13.935347 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:22:13.935354 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:22:13.935362 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:22:13.935369 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:22:13.935377 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:22:13.935385 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 5 00:22:13.935392 kernel: ACPI: Interpreter enabled
Sep 5 00:22:13.935400 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:22:13.935407 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:22:13.935418 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:22:13.935425 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:22:13.935433 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:22:13.935440 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:22:13.935648 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:22:13.935795 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:22:13.935932 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:22:13.935952 kernel: PCI host bridge to bus 0000:00
Sep 5 00:22:13.936142 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:22:13.936277 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:22:13.936412 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:22:13.936540 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:22:13.936662 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:22:13.936792 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:22:13.936927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:22:13.937120 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 5 00:22:13.937273 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 5 00:22:13.937417 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:22:13.937562 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:22:13.937688 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:22:13.937828 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:22:13.937980 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:22:13.938126 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 5 00:22:13.938252 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:22:13.938388 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:22:13.938544 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 5 00:22:13.938678 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 5 00:22:13.938804 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:22:13.938935 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:22:13.939135 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 5 00:22:13.939268 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 5 00:22:13.939404 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:22:13.939529 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:22:13.939659 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:22:13.939803 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 5 00:22:13.939938 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:22:13.940098 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 5 00:22:13.940226 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 5 00:22:13.940361 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:22:13.940496 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 5 00:22:13.940621 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 5 00:22:13.940637 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:22:13.940645 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:22:13.940652 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:22:13.940660 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:22:13.940668 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:22:13.940676 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:22:13.940683 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:22:13.940691 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:22:13.940699 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:22:13.940709 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:22:13.940717 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:22:13.940725 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:22:13.940732 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:22:13.940740 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:22:13.940747 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:22:13.940755 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:22:13.940762 kernel: iommu: Default domain type: Translated
Sep 5 00:22:13.940770 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:22:13.940781 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:22:13.940788 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:22:13.940796 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:22:13.940804 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:22:13.940929 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:22:13.941072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:22:13.941198 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:22:13.941209 kernel: vgaarb: loaded
Sep 5 00:22:13.941221 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:22:13.941229 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:22:13.941237 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:22:13.941244 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:22:13.941252 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:22:13.941260 kernel: pnp: PnP ACPI init
Sep 5 00:22:13.941417 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:22:13.941430 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:22:13.941442 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:22:13.941450 kernel: NET: Registered PF_INET protocol family
Sep 5 00:22:13.941457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:22:13.941465 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:22:13.941473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:22:13.941481 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:22:13.941489 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:22:13.941497 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:22:13.941504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:22:13.941515 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:22:13.941523 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:22:13.941530 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:22:13.941653 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:22:13.941767 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:22:13.941881 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:22:13.941995 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:22:13.942130 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:22:13.942251 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:22:13.942261 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:22:13.942269 kernel: Initialise system trusted keyrings
Sep 5 00:22:13.942277 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:22:13.942294 kernel: Key type asymmetric registered
Sep 5 00:22:13.942302 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:22:13.942310 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 5 00:22:13.942318 kernel: io scheduler mq-deadline registered
Sep 5 00:22:13.942325 kernel: io scheduler kyber registered
Sep 5 00:22:13.942333 kernel: io scheduler bfq registered
Sep 5 00:22:13.942344 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:22:13.942352 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:22:13.942360 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:22:13.942368 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:22:13.942375 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:22:13.942383 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:22:13.942391 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:22:13.942399 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:22:13.942406 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:22:13.942560 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:22:13.942572 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:22:13.942689 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:22:13.942808 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:22:13 UTC (1757031733)
Sep 5 00:22:13.942926 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:22:13.942936 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:22:13.942943 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:22:13.942955 kernel: Segment Routing with IPv6
Sep 5 00:22:13.942963 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:22:13.942971 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:22:13.942978 kernel: Key type dns_resolver registered
Sep 5 00:22:13.942986 kernel: IPI shorthand broadcast: enabled
Sep 5 00:22:13.942994 kernel: sched_clock: Marking stable (735002625, 142184436)->(952351158, -75164097)
Sep 5 00:22:13.943001 kernel: registered taskstats version 1
Sep 5 00:22:13.943009 kernel: Loading compiled-in X.509 certificates
Sep 5 00:22:13.943084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c'
Sep 5 00:22:13.943095 kernel: Key type .fscrypt registered
Sep 5 00:22:13.943102 kernel: Key type fscrypt-provisioning registered
Sep 5 00:22:13.943110 kernel: ima: No TPM chip found, activating TPM-bypass!
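The `rtc_cmos` record above prints both the human-readable wall-clock time and the matching epoch seconds; a one-liner confirms the two values are consistent (epoch value copied from the log):

```python
from datetime import datetime, timezone

# Epoch value printed by rtc_cmos in the log above.
boot_time = datetime.fromtimestamp(1757031733, tz=timezone.utc)
print(boot_time.isoformat())  # 2025-09-05T00:22:13+00:00
```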
Sep 5 00:22:13.943118 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:22:13.943126 kernel: ima: No architecture policies found
Sep 5 00:22:13.943134 kernel: clk: Disabling unused clocks
Sep 5 00:22:13.943141 kernel: Freeing unused kernel image (initmem) memory: 42872K
Sep 5 00:22:13.943149 kernel: Write protecting the kernel read-only data: 36864k
Sep 5 00:22:13.943157 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 5 00:22:13.943167 kernel: Run /init as init process
Sep 5 00:22:13.943175 kernel: with arguments:
Sep 5 00:22:13.943183 kernel: /init
Sep 5 00:22:13.943190 kernel: with environment:
Sep 5 00:22:13.943198 kernel: HOME=/
Sep 5 00:22:13.943205 kernel: TERM=linux
Sep 5 00:22:13.943213 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:22:13.943223 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:22:13.943235 systemd[1]: Detected virtualization kvm.
Sep 5 00:22:13.943243 systemd[1]: Detected architecture x86-64.
Sep 5 00:22:13.943252 systemd[1]: Running in initrd.
Sep 5 00:22:13.943260 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:22:13.943268 systemd[1]: Hostname set to .
Sep 5 00:22:13.943276 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:22:13.943293 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:22:13.943302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:22:13.943313 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:22:13.943322 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:22:13.943343 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:22:13.943354 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:22:13.943363 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:22:13.943375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:22:13.943384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:22:13.943392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:22:13.943401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:22:13.943409 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:22:13.943417 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:22:13.943425 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:22:13.943434 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:22:13.943445 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:22:13.943453 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:22:13.943462 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:22:13.943470 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:22:13.943479 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:22:13.943487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:22:13.943496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:22:13.943504 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:22:13.943512 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:22:13.943523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:22:13.943532 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:22:13.943540 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:22:13.943548 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:22:13.943557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:22:13.943565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:22:13.943574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:22:13.943583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:22:13.943596 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:22:13.943607 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:22:13.943616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:22:13.943646 systemd-journald[190]: Collecting audit messages is disabled.
Sep 5 00:22:13.943665 systemd-journald[190]: Journal started
Sep 5 00:22:13.943686 systemd-journald[190]: Runtime Journal (/run/log/journal/89f213cc75934262bc819bb5a5ebb100) is 6.0M, max 48.4M, 42.3M free.
Sep 5 00:22:13.928605 systemd-modules-load[194]: Inserted module 'overlay'
Sep 5 00:22:13.973006 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:22:13.973038 kernel: Bridge firewalling registered
Sep 5 00:22:13.973049 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:22:13.962251 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 5 00:22:13.973262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:22:13.975377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:22:13.986256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:22:13.988099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:22:13.989616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:22:13.992182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:22:14.003123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:22:14.007032 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:22:14.009931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:22:14.011375 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:22:14.025178 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:22:14.029076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:22:14.037057 dracut-cmdline[226]: dracut-dracut-053
Sep 5 00:22:14.040514 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:22:14.065306 systemd-resolved[229]: Positive Trust Anchors:
Sep 5 00:22:14.065320 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:22:14.065358 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:22:14.068382 systemd-resolved[229]: Defaulting to hostname 'linux'.
Sep 5 00:22:14.069542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:22:14.074487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:22:14.139049 kernel: SCSI subsystem initialized
Sep 5 00:22:14.149035 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:22:14.160046 kernel: iscsi: registered transport (tcp)
Sep 5 00:22:14.180226 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:22:14.180255 kernel: QLogic iSCSI HBA Driver
Sep 5 00:22:14.228669 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:22:14.238147 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:22:14.262284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:22:14.262326 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:22:14.263310 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:22:14.304043 kernel: raid6: avx2x4 gen() 30194 MB/s
Sep 5 00:22:14.321041 kernel: raid6: avx2x2 gen() 30987 MB/s
Sep 5 00:22:14.338096 kernel: raid6: avx2x1 gen() 25684 MB/s
Sep 5 00:22:14.338119 kernel: raid6: using algorithm avx2x2 gen() 30987 MB/s
Sep 5 00:22:14.356091 kernel: raid6: .... xor() 19682 MB/s, rmw enabled
Sep 5 00:22:14.356136 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:22:14.377041 kernel: xor: automatically using best checksumming function avx
Sep 5 00:22:14.530046 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:22:14.542369 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:22:14.556360 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:22:14.570822 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Sep 5 00:22:14.576734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:22:14.577826 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:22:14.597238 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Sep 5 00:22:14.631551 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:22:14.649215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:22:14.721042 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:22:14.729267 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:22:14.746199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:22:14.749410 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:22:14.751652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:22:14.753680 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:22:14.763233 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:22:14.773524 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:22:14.775154 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:22:14.777182 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:22:14.785283 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:22:14.785304 kernel: GPT:9289727 != 19775487
Sep 5 00:22:14.785318 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:22:14.785331 kernel: GPT:9289727 != 19775487
Sep 5 00:22:14.785344 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:22:14.785357 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:22:14.799042 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:22:14.811331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:22:14.812163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:22:14.816556 kernel: libata version 3.00 loaded.
Sep 5 00:22:14.816732 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:22:14.818441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:22:14.818616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:22:14.821801 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:22:14.829724 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 5 00:22:14.834045 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (455)
Sep 5 00:22:14.834383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:22:14.840523 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:22:14.840545 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Sep 5 00:22:14.844244 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:22:14.844491 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:22:14.844509 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 5 00:22:14.847037 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:22:14.851952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:22:14.854067 kernel: scsi host0: ahci
Sep 5 00:22:14.855890 kernel: scsi host1: ahci
Sep 5 00:22:14.861035 kernel: scsi host2: ahci
Sep 5 00:22:14.863145 kernel: scsi host3: ahci
Sep 5 00:22:14.863329 kernel: scsi host4: ahci
Sep 5 00:22:14.865033 kernel: scsi host5: ahci
Sep 5 00:22:14.865253 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 5 00:22:14.865281 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 5 00:22:14.865303 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 5 00:22:14.865317 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 5 00:22:14.865332 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 5 00:22:14.865347 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 5 00:22:14.871654 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:22:14.902050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:22:14.903518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:22:14.909852 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:22:14.910291 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:22:14.921212 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:22:14.922417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:22:14.936065 disk-uuid[554]: Primary Header is updated.
Sep 5 00:22:14.936065 disk-uuid[554]: Secondary Entries is updated.
Sep 5 00:22:14.936065 disk-uuid[554]: Secondary Header is updated.
Sep 5 00:22:14.941066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:22:14.943586 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:22:14.947130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:22:15.177169 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:22:15.177271 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:22:15.177307 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:22:15.177322 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:22:15.178039 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:22:15.179041 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:22:15.180042 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:22:15.181102 kernel: ata3.00: applying bridge limits
Sep 5 00:22:15.181115 kernel: ata3.00: configured for UDMA/100
Sep 5 00:22:15.182045 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:22:15.230517 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:22:15.230750 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:22:15.247123 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:22:15.946895 disk-uuid[559]: The operation has completed successfully.
Sep 5 00:22:15.948137 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:22:15.973851 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:22:15.973979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:22:16.001400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:22:16.005184 sh[590]: Success
Sep 5 00:22:16.018050 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 5 00:22:16.052205 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:22:16.065929 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:22:16.068952 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:22:16.080935 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d
Sep 5 00:22:16.080990 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:22:16.081001 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:22:16.081034 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:22:16.081654 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:22:16.086868 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:22:16.088699 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:22:16.100341 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:22:16.102463 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:22:16.112152 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:22:16.112203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:22:16.112219 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:22:16.116053 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:22:16.125907 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:22:16.127746 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:22:16.208130 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:22:16.222470 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:22:16.245098 systemd-networkd[768]: lo: Link UP
Sep 5 00:22:16.245108 systemd-networkd[768]: lo: Gained carrier
Sep 5 00:22:16.246769 systemd-networkd[768]: Enumeration completed
Sep 5 00:22:16.246885 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:22:16.247317 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:22:16.247321 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:22:16.249325 systemd[1]: Reached target network.target - Network.
Sep 5 00:22:16.249387 systemd-networkd[768]: eth0: Link UP
Sep 5 00:22:16.249391 systemd-networkd[768]: eth0: Gained carrier
Sep 5 00:22:16.249399 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:22:16.264064 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:22:16.507501 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:22:16.520250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:22:16.574922 ignition[773]: Ignition 2.19.0
Sep 5 00:22:16.574935 ignition[773]: Stage: fetch-offline
Sep 5 00:22:16.574986 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:22:16.574998 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:22:16.575144 ignition[773]: parsed url from cmdline: ""
Sep 5 00:22:16.575150 ignition[773]: no config URL provided
Sep 5 00:22:16.575157 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:22:16.575172 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:22:16.575222 ignition[773]: op(1): [started] loading QEMU firmware config module
Sep 5 00:22:16.575231 ignition[773]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:22:16.582197 ignition[773]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:22:16.624538 ignition[773]: parsing config with SHA512: 9271603e4511deb9ff97da3d29de914f91e485ed06ac73835adfa6e4de12e4b3337a9dcd8792c985b95d92d1a59a2dd9beca97906a88724689b400781d592662
Sep 5 00:22:16.627976 unknown[773]: fetched base config from "system"
Sep 5 00:22:16.627993 unknown[773]: fetched user config from "qemu"
Sep 5 00:22:16.629790 ignition[773]: fetch-offline: fetch-offline passed
Sep 5 00:22:16.629870 ignition[773]: Ignition finished successfully
Sep 5 00:22:16.632502 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:22:16.634005 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:22:16.639273 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:22:16.655140 ignition[784]: Ignition 2.19.0
Sep 5 00:22:16.655151 ignition[784]: Stage: kargs
Sep 5 00:22:16.655358 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:22:16.655371 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:22:16.656367 ignition[784]: kargs: kargs passed
Sep 5 00:22:16.656446 ignition[784]: Ignition finished successfully
Sep 5 00:22:16.660003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:22:16.671197 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:22:16.687572 ignition[792]: Ignition 2.19.0
Sep 5 00:22:16.687584 ignition[792]: Stage: disks
Sep 5 00:22:16.687757 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:22:16.687768 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:22:16.688868 ignition[792]: disks: disks passed
Sep 5 00:22:16.691391 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:22:16.688925 ignition[792]: Ignition finished successfully
Sep 5 00:22:16.692878 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:22:16.694550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:22:16.696799 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:22:16.697850 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:22:16.699501 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:22:16.712227 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:22:16.725638 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:22:16.732985 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:22:16.749228 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:22:16.836054 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none.
Sep 5 00:22:16.836672 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:22:16.838343 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:22:16.855164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:22:16.857442 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:22:16.858256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:22:16.864446 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Sep 5 00:22:16.858311 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:22:16.871623 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:22:16.871644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:22:16.871656 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:22:16.871667 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:22:16.858344 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:22:16.867947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:22:16.872792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:22:16.875743 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:22:16.914637 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:22:16.918719 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:22:16.922955 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:22:16.927631 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:22:17.012676 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:22:17.023127 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:22:17.025040 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:22:17.032078 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:22:17.050988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:22:17.058806 ignition[923]: INFO : Ignition 2.19.0
Sep 5 00:22:17.058806 ignition[923]: INFO : Stage: mount
Sep 5 00:22:17.060821 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:22:17.060821 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:22:17.060821 ignition[923]: INFO : mount: mount passed
Sep 5 00:22:17.060821 ignition[923]: INFO : Ignition finished successfully
Sep 5 00:22:17.066340 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:22:17.077148 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:22:17.079706 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:22:17.084474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:22:17.098050 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936)
Sep 5 00:22:17.100100 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:22:17.100142 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:22:17.100153 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:22:17.103071 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:22:17.105691 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:22:17.141636 ignition[953]: INFO : Ignition 2.19.0
Sep 5 00:22:17.141636 ignition[953]: INFO : Stage: files
Sep 5 00:22:17.143597 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:22:17.143597 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:22:17.143597 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:22:17.147607 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:22:17.147607 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:22:17.147607 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:22:17.147607 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:22:17.147607 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:22:17.147105 unknown[953]: wrote ssh authorized keys file for user: core
Sep 5 00:22:17.156613 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 5 00:22:17.156613 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 5 00:22:17.203256 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:22:17.445691 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 5 00:22:17.445691 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 00:22:17.449541 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 5 00:22:17.665209 systemd-networkd[768]: eth0: Gained IPv6LL
Sep 5 00:22:17.700420 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 5 00:22:17.880201 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 5 00:22:17.880201 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:22:17.883952 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 5 00:22:18.370219 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 5 00:22:18.936748 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 5 00:22:18.936748 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 5 00:22:18.940724 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:22:18.942921 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:22:18.942921 ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 5 00:22:18.942921 ignition[953]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 5 00:22:18.947087 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:22:18.948996 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:22:18.948996 ignition[953]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 5 00:22:18.952138 ignition[953]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:22:18.980984 ignition[953]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:22:18.987166 ignition[953]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:22:18.988905 ignition[953]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:22:18.988905 ignition[953]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:22:18.991658 ignition[953]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:22:18.993121 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:22:18.994879 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:22:18.996513 ignition[953]: INFO : files: files passed
Sep 5 00:22:18.997242 ignition[953]: INFO : Ignition finished successfully
Sep 5 00:22:19.000774 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:22:19.015181 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:22:19.017113 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:22:19.018740 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:22:19.018852 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:22:19.028195 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:22:19.031237 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:22:19.031237 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:22:19.034288 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:22:19.033736 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:22:19.036293 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:22:19.048426 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:22:19.074114 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:22:19.074314 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:22:19.075642 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:22:19.080201 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:22:19.080596 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:22:19.082579 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:22:19.103048 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:22:19.117227 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:22:19.126820 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:22:19.129108 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:22:19.130362 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:22:19.132326 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:22:19.132449 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:22:19.134666 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:22:19.136357 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:22:19.138297 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:22:19.140413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:22:19.142439 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:22:19.144486 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:22:19.146517 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:22:19.148733 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:22:19.150683 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:22:19.152898 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:22:19.154762 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:22:19.154877 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:22:19.157195 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:22:19.158885 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:22:19.161064 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 5 00:22:19.161226 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:22:19.163507 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:22:19.163620 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:22:19.166132 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:22:19.166257 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:22:19.168533 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:22:19.170548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:22:19.174086 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:22:19.175574 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:22:19.177613 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:22:19.179825 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:22:19.179951 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:22:19.181798 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:22:19.181891 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:22:19.184042 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:22:19.184175 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:22:19.186943 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:22:19.187074 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:22:19.204183 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:22:19.206041 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:22:19.206176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 5 00:22:19.209122 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:22:19.210252 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:22:19.210370 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:22:19.212486 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:22:19.212680 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:22:19.218654 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:22:19.218787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:22:19.225995 ignition[1007]: INFO : Ignition 2.19.0 Sep 5 00:22:19.225995 ignition[1007]: INFO : Stage: umount Sep 5 00:22:19.227693 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:22:19.227693 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:22:19.227693 ignition[1007]: INFO : umount: umount passed Sep 5 00:22:19.227693 ignition[1007]: INFO : Ignition finished successfully Sep 5 00:22:19.233343 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:22:19.234704 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:22:19.237293 systemd[1]: Stopped target network.target - Network. Sep 5 00:22:19.239115 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:22:19.240158 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:22:19.244219 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:22:19.244283 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:22:19.247140 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:22:19.248169 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:22:19.250087 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 5 00:22:19.251060 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:22:19.253315 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:22:19.255513 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:22:19.257054 systemd-networkd[768]: eth0: DHCPv6 lease lost Sep 5 00:22:19.259457 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:22:19.260992 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:22:19.262193 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:22:19.265003 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:22:19.266154 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:22:19.270992 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:22:19.271980 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:22:19.290175 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:22:19.291124 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:22:19.292189 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:22:19.294418 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:22:19.294473 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:22:19.297078 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:22:19.298249 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:22:19.302724 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:22:19.302791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:22:19.306363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 5 00:22:19.320922 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:22:19.322010 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:22:19.324339 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:22:19.325439 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:22:19.329179 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:22:19.330369 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:22:19.332710 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:22:19.332770 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:22:19.335835 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:22:19.335900 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:22:19.339039 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:22:19.339100 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:22:19.341985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:22:19.342059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:22:19.353327 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:22:19.354511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:22:19.354630 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:22:19.356853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:22:19.356912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:22:19.362979 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:22:19.363155 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 5 00:22:19.499660 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:22:19.499830 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:22:19.502100 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:22:19.503873 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:22:19.503938 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:22:19.514180 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:22:19.524301 systemd[1]: Switching root. Sep 5 00:22:19.558057 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Sep 5 00:22:19.558157 systemd-journald[190]: Journal stopped Sep 5 00:22:20.926161 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:22:20.926241 kernel: SELinux: policy capability open_perms=1 Sep 5 00:22:20.926253 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:22:20.926268 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:22:20.926286 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:22:20.926308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:22:20.926320 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:22:20.926331 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:22:20.926343 kernel: audit: type=1403 audit(1757031740.151:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:22:20.926356 systemd[1]: Successfully loaded SELinux policy in 42.980ms. Sep 5 00:22:20.926384 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.130ms. 
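After the root switch, systemd reports its own timings inline ("Successfully loaded SELinux policy in 42.980ms", "Relabeled ... in 13.130ms"). A minimal sketch of a helper for pulling those figures out of such lines:

```python
import re

def duration_ms(line):
    """Extract the 'in NN.NNNms' figure from a systemd timing line, or None."""
    m = re.search(r"in (\d+(?:\.\d+)?)ms", line)
    return float(m.group(1)) if m else None
```

This makes it easy to compare policy-load cost across boots when grepping a fleet's consoles.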
Sep 5 00:22:20.926401 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:22:20.926413 systemd[1]: Detected virtualization kvm. Sep 5 00:22:20.926425 systemd[1]: Detected architecture x86-64. Sep 5 00:22:20.926438 systemd[1]: Detected first boot. Sep 5 00:22:20.926449 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:22:20.926461 zram_generator::config[1052]: No configuration found. Sep 5 00:22:20.926480 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:22:20.926493 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:22:20.926505 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:22:20.926520 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:22:20.926533 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:22:20.926545 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:22:20.926558 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:22:20.926574 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:22:20.926587 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:22:20.926607 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:22:20.926619 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:22:20.926634 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:22:20.926647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
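The systemd version banner above encodes compile-time features as `+NAME` (enabled) and `-NAME` (disabled) tokens. A small sketch for splitting that banner into the two sets:

```python
def parse_features(banner):
    """Split a systemd version banner into enabled (+X) and disabled (-X) features."""
    enabled, disabled = set(), set()
    for tok in banner.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled
```

Running it on the banner in this log confirms, for example, that SELinux support is compiled in while AppArmor is not, which matches the SELinux policy-load messages that follow.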
Sep 5 00:22:20.926659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:22:20.926672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:22:20.926684 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:22:20.926696 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:22:20.926709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:22:20.926721 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:22:20.926733 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:22:20.926748 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:22:20.926760 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:22:20.926772 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:22:20.926784 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:22:20.926797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:22:20.926809 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:22:20.926822 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:22:20.926834 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:22:20.926851 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:22:20.926864 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:22:20.926887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:22:20.926901 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 5 00:22:20.926913 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:22:20.926925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:22:20.926940 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:22:20.926953 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:22:20.926965 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:22:20.926980 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:22:20.926993 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:22:20.927005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:22:20.927031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:22:20.927044 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:22:20.927057 systemd[1]: Reached target machines.target - Containers. Sep 5 00:22:20.927070 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:22:20.927082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:22:20.927098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:22:20.927118 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:22:20.927131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:22:20.927143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:22:20.927156 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 5 00:22:20.927168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:22:20.927189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:22:20.927201 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:22:20.927213 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:22:20.927229 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:22:20.927241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:22:20.927253 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:22:20.927265 kernel: fuse: init (API version 7.39) Sep 5 00:22:20.927276 kernel: loop: module loaded Sep 5 00:22:20.927288 kernel: ACPI: bus type drm_connector registered Sep 5 00:22:20.927300 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:22:20.927312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:22:20.927325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:22:20.927340 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:22:20.927352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:22:20.927365 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:22:20.927395 systemd-journald[1126]: Collecting audit messages is disabled. Sep 5 00:22:20.927419 systemd[1]: Stopped verity-setup.service. Sep 5 00:22:20.927432 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 5 00:22:20.927448 systemd-journald[1126]: Journal started Sep 5 00:22:20.927469 systemd-journald[1126]: Runtime Journal (/run/log/journal/89f213cc75934262bc819bb5a5ebb100) is 6.0M, max 48.4M, 42.3M free. Sep 5 00:22:20.695445 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:22:20.713485 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:22:20.713964 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:22:20.932578 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:22:20.933417 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:22:20.934636 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:22:20.935844 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:22:20.936951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:22:20.938159 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:22:20.939372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:22:20.940630 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:22:20.942096 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:22:20.943666 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:22:20.943856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:22:20.945497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:22:20.945680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:22:20.947297 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:22:20.947484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:22:20.948851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
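journald's startup line reports the runtime journal's current size, cap, and headroom ("is 6.0M, max 48.4M, 42.3M free"). A minimal sketch for extracting those three figures:

```python
import re

def journal_usage(line):
    """Pull the (current, max, free) MiB figures from a journald size report."""
    m = re.search(r"is ([\d.]+)M, max ([\d.]+)M, ([\d.]+)M free", line)
    return tuple(float(x) for x in m.groups()) if m else None
```

Watching the free figure shrink across boots is a cheap early warning that `/run` journal retention limits are being hit.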
Sep 5 00:22:20.949045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:22:20.950559 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:22:20.950739 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:22:20.952159 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:22:20.952337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:22:20.953816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:22:20.955242 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:22:20.956976 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:22:20.973176 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:22:20.985129 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:22:20.987466 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:22:20.988563 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:22:20.988593 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:22:20.990554 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:22:20.992899 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:22:20.996340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:22:20.997443 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:22:21.000749 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 5 00:22:21.004142 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:22:21.005879 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:22:21.009808 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:22:21.012863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:22:21.014305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:22:21.021970 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:22:21.026283 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:22:21.027801 systemd-journald[1126]: Time spent on flushing to /var/log/journal/89f213cc75934262bc819bb5a5ebb100 is 14.842ms for 954 entries. Sep 5 00:22:21.027801 systemd-journald[1126]: System Journal (/var/log/journal/89f213cc75934262bc819bb5a5ebb100) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:22:21.056293 systemd-journald[1126]: Received client request to flush runtime journal. Sep 5 00:22:21.056330 kernel: loop0: detected capacity change from 0 to 140768 Sep 5 00:22:21.031327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:22:21.033319 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:22:21.036437 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:22:21.038221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:22:21.040289 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:22:21.051508 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
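The flush report above ("14.842ms for 954 entries") implicitly states a per-entry write cost. A small sketch that turns such a line into an average cost in microseconds per entry:

```python
import re

def flush_rate(line):
    """Average journald flush cost per entry, in microseconds."""
    m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
    if not m:
        return None
    ms, n = float(m.group(1)), int(m.group(2))
    return ms * 1000.0 / n
```

Here that works out to roughly 15.6 µs per entry, a useful baseline when comparing storage backends.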
Sep 5 00:22:21.059956 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:22:21.065578 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:22:21.067330 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:22:21.075950 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:22:21.073350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:22:21.081435 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 00:22:21.096184 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:22:21.096897 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:22:21.104294 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:22:21.107041 kernel: loop1: detected capacity change from 0 to 142488 Sep 5 00:22:21.118178 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:22:21.144289 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Sep 5 00:22:21.144311 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Sep 5 00:22:21.151446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:22:21.155053 kernel: loop2: detected capacity change from 0 to 229808 Sep 5 00:22:21.181042 kernel: loop3: detected capacity change from 0 to 140768 Sep 5 00:22:21.195033 kernel: loop4: detected capacity change from 0 to 142488 Sep 5 00:22:21.206241 kernel: loop5: detected capacity change from 0 to 229808 Sep 5 00:22:21.214951 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:22:21.216260 (sd-merge)[1190]: Merged extensions into '/usr'. 
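The kernel's `loopN: detected capacity change from 0 to …` lines above correspond to the sysext images being attached before the merge. A minimal sketch that collects the final capacity per loop device from such messages:

```python
import re

def loop_capacities(lines):
    """Map loop device name -> last reported capacity from kernel messages."""
    caps = {}
    for line in lines:
        m = re.search(r"(loop\d+): detected capacity change from \d+ to (\d+)", line)
        if m:
            caps[m.group(1)] = int(m.group(2))
    return caps
```

In this boot, loop0/loop3, loop1/loop4, and loop2/loop5 report pairwise identical capacities, consistent with the same extension images being scanned twice.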
Sep 5 00:22:21.221033 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:22:21.221048 systemd[1]: Reloading... Sep 5 00:22:21.298059 zram_generator::config[1219]: No configuration found. Sep 5 00:22:21.354684 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:22:21.434838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:22:21.484280 systemd[1]: Reloading finished in 262 ms. Sep 5 00:22:21.521042 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:22:21.522831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:22:21.541274 systemd[1]: Starting ensure-sysext.service... Sep 5 00:22:21.543523 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:22:21.551468 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:22:21.551479 systemd[1]: Reloading... Sep 5 00:22:21.605460 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:22:21.606580 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:22:21.608394 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:22:21.608944 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 5 00:22:21.609158 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 5 00:22:21.618043 zram_generator::config[1283]: No configuration found. 
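The sd-merge line above names the sysext images merged into /usr ("Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'."). A small sketch for recovering that list from the message:

```python
import re

def merged_extensions(line):
    """List the sysext image names quoted in an sd-merge 'Using extensions' message."""
    if "Using extensions" not in line:
        return []
    return re.findall(r"'([^']+)'", line)
```

This is handy for asserting, in a boot-log check, that the expected container runtimes actually got merged before the daemon reload that follows.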
Sep 5 00:22:21.642547 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:22:21.642563 systemd-tmpfiles[1254]: Skipping /boot Sep 5 00:22:21.654956 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:22:21.654970 systemd-tmpfiles[1254]: Skipping /boot Sep 5 00:22:21.730809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:22:21.780638 systemd[1]: Reloading finished in 228 ms. Sep 5 00:22:21.800145 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:22:21.813478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:22:21.821963 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:22:21.824610 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:22:21.826975 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:22:21.832823 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:22:21.836952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:22:21.840396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:22:21.847465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:22:21.847658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:22:21.849808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 5 00:22:21.854614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:22:21.859366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:22:21.861980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:22:21.864328 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:22:21.865502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:22:21.867610 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:22:21.870403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:22:21.871002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:22:21.873475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:22:21.873938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:22:21.876152 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:22:21.876391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:22:21.889088 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Sep 5 00:22:21.890409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:22:21.890629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:22:21.895564 augenrules[1348]: No rules Sep 5 00:22:21.896487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:22:21.900608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 5 00:22:21.905283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:22:21.906886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:22:21.912891 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:22:21.914179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:22:21.915491 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 00:22:21.917708 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 00:22:21.920711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:22:21.923901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 00:22:21.926516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:22:21.926887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:22:21.929379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:22:21.929947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:22:21.932771 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:22:21.934128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:22:21.946884 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 00:22:21.949684 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 00:22:21.955918 systemd[1]: Finished ensure-sysext.service.
Sep 5 00:22:21.967145 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:22:21.967301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:22:21.975252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:22:21.982202 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:22:21.984735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:22:21.991203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:22:21.992273 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1388)
Sep 5 00:22:21.992394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:22:22.000225 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:22:22.011491 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 00:22:22.012793 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 00:22:22.012825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:22:22.013175 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 5 00:22:22.029392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:22:22.029632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:22:22.092738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:22:22.092932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:22:22.095602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:22:22.099316 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:22:22.099555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:22:22.103045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:22:22.108504 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:22:22.108772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:22:22.152495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:22:22.159052 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 5 00:22:22.165278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 00:22:22.176272 systemd-resolved[1323]: Positive Trust Anchors:
Sep 5 00:22:22.176297 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:22:22.176329 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:22:22.181715 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 5 00:22:22.200941 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 5 00:22:22.211207 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 5 00:22:22.211417 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 5 00:22:22.211450 kernel: ACPI: button: Power Button [PWRF]
Sep 5 00:22:22.185821 systemd-resolved[1323]: Defaulting to hostname 'linux'.
Sep 5 00:22:22.188096 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:22:22.188470 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 00:22:22.188590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:22:22.189135 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 00:22:22.200080 systemd-networkd[1395]: lo: Link UP
Sep 5 00:22:22.200086 systemd-networkd[1395]: lo: Gained carrier
Sep 5 00:22:22.201056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 00:22:22.201923 systemd-networkd[1395]: Enumeration completed
Sep 5 00:22:22.202439 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:22:22.202443 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:22:22.205844 systemd-networkd[1395]: eth0: Link UP
Sep 5 00:22:22.205851 systemd-networkd[1395]: eth0: Gained carrier
Sep 5 00:22:22.205868 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:22:22.207444 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:22:22.208991 systemd[1]: Reached target network.target - Network.
Sep 5 00:22:22.217255 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:22:22.222114 systemd-networkd[1395]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:22:22.225233 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection.
Sep 5 00:22:23.317354 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 00:22:23.317460 systemd-timesyncd[1398]: Initial clock synchronization to Fri 2025-09-05 00:22:23.317192 UTC.
Sep 5 00:22:23.317606 systemd-resolved[1323]: Clock change detected. Flushing caches.
Sep 5 00:22:23.350756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:22:23.357450 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 00:22:23.427943 kernel: kvm_amd: TSC scaling supported
Sep 5 00:22:23.428039 kernel: kvm_amd: Nested Virtualization enabled
Sep 5 00:22:23.428053 kernel: kvm_amd: Nested Paging enabled
Sep 5 00:22:23.429474 kernel: kvm_amd: LBR virtualization supported
Sep 5 00:22:23.430475 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 5 00:22:23.430571 kernel: kvm_amd: Virtual GIF supported
Sep 5 00:22:23.452489 kernel: EDAC MC: Ver: 3.0.0
Sep 5 00:22:23.484141 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 5 00:22:23.505623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:22:23.517873 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 5 00:22:23.530400 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:22:23.579442 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 5 00:22:23.581313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:22:23.582541 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:22:23.583847 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 00:22:23.585307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 00:22:23.586801 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 00:22:23.588178 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 00:22:23.589781 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 00:22:23.591314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 00:22:23.591346 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:22:23.592605 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:22:23.594742 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 00:22:23.597962 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 00:22:23.608686 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 00:22:23.611764 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 5 00:22:23.613674 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 00:22:23.615187 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:22:23.616673 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:22:23.617889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:22:23.617919 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:22:23.619506 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 00:22:23.622198 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 00:22:23.626589 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 00:22:23.631644 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 00:22:23.632823 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 00:22:23.634488 jq[1430]: false
Sep 5 00:22:23.636275 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:22:23.636622 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 00:22:23.639643 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 00:22:23.644924 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 00:22:23.647679 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 00:22:23.651867 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 00:22:23.653503 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 00:22:23.653986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 00:22:23.662841 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 00:22:23.667591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 00:22:23.670079 extend-filesystems[1431]: Found loop3
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found loop4
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found loop5
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found sr0
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda1
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda2
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda3
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found usr
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda4
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda6
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda7
Sep 5 00:22:23.671024 extend-filesystems[1431]: Found vda9
Sep 5 00:22:23.671024 extend-filesystems[1431]: Checking size of /dev/vda9
Sep 5 00:22:23.678832 dbus-daemon[1429]: [system] SELinux support is enabled
Sep 5 00:22:23.672231 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 00:22:23.672496 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 00:22:23.685798 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 00:22:23.689038 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 5 00:22:23.690860 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 00:22:23.691126 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 00:22:23.693095 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 00:22:23.693490 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 00:22:23.694967 jq[1446]: true
Sep 5 00:22:23.703932 extend-filesystems[1431]: Resized partition /dev/vda9
Sep 5 00:22:23.709087 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024)
Sep 5 00:22:23.712496 update_engine[1439]: I20250905 00:22:23.712356 1439 main.cc:92] Flatcar Update Engine starting
Sep 5 00:22:23.715969 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 00:22:23.716561 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 00:22:23.718593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 00:22:23.718624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 00:22:23.720645 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 00:22:23.726638 update_engine[1439]: I20250905 00:22:23.723621 1439 update_check_scheduler.cc:74] Next update check in 4m43s
Sep 5 00:22:23.722115 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 00:22:23.725017 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 00:22:23.728464 tar[1449]: linux-amd64/LICENSE
Sep 5 00:22:23.728464 tar[1449]: linux-amd64/helm
Sep 5 00:22:23.731442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1370)
Sep 5 00:22:23.746658 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 00:22:23.778659 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 5 00:22:23.778697 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 5 00:22:23.782620 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 00:22:23.784187 systemd-logind[1438]: New seat seat0.
Sep 5 00:22:23.785120 jq[1453]: true
Sep 5 00:22:23.845950 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 00:22:23.845950 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 00:22:23.845950 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 00:22:23.849744 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Sep 5 00:22:23.894975 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 00:22:24.091157 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 00:22:24.093857 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 00:22:24.094137 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 00:22:24.096386 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 00:22:24.198954 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 00:22:24.206842 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 00:22:24.210601 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 00:22:24.210846 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 00:22:24.223677 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 00:22:24.289077 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 00:22:24.320394 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 00:22:24.328643 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 5 00:22:24.330419 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 00:22:24.441946 tar[1449]: linux-amd64/README.md
Sep 5 00:22:24.460691 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 00:22:24.560704 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 00:22:24.562309 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 00:22:24.563859 containerd[1452]: time="2025-09-05T00:22:24.563700742Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 5 00:22:24.565955 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 00:22:24.607766 containerd[1452]: time="2025-09-05T00:22:24.607650227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.612545 containerd[1452]: time="2025-09-05T00:22:24.612486986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:22:24.612545 containerd[1452]: time="2025-09-05T00:22:24.612527252Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 00:22:24.612735 containerd[1452]: time="2025-09-05T00:22:24.612552409Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 00:22:24.612937 containerd[1452]: time="2025-09-05T00:22:24.612889000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 5 00:22:24.612937 containerd[1452]: time="2025-09-05T00:22:24.612932752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613084 containerd[1452]: time="2025-09-05T00:22:24.613054109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613084 containerd[1452]: time="2025-09-05T00:22:24.613079337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613467 containerd[1452]: time="2025-09-05T00:22:24.613386412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613518 containerd[1452]: time="2025-09-05T00:22:24.613461764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613518 containerd[1452]: time="2025-09-05T00:22:24.613489686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613518 containerd[1452]: time="2025-09-05T00:22:24.613505716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.613730 containerd[1452]: time="2025-09-05T00:22:24.613689250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.614128 containerd[1452]: time="2025-09-05T00:22:24.614080834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:22:24.614349 containerd[1452]: time="2025-09-05T00:22:24.614303592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:22:24.614349 containerd[1452]: time="2025-09-05T00:22:24.614333138Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 00:22:24.614563 containerd[1452]: time="2025-09-05T00:22:24.614527051Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 00:22:24.614714 containerd[1452]: time="2025-09-05T00:22:24.614673776Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 00:22:24.671215 containerd[1452]: time="2025-09-05T00:22:24.671116196Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 00:22:24.671215 containerd[1452]: time="2025-09-05T00:22:24.671196256Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 00:22:24.671215 containerd[1452]: time="2025-09-05T00:22:24.671232354Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 5 00:22:24.671386 containerd[1452]: time="2025-09-05T00:22:24.671257601Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 5 00:22:24.671386 containerd[1452]: time="2025-09-05T00:22:24.671286184Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 00:22:24.671599 containerd[1452]: time="2025-09-05T00:22:24.671555129Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 00:22:24.672930 containerd[1452]: time="2025-09-05T00:22:24.672886184Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 00:22:24.673151 containerd[1452]: time="2025-09-05T00:22:24.673109814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 5 00:22:24.673151 containerd[1452]: time="2025-09-05T00:22:24.673141453Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 5 00:22:24.673206 containerd[1452]: time="2025-09-05T00:22:24.673161360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 5 00:22:24.673206 containerd[1452]: time="2025-09-05T00:22:24.673191868Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673264 containerd[1452]: time="2025-09-05T00:22:24.673212747Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673264 containerd[1452]: time="2025-09-05T00:22:24.673240870Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673311 containerd[1452]: time="2025-09-05T00:22:24.673273671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673311 containerd[1452]: time="2025-09-05T00:22:24.673302776Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673379 containerd[1452]: time="2025-09-05T00:22:24.673324516Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673401 containerd[1452]: time="2025-09-05T00:22:24.673369932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673401 containerd[1452]: time="2025-09-05T00:22:24.673394117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 00:22:24.673492 containerd[1452]: time="2025-09-05T00:22:24.673421077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673516 containerd[1452]: time="2025-09-05T00:22:24.673496048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673553 containerd[1452]: time="2025-09-05T00:22:24.673519953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673553 containerd[1452]: time="2025-09-05T00:22:24.673538538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673612 containerd[1452]: time="2025-09-05T00:22:24.673567312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673612 containerd[1452]: time="2025-09-05T00:22:24.673591156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673612 containerd[1452]: time="2025-09-05T00:22:24.673608138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673666 containerd[1452]: time="2025-09-05T00:22:24.673626493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673716 containerd[1452]: time="2025-09-05T00:22:24.673688910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673779 containerd[1452]: time="2025-09-05T00:22:24.673740667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673779 containerd[1452]: time="2025-09-05T00:22:24.673771975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673826 containerd[1452]: time="2025-09-05T00:22:24.673794337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673826 containerd[1452]: time="2025-09-05T00:22:24.673812782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673870 containerd[1452]: time="2025-09-05T00:22:24.673834713Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 00:22:24.673903 containerd[1452]: time="2025-09-05T00:22:24.673888895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673925 containerd[1452]: time="2025-09-05T00:22:24.673910926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.673951 containerd[1452]: time="2025-09-05T00:22:24.673927367Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 00:22:24.674047 containerd[1452]: time="2025-09-05T00:22:24.674013889Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 00:22:24.674071 containerd[1452]: time="2025-09-05T00:22:24.674051580Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 00:22:24.674094 containerd[1452]: time="2025-09-05T00:22:24.674070615Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 00:22:24.674115 containerd[1452]: time="2025-09-05T00:22:24.674088679Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 00:22:24.674115 containerd[1452]: time="2025-09-05T00:22:24.674104088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.674160 containerd[1452]: time="2025-09-05T00:22:24.674122202Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 00:22:24.674160 containerd[1452]: time="2025-09-05T00:22:24.674141819Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 00:22:24.674198 containerd[1452]: time="2025-09-05T00:22:24.674156426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 00:22:24.674711 containerd[1452]: time="2025-09-05T00:22:24.674615627Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 5 00:22:24.674711 containerd[1452]: time="2025-09-05T00:22:24.674700747Z" level=info msg="Connect containerd service"
Sep 5 00:22:24.674901 containerd[1452]: time="2025-09-05T00:22:24.674749158Z" level=info msg="using legacy CRI server"
Sep 5 00:22:24.674901 containerd[1452]: time="2025-09-05T00:22:24.674768554Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 00:22:24.674979 containerd[1452]: time="2025-09-05T00:22:24.674951567Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 5 00:22:24.675872 containerd[1452]: time="2025-09-05T00:22:24.675824083Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676314523Z" level=info msg="Start subscribing containerd event"
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676646034Z" level=info msg="Start recovering state"
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676767051Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676772121Z" level=info msg="Start event monitor"
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676856619Z" level=info msg="Start snapshots syncer"
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676864804Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676886685Z" level=info msg="Start cni network conf syncer for default"
Sep 5 00:22:24.677388 containerd[1452]: time="2025-09-05T00:22:24.676985992Z" level=info msg="Start streaming server"
Sep 5 00:22:24.677242 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 00:22:24.678314 containerd[1452]: time="2025-09-05T00:22:24.677993000Z" level=info msg="containerd successfully booted in 0.118190s"
Sep 5 00:22:24.770740 systemd-networkd[1395]: eth0: Gained IPv6LL
Sep 5 00:22:24.774209 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 00:22:24.778012 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 00:22:24.791962 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 5 00:22:24.795476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:22:24.798302 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 00:22:24.846934 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 00:22:24.944181 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 5 00:22:24.944470 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 5 00:22:24.946075 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 00:22:26.260445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:22:26.262653 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 00:22:26.267564 systemd[1]: Startup finished in 881ms (kernel) + 6.406s (initrd) + 5.068s (userspace) = 12.356s.
Sep 5 00:22:26.289006 (kubelet)[1543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:22:27.007415 kubelet[1543]: E0905 00:22:27.007309 1543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:22:27.011626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:22:27.011877 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:22:27.012307 systemd[1]: kubelet.service: Consumed 2.034s CPU time.
Sep 5 00:22:28.037841 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 00:22:28.039322 systemd[1]: Started sshd@0-10.0.0.155:22-10.0.0.1:32954.service - OpenSSH per-connection server daemon (10.0.0.1:32954).
Sep 5 00:22:28.085942 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 32954 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:28.088163 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:28.098272 systemd-logind[1438]: New session 1 of user core.
Sep 5 00:22:28.099683 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 00:22:28.106653 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 00:22:28.119577 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 00:22:28.128016 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 00:22:28.132392 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 00:22:28.251063 systemd[1560]: Queued start job for default target default.target.
Sep 5 00:22:28.259773 systemd[1560]: Created slice app.slice - User Application Slice.
Sep 5 00:22:28.259800 systemd[1560]: Reached target paths.target - Paths.
Sep 5 00:22:28.259815 systemd[1560]: Reached target timers.target - Timers.
Sep 5 00:22:28.261498 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 00:22:28.274164 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 00:22:28.274294 systemd[1560]: Reached target sockets.target - Sockets.
Sep 5 00:22:28.274312 systemd[1560]: Reached target basic.target - Basic System.
Sep 5 00:22:28.274350 systemd[1560]: Reached target default.target - Main User Target.
Sep 5 00:22:28.274398 systemd[1560]: Startup finished in 134ms.
Sep 5 00:22:28.275139 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 00:22:28.276945 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 00:22:28.339615 systemd[1]: Started sshd@1-10.0.0.155:22-10.0.0.1:32966.service - OpenSSH per-connection server daemon (10.0.0.1:32966).
Sep 5 00:22:28.375091 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 32966 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:28.376747 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:28.380715 systemd-logind[1438]: New session 2 of user core.
Sep 5 00:22:28.389592 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 00:22:28.444403 sshd[1571]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:28.459290 systemd[1]: sshd@1-10.0.0.155:22-10.0.0.1:32966.service: Deactivated successfully.
Sep 5 00:22:28.461079 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 00:22:28.462651 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
Sep 5 00:22:28.463997 systemd[1]: Started sshd@2-10.0.0.155:22-10.0.0.1:32980.service - OpenSSH per-connection server daemon (10.0.0.1:32980).
Sep 5 00:22:28.464972 systemd-logind[1438]: Removed session 2.
Sep 5 00:22:28.498379 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 32980 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:28.500038 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:28.504000 systemd-logind[1438]: New session 3 of user core.
Sep 5 00:22:28.513545 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 00:22:28.563781 sshd[1578]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:28.578216 systemd[1]: sshd@2-10.0.0.155:22-10.0.0.1:32980.service: Deactivated successfully.
Sep 5 00:22:28.579911 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 00:22:28.581664 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit.
Sep 5 00:22:28.599746 systemd[1]: Started sshd@3-10.0.0.155:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988).
Sep 5 00:22:28.601013 systemd-logind[1438]: Removed session 3.
Sep 5 00:22:28.628751 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:28.630537 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:28.634554 systemd-logind[1438]: New session 4 of user core.
Sep 5 00:22:28.648559 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 00:22:28.703599 sshd[1585]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:28.715034 systemd[1]: sshd@3-10.0.0.155:22-10.0.0.1:32988.service: Deactivated successfully.
Sep 5 00:22:28.716731 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 00:22:28.718447 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit.
Sep 5 00:22:28.727665 systemd[1]: Started sshd@4-10.0.0.155:22-10.0.0.1:33002.service - OpenSSH per-connection server daemon (10.0.0.1:33002).
Sep 5 00:22:28.728537 systemd-logind[1438]: Removed session 4.
Sep 5 00:22:28.756294 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:28.758072 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:28.761949 systemd-logind[1438]: New session 5 of user core.
Sep 5 00:22:28.771549 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 00:22:28.833705 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 00:22:28.834152 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:22:28.857367 sudo[1595]: pam_unix(sudo:session): session closed for user root
Sep 5 00:22:28.859738 sshd[1592]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:28.873255 systemd[1]: sshd@4-10.0.0.155:22-10.0.0.1:33002.service: Deactivated successfully.
Sep 5 00:22:28.876304 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 00:22:28.878756 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
Sep 5 00:22:28.891829 systemd[1]: Started sshd@5-10.0.0.155:22-10.0.0.1:33004.service - OpenSSH per-connection server daemon (10.0.0.1:33004).
Sep 5 00:22:28.892922 systemd-logind[1438]: Removed session 5.
Sep 5 00:22:28.920951 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 33004 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:28.922649 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:28.927189 systemd-logind[1438]: New session 6 of user core.
Sep 5 00:22:28.936623 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 00:22:28.993964 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 00:22:28.994600 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:22:28.999068 sudo[1604]: pam_unix(sudo:session): session closed for user root
Sep 5 00:22:29.005699 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 5 00:22:29.006047 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:22:29.028662 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 5 00:22:29.030473 auditctl[1607]: No rules
Sep 5 00:22:29.031939 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:22:29.032293 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 5 00:22:29.034207 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 00:22:29.069061 augenrules[1625]: No rules
Sep 5 00:22:29.070867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 00:22:29.072251 sudo[1603]: pam_unix(sudo:session): session closed for user root
Sep 5 00:22:29.074162 sshd[1600]: pam_unix(sshd:session): session closed for user core
Sep 5 00:22:29.081224 systemd[1]: sshd@5-10.0.0.155:22-10.0.0.1:33004.service: Deactivated successfully.
Sep 5 00:22:29.083157 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 00:22:29.085216 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
Sep 5 00:22:29.091679 systemd[1]: Started sshd@6-10.0.0.155:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014).
Sep 5 00:22:29.092534 systemd-logind[1438]: Removed session 6.
Sep 5 00:22:29.121077 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:22:29.122765 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:22:29.126599 systemd-logind[1438]: New session 7 of user core.
Sep 5 00:22:29.137544 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 00:22:29.191346 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 00:22:29.191717 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:22:29.488644 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 00:22:29.488785 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 00:22:29.778062 dockerd[1655]: time="2025-09-05T00:22:29.777840012Z" level=info msg="Starting up"
Sep 5 00:22:30.941694 dockerd[1655]: time="2025-09-05T00:22:30.941638023Z" level=info msg="Loading containers: start."
Sep 5 00:22:31.097456 kernel: Initializing XFRM netlink socket
Sep 5 00:22:31.202774 systemd-networkd[1395]: docker0: Link UP
Sep 5 00:22:31.226247 dockerd[1655]: time="2025-09-05T00:22:31.226183395Z" level=info msg="Loading containers: done."
Sep 5 00:22:31.242618 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3323299377-merged.mount: Deactivated successfully.
Sep 5 00:22:31.245755 dockerd[1655]: time="2025-09-05T00:22:31.245704888Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 00:22:31.245832 dockerd[1655]: time="2025-09-05T00:22:31.245819122Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 5 00:22:31.245997 dockerd[1655]: time="2025-09-05T00:22:31.245952933Z" level=info msg="Daemon has completed initialization"
Sep 5 00:22:31.831814 dockerd[1655]: time="2025-09-05T00:22:31.831738150Z" level=info msg="API listen on /run/docker.sock"
Sep 5 00:22:31.833033 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 00:22:32.649377 containerd[1452]: time="2025-09-05T00:22:32.649290641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 5 00:22:33.195586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527574312.mount: Deactivated successfully.
Sep 5 00:22:34.114754 containerd[1452]: time="2025-09-05T00:22:34.114685596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:34.115829 containerd[1452]: time="2025-09-05T00:22:34.115765240Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664"
Sep 5 00:22:34.117308 containerd[1452]: time="2025-09-05T00:22:34.117282134Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:34.120177 containerd[1452]: time="2025-09-05T00:22:34.120139462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:34.121366 containerd[1452]: time="2025-09-05T00:22:34.121315918Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.471921923s"
Sep 5 00:22:34.121446 containerd[1452]: time="2025-09-05T00:22:34.121385238Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 5 00:22:34.122133 containerd[1452]: time="2025-09-05T00:22:34.122091793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 5 00:22:35.254466 containerd[1452]: time="2025-09-05T00:22:35.254389150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:35.255380 containerd[1452]: time="2025-09-05T00:22:35.255315917Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066"
Sep 5 00:22:35.256695 containerd[1452]: time="2025-09-05T00:22:35.256662132Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:35.259407 containerd[1452]: time="2025-09-05T00:22:35.259377633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:35.260349 containerd[1452]: time="2025-09-05T00:22:35.260304050Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.138177062s"
Sep 5 00:22:35.260349 containerd[1452]: time="2025-09-05T00:22:35.260344286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 5 00:22:35.260890 containerd[1452]: time="2025-09-05T00:22:35.260869300Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 5 00:22:36.590587 containerd[1452]: time="2025-09-05T00:22:36.590529853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:36.591488 containerd[1452]: time="2025-09-05T00:22:36.591448035Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911"
Sep 5 00:22:36.592843 containerd[1452]: time="2025-09-05T00:22:36.592791263Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:36.595564 containerd[1452]: time="2025-09-05T00:22:36.595514059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:36.596603 containerd[1452]: time="2025-09-05T00:22:36.596565360Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.335667547s"
Sep 5 00:22:36.596668 containerd[1452]: time="2025-09-05T00:22:36.596604213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 5 00:22:36.597192 containerd[1452]: time="2025-09-05T00:22:36.597152942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 5 00:22:37.262156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:22:37.277739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:22:37.498060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:22:37.502867 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:22:37.551617 kubelet[1875]: E0905 00:22:37.551403 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:22:37.560616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:22:37.560896 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:22:38.125469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658021109.mount: Deactivated successfully.
Sep 5 00:22:38.868110 containerd[1452]: time="2025-09-05T00:22:38.868008405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:38.870579 containerd[1452]: time="2025-09-05T00:22:38.870493084Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 5 00:22:38.870766 containerd[1452]: time="2025-09-05T00:22:38.870646181Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:38.872870 containerd[1452]: time="2025-09-05T00:22:38.872819826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:38.873391 containerd[1452]: time="2025-09-05T00:22:38.873349099Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.276160821s"
Sep 5 00:22:38.873391 containerd[1452]: time="2025-09-05T00:22:38.873386499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 5 00:22:38.873910 containerd[1452]: time="2025-09-05T00:22:38.873884543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 5 00:22:39.515468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228211226.mount: Deactivated successfully.
Sep 5 00:22:40.816164 containerd[1452]: time="2025-09-05T00:22:40.816076592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:40.817496 containerd[1452]: time="2025-09-05T00:22:40.816948377Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 5 00:22:40.818181 containerd[1452]: time="2025-09-05T00:22:40.818120725Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:40.821492 containerd[1452]: time="2025-09-05T00:22:40.821441532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:40.822876 containerd[1452]: time="2025-09-05T00:22:40.822820928Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.948872005s"
Sep 5 00:22:40.822917 containerd[1452]: time="2025-09-05T00:22:40.822872915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 5 00:22:40.823440 containerd[1452]: time="2025-09-05T00:22:40.823402548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 00:22:41.403705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274710395.mount: Deactivated successfully.
Sep 5 00:22:41.410896 containerd[1452]: time="2025-09-05T00:22:41.410837088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:41.411628 containerd[1452]: time="2025-09-05T00:22:41.411574150Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 5 00:22:41.412719 containerd[1452]: time="2025-09-05T00:22:41.412666428Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:41.414915 containerd[1452]: time="2025-09-05T00:22:41.414857246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:41.415682 containerd[1452]: time="2025-09-05T00:22:41.415626969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.177162ms"
Sep 5 00:22:41.415682 containerd[1452]: time="2025-09-05T00:22:41.415669419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 5 00:22:41.416151 containerd[1452]: time="2025-09-05T00:22:41.416126566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 5 00:22:45.609325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834659889.mount: Deactivated successfully.
Sep 5 00:22:47.015097 containerd[1452]: time="2025-09-05T00:22:47.015018086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:47.015669 containerd[1452]: time="2025-09-05T00:22:47.015606469Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871"
Sep 5 00:22:47.016926 containerd[1452]: time="2025-09-05T00:22:47.016883824Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:47.019948 containerd[1452]: time="2025-09-05T00:22:47.019904668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:22:47.021365 containerd[1452]: time="2025-09-05T00:22:47.021332656Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.605178829s"
Sep 5 00:22:47.021454 containerd[1452]: time="2025-09-05T00:22:47.021368122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 5 00:22:47.584721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 00:22:47.594663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:22:47.765583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:22:47.771395 (kubelet)[2034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:22:47.810363 kubelet[2034]: E0905 00:22:47.810300 2034 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:22:47.815121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:22:47.815361 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:22:50.454998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:22:50.468637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:22:50.494114 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)...
Sep 5 00:22:50.494128 systemd[1]: Reloading...
Sep 5 00:22:50.578455 zram_generator::config[2090]: No configuration found.
Sep 5 00:22:51.223320 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:22:51.303314 systemd[1]: Reloading finished in 808 ms.
Sep 5 00:22:51.350242 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:22:51.350358 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:22:51.350733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:22:51.354337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:22:51.611168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:22:51.616520 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:22:51.649714 kubelet[2138]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:22:51.649714 kubelet[2138]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:22:51.649714 kubelet[2138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:22:51.650070 kubelet[2138]: I0905 00:22:51.649755 2138 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:22:53.216780 kubelet[2138]: I0905 00:22:53.216685 2138 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:22:53.216780 kubelet[2138]: I0905 00:22:53.216748 2138 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:22:53.217338 kubelet[2138]: I0905 00:22:53.217210 2138 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:22:53.240120 kubelet[2138]: E0905 00:22:53.240079 2138 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:22:53.240231 kubelet[2138]: I0905 00:22:53.240165 2138 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:22:53.245809 kubelet[2138]: E0905 00:22:53.245764 2138 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:22:53.245873 kubelet[2138]: I0905 00:22:53.245811 2138 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:22:53.252534 kubelet[2138]: I0905 00:22:53.252503 2138 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:22:53.252844 kubelet[2138]: I0905 00:22:53.252814 2138 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:22:53.252998 kubelet[2138]: I0905 00:22:53.252835 2138 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:22:53.253093 kubelet[2138]: I0905 00:22:53.253011 2138 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:22:53.253093 
kubelet[2138]: I0905 00:22:53.253024 2138 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:22:53.253833 kubelet[2138]: I0905 00:22:53.253805 2138 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:22:53.256913 kubelet[2138]: I0905 00:22:53.256886 2138 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:22:53.256967 kubelet[2138]: I0905 00:22:53.256916 2138 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:22:53.256967 kubelet[2138]: I0905 00:22:53.256946 2138 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:22:53.256967 kubelet[2138]: I0905 00:22:53.256965 2138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:22:53.262240 kubelet[2138]: I0905 00:22:53.262114 2138 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:22:53.263938 kubelet[2138]: E0905 00:22:53.263912 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:22:53.264270 kubelet[2138]: E0905 00:22:53.264244 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.155:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:22:53.264510 kubelet[2138]: I0905 00:22:53.264401 2138 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:22:53.265513 kubelet[2138]: W0905 00:22:53.265496 
2138 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:22:53.268709 kubelet[2138]: I0905 00:22:53.268685 2138 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:22:53.268786 kubelet[2138]: I0905 00:22:53.268768 2138 server.go:1289] "Started kubelet" Sep 5 00:22:53.270310 kubelet[2138]: I0905 00:22:53.269582 2138 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:22:53.270388 kubelet[2138]: I0905 00:22:53.270379 2138 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:22:53.271340 kubelet[2138]: I0905 00:22:53.270988 2138 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:22:53.271957 kubelet[2138]: I0905 00:22:53.271939 2138 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:22:53.272705 kubelet[2138]: I0905 00:22:53.272028 2138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:22:53.273848 kubelet[2138]: I0905 00:22:53.273579 2138 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:22:53.273848 kubelet[2138]: I0905 00:22:53.273654 2138 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:22:53.274222 kubelet[2138]: I0905 00:22:53.274168 2138 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:22:53.274340 kubelet[2138]: E0905 00:22:53.274311 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:22:53.274441 kubelet[2138]: I0905 00:22:53.274402 2138 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:22:53.274513 kubelet[2138]: E0905 00:22:53.274498 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:22:53.274651 kubelet[2138]: E0905 00:22:53.274601 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="200ms" Sep 5 00:22:53.275552 kubelet[2138]: E0905 00:22:53.274168 2138 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.155:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.155:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623b16b0c1da5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:22:53.268728411 +0000 UTC m=+1.648138730,LastTimestamp:2025-09-05 00:22:53.268728411 +0000 UTC m=+1.648138730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:22:53.275925 kubelet[2138]: I0905 00:22:53.275901 2138 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:22:53.276756 kubelet[2138]: E0905 00:22:53.276699 2138 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:22:53.277007 kubelet[2138]: I0905 00:22:53.276989 2138 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:22:53.277007 kubelet[2138]: I0905 00:22:53.277005 2138 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:22:53.292186 kubelet[2138]: I0905 00:22:53.292144 2138 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:22:53.292186 kubelet[2138]: I0905 00:22:53.292168 2138 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:22:53.292186 kubelet[2138]: I0905 00:22:53.292190 2138 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:22:53.293132 kubelet[2138]: I0905 00:22:53.293061 2138 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:22:53.294610 kubelet[2138]: I0905 00:22:53.294581 2138 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:22:53.294671 kubelet[2138]: I0905 00:22:53.294632 2138 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:22:53.294671 kubelet[2138]: I0905 00:22:53.294657 2138 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:22:53.294671 kubelet[2138]: I0905 00:22:53.294666 2138 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:22:53.294756 kubelet[2138]: E0905 00:22:53.294729 2138 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:22:53.296315 kubelet[2138]: E0905 00:22:53.296287 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:22:53.299144 kubelet[2138]: I0905 00:22:53.299110 2138 policy_none.go:49] "None policy: Start" Sep 5 00:22:53.299144 kubelet[2138]: I0905 00:22:53.299144 2138 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:22:53.299225 kubelet[2138]: I0905 00:22:53.299163 2138 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:22:53.305901 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:22:53.325978 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:22:53.343084 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 5 00:22:53.344390 kubelet[2138]: E0905 00:22:53.344353 2138 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:22:53.344626 kubelet[2138]: I0905 00:22:53.344604 2138 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:22:53.344708 kubelet[2138]: I0905 00:22:53.344624 2138 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:22:53.344916 kubelet[2138]: I0905 00:22:53.344870 2138 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:22:53.345766 kubelet[2138]: E0905 00:22:53.345726 2138 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:22:53.345766 kubelet[2138]: E0905 00:22:53.345766 2138 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:22:53.406559 systemd[1]: Created slice kubepods-burstable-podf3837a0cfe595fee606e12ed8d94cae8.slice - libcontainer container kubepods-burstable-podf3837a0cfe595fee606e12ed8d94cae8.slice. Sep 5 00:22:53.415292 kubelet[2138]: E0905 00:22:53.415251 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:22:53.417086 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. 
Sep 5 00:22:53.427644 kubelet[2138]: E0905 00:22:53.427609 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:22:53.430207 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:22:53.431751 kubelet[2138]: E0905 00:22:53.431725 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:22:53.445466 kubelet[2138]: I0905 00:22:53.445446 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:22:53.445888 kubelet[2138]: E0905 00:22:53.445841 2138 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost" Sep 5 00:22:53.476493 kubelet[2138]: I0905 00:22:53.474490 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:22:53.476493 kubelet[2138]: I0905 00:22:53.474533 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:22:53.476493 kubelet[2138]: I0905 00:22:53.474558 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/f3837a0cfe595fee606e12ed8d94cae8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3837a0cfe595fee606e12ed8d94cae8\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:22:53.476493 kubelet[2138]: I0905 00:22:53.474577 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3837a0cfe595fee606e12ed8d94cae8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f3837a0cfe595fee606e12ed8d94cae8\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:22:53.476493 kubelet[2138]: I0905 00:22:53.474598 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:22:53.476639 kubelet[2138]: I0905 00:22:53.474617 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:22:53.476639 kubelet[2138]: I0905 00:22:53.474637 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:22:53.476639 kubelet[2138]: I0905 00:22:53.474653 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:22:53.476639 kubelet[2138]: I0905 00:22:53.474674 2138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3837a0cfe595fee606e12ed8d94cae8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3837a0cfe595fee606e12ed8d94cae8\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:22:53.476639 kubelet[2138]: E0905 00:22:53.476517 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="400ms" Sep 5 00:22:53.647632 kubelet[2138]: I0905 00:22:53.647584 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:22:53.647927 kubelet[2138]: E0905 00:22:53.647884 2138 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost" Sep 5 00:22:53.716545 kubelet[2138]: E0905 00:22:53.716516 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:22:53.717123 containerd[1452]: time="2025-09-05T00:22:53.717078037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f3837a0cfe595fee606e12ed8d94cae8,Namespace:kube-system,Attempt:0,}" Sep 5 00:22:53.728333 kubelet[2138]: E0905 00:22:53.728244 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:22:53.728734 containerd[1452]: time="2025-09-05T00:22:53.728679068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:22:53.732932 kubelet[2138]: E0905 00:22:53.732895 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:22:53.735553 containerd[1452]: time="2025-09-05T00:22:53.735524975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:22:53.877628 kubelet[2138]: E0905 00:22:53.877584 2138 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.155:6443: connect: connection refused" interval="800ms" Sep 5 00:22:54.049308 kubelet[2138]: I0905 00:22:54.049225 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:22:54.049806 kubelet[2138]: E0905 00:22:54.049766 2138 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.155:6443/api/v1/nodes\": dial tcp 10.0.0.155:6443: connect: connection refused" node="localhost" Sep 5 00:22:54.149760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1610551255.mount: Deactivated successfully. 
Sep 5 00:22:54.156577 containerd[1452]: time="2025-09-05T00:22:54.156538204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:22:54.158227 containerd[1452]: time="2025-09-05T00:22:54.158190923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:22:54.159161 containerd[1452]: time="2025-09-05T00:22:54.159125996Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:22:54.159944 containerd[1452]: time="2025-09-05T00:22:54.159910257Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:22:54.160866 containerd[1452]: time="2025-09-05T00:22:54.160825062Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:22:54.161716 containerd[1452]: time="2025-09-05T00:22:54.161677871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:22:54.162486 containerd[1452]: time="2025-09-05T00:22:54.162438978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:22:54.163913 containerd[1452]: time="2025-09-05T00:22:54.163877365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:22:54.165845 
containerd[1452]: time="2025-09-05T00:22:54.165809378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 448.648486ms" Sep 5 00:22:54.166457 containerd[1452]: time="2025-09-05T00:22:54.166414683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 437.642991ms" Sep 5 00:22:54.168746 containerd[1452]: time="2025-09-05T00:22:54.168714264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 433.10959ms" Sep 5 00:22:54.250144 kubelet[2138]: E0905 00:22:54.250095 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:22:54.289779 containerd[1452]: time="2025-09-05T00:22:54.289332213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:22:54.289779 containerd[1452]: time="2025-09-05T00:22:54.289588053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:22:54.289779 containerd[1452]: time="2025-09-05T00:22:54.289605996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:22:54.290230 containerd[1452]: time="2025-09-05T00:22:54.289771036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:22:54.291371 containerd[1452]: time="2025-09-05T00:22:54.291252373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:22:54.291488 containerd[1452]: time="2025-09-05T00:22:54.291300313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:22:54.291488 containerd[1452]: time="2025-09-05T00:22:54.291376767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:22:54.291649 containerd[1452]: time="2025-09-05T00:22:54.291562405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:22:54.295684 containerd[1452]: time="2025-09-05T00:22:54.293914124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:22:54.295684 containerd[1452]: time="2025-09-05T00:22:54.295485711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:22:54.295684 containerd[1452]: time="2025-09-05T00:22:54.295521037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:22:54.295684 containerd[1452]: time="2025-09-05T00:22:54.295604153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:22:54.325588 systemd[1]: Started cri-containerd-45a9d1c8c94842c8c81bf67461be20e1f976423a20bc58dfc4a2f3512a31b5c8.scope - libcontainer container 45a9d1c8c94842c8c81bf67461be20e1f976423a20bc58dfc4a2f3512a31b5c8.
Sep 5 00:22:54.329921 systemd[1]: Started cri-containerd-324b2b80ea4a47ef9a132cdaef16948701574667f08d7754c8b13854345337aa.scope - libcontainer container 324b2b80ea4a47ef9a132cdaef16948701574667f08d7754c8b13854345337aa.
Sep 5 00:22:54.332143 systemd[1]: Started cri-containerd-60baf0eec0afa66d695d01c8f4acb6ae7c72f0608bdf28d8e280e4923805d44f.scope - libcontainer container 60baf0eec0afa66d695d01c8f4acb6ae7c72f0608bdf28d8e280e4923805d44f.
Sep 5 00:22:54.371168 containerd[1452]: time="2025-09-05T00:22:54.371123983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"324b2b80ea4a47ef9a132cdaef16948701574667f08d7754c8b13854345337aa\""
Sep 5 00:22:54.372682 kubelet[2138]: E0905 00:22:54.372649 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:54.375855 containerd[1452]: time="2025-09-05T00:22:54.375820378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"60baf0eec0afa66d695d01c8f4acb6ae7c72f0608bdf28d8e280e4923805d44f\""
Sep 5 00:22:54.378580 kubelet[2138]: E0905 00:22:54.378343 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:54.381057 containerd[1452]: time="2025-09-05T00:22:54.381018926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f3837a0cfe595fee606e12ed8d94cae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"45a9d1c8c94842c8c81bf67461be20e1f976423a20bc58dfc4a2f3512a31b5c8\""
Sep 5 00:22:54.381616 kubelet[2138]: E0905 00:22:54.381578 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:54.381791 containerd[1452]: time="2025-09-05T00:22:54.381754334Z" level=info msg="CreateContainer within sandbox \"324b2b80ea4a47ef9a132cdaef16948701574667f08d7754c8b13854345337aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 5 00:22:54.383524 containerd[1452]: time="2025-09-05T00:22:54.383497864Z" level=info msg="CreateContainer within sandbox \"60baf0eec0afa66d695d01c8f4acb6ae7c72f0608bdf28d8e280e4923805d44f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 5 00:22:54.386283 containerd[1452]: time="2025-09-05T00:22:54.386257879Z" level=info msg="CreateContainer within sandbox \"45a9d1c8c94842c8c81bf67461be20e1f976423a20bc58dfc4a2f3512a31b5c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 5 00:22:54.400148 containerd[1452]: time="2025-09-05T00:22:54.400111805Z" level=info msg="CreateContainer within sandbox \"324b2b80ea4a47ef9a132cdaef16948701574667f08d7754c8b13854345337aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"277c6fdf045f8cec4b036c4a0c08691e089e2de1dc7ea92c7d9af8d86c6a7493\""
Sep 5 00:22:54.400665 containerd[1452]: time="2025-09-05T00:22:54.400642510Z" level=info msg="StartContainer for \"277c6fdf045f8cec4b036c4a0c08691e089e2de1dc7ea92c7d9af8d86c6a7493\""
Sep 5 00:22:54.405111 containerd[1452]: time="2025-09-05T00:22:54.405027502Z" level=info msg="CreateContainer within sandbox \"60baf0eec0afa66d695d01c8f4acb6ae7c72f0608bdf28d8e280e4923805d44f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ca8ed5e2985d548dea73d31633b006792b360902db0616c746b339e0a6d47c93\""
Sep 5 00:22:54.406844 containerd[1452]: time="2025-09-05T00:22:54.406814843Z" level=info msg="StartContainer for \"ca8ed5e2985d548dea73d31633b006792b360902db0616c746b339e0a6d47c93\""
Sep 5 00:22:54.408034 containerd[1452]: time="2025-09-05T00:22:54.408003271Z" level=info msg="CreateContainer within sandbox \"45a9d1c8c94842c8c81bf67461be20e1f976423a20bc58dfc4a2f3512a31b5c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90eca319955c94e58c2c8c241c0268c20f0950068636a6b5f6876bcea672f07a\""
Sep 5 00:22:54.408614 containerd[1452]: time="2025-09-05T00:22:54.408589320Z" level=info msg="StartContainer for \"90eca319955c94e58c2c8c241c0268c20f0950068636a6b5f6876bcea672f07a\""
Sep 5 00:22:54.434569 systemd[1]: Started cri-containerd-277c6fdf045f8cec4b036c4a0c08691e089e2de1dc7ea92c7d9af8d86c6a7493.scope - libcontainer container 277c6fdf045f8cec4b036c4a0c08691e089e2de1dc7ea92c7d9af8d86c6a7493.
Sep 5 00:22:54.434893 kubelet[2138]: E0905 00:22:54.434857 2138 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.155:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 5 00:22:54.438792 systemd[1]: Started cri-containerd-90eca319955c94e58c2c8c241c0268c20f0950068636a6b5f6876bcea672f07a.scope - libcontainer container 90eca319955c94e58c2c8c241c0268c20f0950068636a6b5f6876bcea672f07a.
Sep 5 00:22:54.441147 systemd[1]: Started cri-containerd-ca8ed5e2985d548dea73d31633b006792b360902db0616c746b339e0a6d47c93.scope - libcontainer container ca8ed5e2985d548dea73d31633b006792b360902db0616c746b339e0a6d47c93.
Sep 5 00:22:54.486453 containerd[1452]: time="2025-09-05T00:22:54.486366913Z" level=info msg="StartContainer for \"277c6fdf045f8cec4b036c4a0c08691e089e2de1dc7ea92c7d9af8d86c6a7493\" returns successfully"
Sep 5 00:22:54.493586 containerd[1452]: time="2025-09-05T00:22:54.493543319Z" level=info msg="StartContainer for \"90eca319955c94e58c2c8c241c0268c20f0950068636a6b5f6876bcea672f07a\" returns successfully"
Sep 5 00:22:54.497129 containerd[1452]: time="2025-09-05T00:22:54.496630557Z" level=info msg="StartContainer for \"ca8ed5e2985d548dea73d31633b006792b360902db0616c746b339e0a6d47c93\" returns successfully"
Sep 5 00:22:54.851647 kubelet[2138]: I0905 00:22:54.851587 2138 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:22:55.302888 kubelet[2138]: E0905 00:22:55.302591 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:22:55.302888 kubelet[2138]: E0905 00:22:55.302733 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:55.304384 kubelet[2138]: E0905 00:22:55.304326 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:22:55.304652 kubelet[2138]: E0905 00:22:55.304528 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:55.306925 kubelet[2138]: E0905 00:22:55.306896 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:22:55.307028 kubelet[2138]: E0905 00:22:55.307005 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:55.738451 kubelet[2138]: E0905 00:22:55.738388 2138 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 5 00:22:55.835166 kubelet[2138]: I0905 00:22:55.835116 2138 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 5 00:22:55.835166 kubelet[2138]: E0905 00:22:55.835171 2138 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 5 00:22:55.843729 kubelet[2138]: E0905 00:22:55.843694 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:55.944447 kubelet[2138]: E0905 00:22:55.944401 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.045415 kubelet[2138]: E0905 00:22:56.045302 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.146325 kubelet[2138]: E0905 00:22:56.146279 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.247140 kubelet[2138]: E0905 00:22:56.247100 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.308977 kubelet[2138]: E0905 00:22:56.308852 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:22:56.309383 kubelet[2138]: E0905 00:22:56.308991 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:56.309383 kubelet[2138]: E0905 00:22:56.309008 2138 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 5 00:22:56.309383 kubelet[2138]: E0905 00:22:56.309118 2138 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:56.347481 kubelet[2138]: E0905 00:22:56.347447 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.448021 kubelet[2138]: E0905 00:22:56.447979 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.548919 kubelet[2138]: E0905 00:22:56.548886 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.649582 kubelet[2138]: E0905 00:22:56.649550 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.750862 kubelet[2138]: E0905 00:22:56.750516 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.851486 kubelet[2138]: E0905 00:22:56.851435 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:56.952222 kubelet[2138]: E0905 00:22:56.952084 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:57.052747 kubelet[2138]: E0905 00:22:57.052695 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:57.153186 kubelet[2138]: E0905 00:22:57.153131 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:57.254096 kubelet[2138]: E0905 00:22:57.253968 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:57.354604 kubelet[2138]: E0905 00:22:57.354565 2138 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:22:57.475600 kubelet[2138]: I0905 00:22:57.475569 2138 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:57.482486 kubelet[2138]: I0905 00:22:57.482452 2138 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:22:57.484837 kubelet[2138]: I0905 00:22:57.484818 2138 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:57.556101 systemd[1]: Reloading requested from client PID 2426 ('systemctl') (unit session-7.scope)...
Sep 5 00:22:57.556117 systemd[1]: Reloading...
Sep 5 00:22:57.633618 zram_generator::config[2468]: No configuration found.
Sep 5 00:22:57.745312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:22:57.836786 systemd[1]: Reloading finished in 280 ms.
Sep 5 00:22:57.881098 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:22:57.899038 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 00:22:57.899363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:22:57.899414 systemd[1]: kubelet.service: Consumed 1.223s CPU time, 133.4M memory peak, 0B memory swap peak.
Sep 5 00:22:57.907891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:22:58.084175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:22:58.090088 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:22:58.128486 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:22:58.128486 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:22:58.128486 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:22:58.128840 kubelet[2510]: I0905 00:22:58.128554 2510 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:22:58.135245 kubelet[2510]: I0905 00:22:58.135195 2510 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 00:22:58.135245 kubelet[2510]: I0905 00:22:58.135225 2510 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:22:58.135524 kubelet[2510]: I0905 00:22:58.135498 2510 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 00:22:58.136870 kubelet[2510]: I0905 00:22:58.136851 2510 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 5 00:22:58.139419 kubelet[2510]: I0905 00:22:58.139391 2510 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:22:58.144389 kubelet[2510]: E0905 00:22:58.144341 2510 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 00:22:58.144389 kubelet[2510]: I0905 00:22:58.144382 2510 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 00:22:58.153003 kubelet[2510]: I0905 00:22:58.152967 2510 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:22:58.153277 kubelet[2510]: I0905 00:22:58.153241 2510 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:22:58.153437 kubelet[2510]: I0905 00:22:58.153273 2510 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:22:58.153527 kubelet[2510]: I0905 00:22:58.153440 2510 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:22:58.153527 kubelet[2510]: I0905 00:22:58.153450 2510 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 00:22:58.153527 kubelet[2510]: I0905 00:22:58.153501 2510 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:22:58.153688 kubelet[2510]: I0905 00:22:58.153673 2510 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 00:22:58.153724 kubelet[2510]: I0905 00:22:58.153688 2510 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:22:58.153745 kubelet[2510]: I0905 00:22:58.153724 2510 kubelet.go:386] "Adding apiserver pod source"
Sep 5 00:22:58.153745 kubelet[2510]: I0905 00:22:58.153741 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:22:58.154490 kubelet[2510]: I0905 00:22:58.154469 2510 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 00:22:58.154921 kubelet[2510]: I0905 00:22:58.154892 2510 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 00:22:58.157299 kubelet[2510]: I0905 00:22:58.157271 2510 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 5 00:22:58.157355 kubelet[2510]: I0905 00:22:58.157307 2510 server.go:1289] "Started kubelet"
Sep 5 00:22:58.159071 kubelet[2510]: I0905 00:22:58.159012 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:22:58.159541 kubelet[2510]: I0905 00:22:58.159297 2510 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:22:58.160787 kubelet[2510]: I0905 00:22:58.160754 2510 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:22:58.163255 kubelet[2510]: I0905 00:22:58.161774 2510 server.go:317] "Adding debug handlers to kubelet server"
Sep 5 00:22:58.165811 kubelet[2510]: I0905 00:22:58.165783 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:22:58.169462 kubelet[2510]: I0905 00:22:58.169436 2510 factory.go:223] Registration of the systemd container factory successfully
Sep 5 00:22:58.171460 kubelet[2510]: I0905 00:22:58.169529 2510 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:22:58.171460 kubelet[2510]: I0905 00:22:58.171123 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:22:58.171460 kubelet[2510]: I0905 00:22:58.171237 2510 factory.go:223] Registration of the containerd container factory successfully
Sep 5 00:22:58.172531 kubelet[2510]: I0905 00:22:58.171685 2510 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 5 00:22:58.173479 kubelet[2510]: I0905 00:22:58.173448 2510 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 5 00:22:58.173619 kubelet[2510]: I0905 00:22:58.173598 2510 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:22:58.180945 kubelet[2510]: I0905 00:22:58.180904 2510 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:22:58.189542 kubelet[2510]: I0905 00:22:58.189481 2510 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:22:58.189542 kubelet[2510]: I0905 00:22:58.189505 2510 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 5 00:22:58.189542 kubelet[2510]: I0905 00:22:58.189525 2510 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 5 00:22:58.189542 kubelet[2510]: I0905 00:22:58.189533 2510 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 5 00:22:58.189657 kubelet[2510]: E0905 00:22:58.189576 2510 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:22:58.205868 kubelet[2510]: I0905 00:22:58.205848 2510 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 5 00:22:58.205868 kubelet[2510]: I0905 00:22:58.205863 2510 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 5 00:22:58.205944 kubelet[2510]: I0905 00:22:58.205882 2510 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:22:58.206035 kubelet[2510]: I0905 00:22:58.206019 2510 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 00:22:58.206063 kubelet[2510]: I0905 00:22:58.206033 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 00:22:58.206084 kubelet[2510]: I0905 00:22:58.206064 2510 policy_none.go:49] "None policy: Start"
Sep 5 00:22:58.206084 kubelet[2510]: I0905 00:22:58.206074 2510 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 5 00:22:58.206126 kubelet[2510]: I0905 00:22:58.206098 2510 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:22:58.206210 kubelet[2510]: I0905 00:22:58.206195 2510 state_mem.go:75] "Updated machine memory state"
Sep 5 00:22:58.212854 kubelet[2510]: E0905 00:22:58.212821 2510 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 5 00:22:58.213023 kubelet[2510]: I0905 00:22:58.213001 2510 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:22:58.213052 kubelet[2510]: I0905 00:22:58.213021 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:22:58.213447 kubelet[2510]: I0905 00:22:58.213233 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:22:58.214064 kubelet[2510]: E0905 00:22:58.213997 2510 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 5 00:22:58.290653 kubelet[2510]: I0905 00:22:58.290623 2510 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:22:58.290764 kubelet[2510]: I0905 00:22:58.290693 2510 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.290816 kubelet[2510]: I0905 00:22:58.290778 2510 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:58.320336 kubelet[2510]: I0905 00:22:58.320288 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 5 00:22:58.404090 kubelet[2510]: E0905 00:22:58.404064 2510 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:22:58.433127 kubelet[2510]: I0905 00:22:58.433066 2510 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 5 00:22:58.433250 kubelet[2510]: I0905 00:22:58.433168 2510 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 5 00:22:58.433250 kubelet[2510]: E0905 00:22:58.433194 2510 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:58.433332 kubelet[2510]: E0905 00:22:58.433208 2510 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.474780 kubelet[2510]: I0905 00:22:58.474741 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3837a0cfe595fee606e12ed8d94cae8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3837a0cfe595fee606e12ed8d94cae8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:58.474780 kubelet[2510]: I0905 00:22:58.474773 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3837a0cfe595fee606e12ed8d94cae8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3837a0cfe595fee606e12ed8d94cae8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:58.474856 kubelet[2510]: I0905 00:22:58.474793 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.474856 kubelet[2510]: I0905 00:22:58.474817 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.474914 kubelet[2510]: I0905 00:22:58.474859 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.474942 kubelet[2510]: I0905 00:22:58.474912 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3837a0cfe595fee606e12ed8d94cae8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f3837a0cfe595fee606e12ed8d94cae8\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:58.475018 kubelet[2510]: I0905 00:22:58.474961 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.475043 kubelet[2510]: I0905 00:22:58.475023 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:22:58.475073 kubelet[2510]: I0905 00:22:58.475046 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:22:58.563417 sudo[2552]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 5 00:22:58.563825 sudo[2552]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 5 00:22:58.705728 kubelet[2510]: E0905 00:22:58.704868 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:58.733926 kubelet[2510]: E0905 00:22:58.733888 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:58.734227 kubelet[2510]: E0905 00:22:58.734189 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:59.044579 sudo[2552]: pam_unix(sudo:session): session closed for user root
Sep 5 00:22:59.155162 kubelet[2510]: I0905 00:22:59.155111 2510 apiserver.go:52] "Watching apiserver"
Sep 5 00:22:59.174259 kubelet[2510]: I0905 00:22:59.174218 2510 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 5 00:22:59.200755 kubelet[2510]: I0905 00:22:59.200732 2510 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:22:59.201233 kubelet[2510]: E0905 00:22:59.201199 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:59.201764 kubelet[2510]: I0905 00:22:59.201734 2510 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:59.207012 kubelet[2510]: E0905 00:22:59.206983 2510 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 5 00:22:59.207166 kubelet[2510]: E0905 00:22:59.207148 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:59.209262 kubelet[2510]: E0905 00:22:59.209024 2510 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:22:59.209262 kubelet[2510]: E0905 00:22:59.209187 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:22:59.218401 kubelet[2510]: I0905 00:22:59.218321 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.218303197 podStartE2EDuration="2.218303197s" podCreationTimestamp="2025-09-05 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:22:59.218146297 +0000 UTC m=+1.123817368" watchObservedRunningTime="2025-09-05 00:22:59.218303197 +0000 UTC m=+1.123974268"
Sep 5 00:22:59.224262 kubelet[2510]: I0905 00:22:59.224171 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.224152397 podStartE2EDuration="2.224152397s" podCreationTimestamp="2025-09-05 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:22:59.224124955 +0000 UTC m=+1.129796026" watchObservedRunningTime="2025-09-05 00:22:59.224152397 +0000 UTC m=+1.129823468"
Sep 5 00:23:00.202750 kubelet[2510]: E0905 00:23:00.202694 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:00.203249 kubelet[2510]: E0905 00:23:00.202910 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:00.263922 sudo[1636]: pam_unix(sudo:session): session closed for user root
Sep 5 00:23:00.265819 sshd[1633]: pam_unix(sshd:session): session closed for user core
Sep 5 00:23:00.270475 systemd[1]: sshd@6-10.0.0.155:22-10.0.0.1:33014.service: Deactivated successfully.
Sep 5 00:23:00.272710 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 00:23:00.272916 systemd[1]: session-7.scope: Consumed 5.266s CPU time, 157.2M memory peak, 0B memory swap peak.
Sep 5 00:23:00.273366 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit.
Sep 5 00:23:00.274586 systemd-logind[1438]: Removed session 7.
Sep 5 00:23:01.204456 kubelet[2510]: E0905 00:23:01.204405 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:02.920470 kubelet[2510]: E0905 00:23:02.920414 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:03.748021 kubelet[2510]: I0905 00:23:03.747979 2510 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:23:03.748362 containerd[1452]: time="2025-09-05T00:23:03.748324310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:23:03.748814 kubelet[2510]: I0905 00:23:03.748540 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:23:04.693466 kubelet[2510]: I0905 00:23:04.691845 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.691826025 podStartE2EDuration="7.691826025s" podCreationTimestamp="2025-09-05 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:22:59.230074087 +0000 UTC m=+1.135745158" watchObservedRunningTime="2025-09-05 00:23:04.691826025 +0000 UTC m=+6.597497096"
Sep 5 00:23:04.880347 systemd[1]: Created slice kubepods-besteffort-pod66487053_7f94_4e7a_827a_40a9fd7256a7.slice - libcontainer container kubepods-besteffort-pod66487053_7f94_4e7a_827a_40a9fd7256a7.slice.
Sep 5 00:23:04.893980 systemd[1]: Created slice kubepods-burstable-podae42665c_257c_4156_b48c_fc5ddd651b05.slice - libcontainer container kubepods-burstable-podae42665c_257c_4156_b48c_fc5ddd651b05.slice.
Sep 5 00:23:04.916324 kubelet[2510]: I0905 00:23:04.916279 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbpq2\" (UniqueName: \"kubernetes.io/projected/66487053-7f94-4e7a-827a-40a9fd7256a7-kube-api-access-nbpq2\") pod \"kube-proxy-2xt48\" (UID: \"66487053-7f94-4e7a-827a-40a9fd7256a7\") " pod="kube-system/kube-proxy-2xt48"
Sep 5 00:23:04.916324 kubelet[2510]: I0905 00:23:04.916314 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cni-path\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916324 kubelet[2510]: I0905 00:23:04.916334 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-hubble-tls\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916501 kubelet[2510]: I0905 00:23:04.916350 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-run\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916501 kubelet[2510]: I0905 00:23:04.916372 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66487053-7f94-4e7a-827a-40a9fd7256a7-xtables-lock\") pod \"kube-proxy-2xt48\" (UID: \"66487053-7f94-4e7a-827a-40a9fd7256a7\") " pod="kube-system/kube-proxy-2xt48"
Sep 5 00:23:04.916501 kubelet[2510]: I0905 00:23:04.916449 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66487053-7f94-4e7a-827a-40a9fd7256a7-lib-modules\") pod \"kube-proxy-2xt48\" (UID: \"66487053-7f94-4e7a-827a-40a9fd7256a7\") " pod="kube-system/kube-proxy-2xt48"
Sep 5 00:23:04.916501 kubelet[2510]: I0905 00:23:04.916496 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-hostproc\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916601 kubelet[2510]: I0905 00:23:04.916518 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-cgroup\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916630 kubelet[2510]: I0905 00:23:04.916587 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-bpf-maps\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916662 kubelet[2510]: I0905 00:23:04.916637 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-xtables-lock\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916662 kubelet[2510]: I0905 00:23:04.916654 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae42665c-257c-4156-b48c-fc5ddd651b05-clustermesh-secrets\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916709 kubelet[2510]: I0905 00:23:04.916670 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-config-path\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916709 kubelet[2510]: I0905 00:23:04.916685 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-net\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916709 kubelet[2510]: I0905 00:23:04.916700 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-kernel\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916786 kubelet[2510]: I0905 00:23:04.916713 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5k8\" (UniqueName: \"kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-kube-api-access-mn5k8\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916786 kubelet[2510]: I0905 00:23:04.916731 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66487053-7f94-4e7a-827a-40a9fd7256a7-kube-proxy\") pod \"kube-proxy-2xt48\" (UID: \"66487053-7f94-4e7a-827a-40a9fd7256a7\") " pod="kube-system/kube-proxy-2xt48"
Sep 5 00:23:04.916786 kubelet[2510]: I0905 00:23:04.916745 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-etc-cni-netd\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.916786 kubelet[2510]: I0905 00:23:04.916767 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-lib-modules\") pod \"cilium-cps9z\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") " pod="kube-system/cilium-cps9z"
Sep 5 00:23:04.958813 systemd[1]: Created slice kubepods-besteffort-pod76ce4e2b_38e6_4930_84ab_14aff345feb3.slice - libcontainer container kubepods-besteffort-pod76ce4e2b_38e6_4930_84ab_14aff345feb3.slice.
Sep 5 00:23:05.017484 kubelet[2510]: I0905 00:23:05.017413 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76ce4e2b-38e6-4930-84ab-14aff345feb3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fkhxz\" (UID: \"76ce4e2b-38e6-4930-84ab-14aff345feb3\") " pod="kube-system/cilium-operator-6c4d7847fc-fkhxz"
Sep 5 00:23:05.018030 kubelet[2510]: I0905 00:23:05.017978 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fthq\" (UniqueName: \"kubernetes.io/projected/76ce4e2b-38e6-4930-84ab-14aff345feb3-kube-api-access-4fthq\") pod \"cilium-operator-6c4d7847fc-fkhxz\" (UID: \"76ce4e2b-38e6-4930-84ab-14aff345feb3\") " pod="kube-system/cilium-operator-6c4d7847fc-fkhxz"
Sep 5 00:23:05.191074 kubelet[2510]: E0905 00:23:05.191019 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.191522 containerd[1452]: time="2025-09-05T00:23:05.191483386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xt48,Uid:66487053-7f94-4e7a-827a-40a9fd7256a7,Namespace:kube-system,Attempt:0,}"
Sep 5 00:23:05.198339 kubelet[2510]: E0905 00:23:05.198297 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.198946 containerd[1452]: time="2025-09-05T00:23:05.198894111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cps9z,Uid:ae42665c-257c-4156-b48c-fc5ddd651b05,Namespace:kube-system,Attempt:0,}"
Sep 5 00:23:05.223610 containerd[1452]: time="2025-09-05T00:23:05.220702312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:23:05.223610 containerd[1452]: time="2025-09-05T00:23:05.220783386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:23:05.223610 containerd[1452]: time="2025-09-05T00:23:05.220797111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:23:05.223610 containerd[1452]: time="2025-09-05T00:23:05.220882064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:23:05.229383 containerd[1452]: time="2025-09-05T00:23:05.229280848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:23:05.229482 containerd[1452]: time="2025-09-05T00:23:05.229392811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:23:05.229482 containerd[1452]: time="2025-09-05T00:23:05.229450281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:23:05.229663 containerd[1452]: time="2025-09-05T00:23:05.229609924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:23:05.244044 systemd[1]: Started cri-containerd-194435b77fd855efe6b65ce8251dfa8e44968a84495ac3365e463fd2173dee10.scope - libcontainer container 194435b77fd855efe6b65ce8251dfa8e44968a84495ac3365e463fd2173dee10.
Sep 5 00:23:05.247707 systemd[1]: Started cri-containerd-217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92.scope - libcontainer container 217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92.
Sep 5 00:23:05.262229 kubelet[2510]: E0905 00:23:05.261899 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.264367 containerd[1452]: time="2025-09-05T00:23:05.264335313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fkhxz,Uid:76ce4e2b-38e6-4930-84ab-14aff345feb3,Namespace:kube-system,Attempt:0,}"
Sep 5 00:23:05.272659 containerd[1452]: time="2025-09-05T00:23:05.272618729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xt48,Uid:66487053-7f94-4e7a-827a-40a9fd7256a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"194435b77fd855efe6b65ce8251dfa8e44968a84495ac3365e463fd2173dee10\""
Sep 5 00:23:05.273886 kubelet[2510]: E0905 00:23:05.273389 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.279289 containerd[1452]: time="2025-09-05T00:23:05.279248118Z" level=info msg="CreateContainer within sandbox \"194435b77fd855efe6b65ce8251dfa8e44968a84495ac3365e463fd2173dee10\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 00:23:05.291718 containerd[1452]: time="2025-09-05T00:23:05.291675165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cps9z,Uid:ae42665c-257c-4156-b48c-fc5ddd651b05,Namespace:kube-system,Attempt:0,} returns sandbox id \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\""
Sep 5 00:23:05.292866 kubelet[2510]: E0905 00:23:05.292718 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.294028 containerd[1452]: time="2025-09-05T00:23:05.293999105Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 5 00:23:05.297040 containerd[1452]: time="2025-09-05T00:23:05.296951020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:23:05.297340 containerd[1452]: time="2025-09-05T00:23:05.297205564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:23:05.297340 containerd[1452]: time="2025-09-05T00:23:05.297269717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:23:05.297633 containerd[1452]: time="2025-09-05T00:23:05.297553696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:23:05.301548 containerd[1452]: time="2025-09-05T00:23:05.301511385Z" level=info msg="CreateContainer within sandbox \"194435b77fd855efe6b65ce8251dfa8e44968a84495ac3365e463fd2173dee10\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb345c8f16d5e96d0dd30f289c7acd943e95585d8dffb8447b08ad7836f9a132\""
Sep 5 00:23:05.302341 containerd[1452]: time="2025-09-05T00:23:05.302234419Z" level=info msg="StartContainer for \"cb345c8f16d5e96d0dd30f289c7acd943e95585d8dffb8447b08ad7836f9a132\""
Sep 5 00:23:05.317555 systemd[1]: Started cri-containerd-4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0.scope - libcontainer container 4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0.
Sep 5 00:23:05.338569 systemd[1]: Started cri-containerd-cb345c8f16d5e96d0dd30f289c7acd943e95585d8dffb8447b08ad7836f9a132.scope - libcontainer container cb345c8f16d5e96d0dd30f289c7acd943e95585d8dffb8447b08ad7836f9a132.
Sep 5 00:23:05.358054 containerd[1452]: time="2025-09-05T00:23:05.357987204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fkhxz,Uid:76ce4e2b-38e6-4930-84ab-14aff345feb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0\""
Sep 5 00:23:05.358938 kubelet[2510]: E0905 00:23:05.358705 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.371489 kubelet[2510]: E0905 00:23:05.371049 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:05.375070 containerd[1452]: time="2025-09-05T00:23:05.375028407Z" level=info msg="StartContainer for \"cb345c8f16d5e96d0dd30f289c7acd943e95585d8dffb8447b08ad7836f9a132\" returns successfully"
Sep 5 00:23:06.212414 kubelet[2510]: E0905 00:23:06.212374 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:06.213374 kubelet[2510]: E0905 00:23:06.213348 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:06.220974 kubelet[2510]: I0905 00:23:06.220919 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2xt48" podStartSLOduration=2.220901619 podStartE2EDuration="2.220901619s" podCreationTimestamp="2025-09-05 00:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:23:06.220697261 +0000 UTC m=+8.126368332" watchObservedRunningTime="2025-09-05 00:23:06.220901619 +0000 UTC m=+8.126572700"
Sep 5 00:23:07.214150 kubelet[2510]: E0905 00:23:07.214115 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:08.245689 kubelet[2510]: E0905 00:23:08.245654 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:08.912640 update_engine[1439]: I20250905 00:23:08.912573 1439 update_attempter.cc:509] Updating boot flags...
Sep 5 00:23:08.939458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2899)
Sep 5 00:23:08.971447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2901)
Sep 5 00:23:09.017465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2901)
Sep 5 00:23:09.217467 kubelet[2510]: E0905 00:23:09.217301 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:10.218118 kubelet[2510]: E0905 00:23:10.218077 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:12.924754 kubelet[2510]: E0905 00:23:12.924690 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:17.111811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2229473074.mount: Deactivated successfully.
Sep 5 00:23:19.486445 containerd[1452]: time="2025-09-05T00:23:19.486379220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:23:19.487127 containerd[1452]: time="2025-09-05T00:23:19.487069791Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 5 00:23:19.488271 containerd[1452]: time="2025-09-05T00:23:19.488233025Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:23:19.489721 containerd[1452]: time="2025-09-05T00:23:19.489682930Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.195550191s"
Sep 5 00:23:19.489774 containerd[1452]: time="2025-09-05T00:23:19.489723596Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 5 00:23:19.490771 containerd[1452]: time="2025-09-05T00:23:19.490732799Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 5 00:23:19.494850 containerd[1452]: time="2025-09-05T00:23:19.494820990Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 5 00:23:19.506741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293238700.mount: Deactivated successfully.
Sep 5 00:23:19.513597 containerd[1452]: time="2025-09-05T00:23:19.513556865Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\""
Sep 5 00:23:19.514115 containerd[1452]: time="2025-09-05T00:23:19.514079130Z" level=info msg="StartContainer for \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\""
Sep 5 00:23:19.551564 systemd[1]: Started cri-containerd-17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276.scope - libcontainer container 17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276.
Sep 5 00:23:19.579011 containerd[1452]: time="2025-09-05T00:23:19.578952659Z" level=info msg="StartContainer for \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\" returns successfully"
Sep 5 00:23:19.588606 systemd[1]: cri-containerd-17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276.scope: Deactivated successfully.
Sep 5 00:23:20.473222 kubelet[2510]: E0905 00:23:20.473161 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:20.504794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276-rootfs.mount: Deactivated successfully.
Sep 5 00:23:20.599704 containerd[1452]: time="2025-09-05T00:23:20.599610008Z" level=info msg="shim disconnected" id=17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276 namespace=k8s.io
Sep 5 00:23:20.599704 containerd[1452]: time="2025-09-05T00:23:20.599693345Z" level=warning msg="cleaning up after shim disconnected" id=17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276 namespace=k8s.io
Sep 5 00:23:20.599704 containerd[1452]: time="2025-09-05T00:23:20.599706941Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:23:21.114676 systemd[1]: Started sshd@7-10.0.0.155:22-10.0.0.1:52544.service - OpenSSH per-connection server daemon (10.0.0.1:52544).
Sep 5 00:23:21.151878 sshd[2998]: Accepted publickey for core from 10.0.0.1 port 52544 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:23:21.153594 sshd[2998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:23:21.157965 systemd-logind[1438]: New session 8 of user core.
Sep 5 00:23:21.168572 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 5 00:23:21.288817 sshd[2998]: pam_unix(sshd:session): session closed for user core
Sep 5 00:23:21.293402 systemd[1]: sshd@7-10.0.0.155:22-10.0.0.1:52544.service: Deactivated successfully.
Sep 5 00:23:21.296088 systemd[1]: session-8.scope: Deactivated successfully.
Sep 5 00:23:21.296831 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit.
Sep 5 00:23:21.297854 systemd-logind[1438]: Removed session 8.
Sep 5 00:23:21.477117 kubelet[2510]: E0905 00:23:21.475270 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:21.481575 containerd[1452]: time="2025-09-05T00:23:21.481525953Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 5 00:23:21.502521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931027063.mount: Deactivated successfully.
Sep 5 00:23:21.504758 containerd[1452]: time="2025-09-05T00:23:21.504708132Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\""
Sep 5 00:23:21.505395 containerd[1452]: time="2025-09-05T00:23:21.505357937Z" level=info msg="StartContainer for \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\""
Sep 5 00:23:21.543701 systemd[1]: Started cri-containerd-61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5.scope - libcontainer container 61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5.
Sep 5 00:23:21.630637 containerd[1452]: time="2025-09-05T00:23:21.630573462Z" level=info msg="StartContainer for \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\" returns successfully"
Sep 5 00:23:21.636133 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:23:21.636379 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:23:21.636469 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:23:21.641817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:23:21.642053 systemd[1]: cri-containerd-61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5.scope: Deactivated successfully.
Sep 5 00:23:21.654203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996130956.mount: Deactivated successfully.
Sep 5 00:23:21.664374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5-rootfs.mount: Deactivated successfully.
Sep 5 00:23:21.666095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:23:21.685050 containerd[1452]: time="2025-09-05T00:23:21.684990718Z" level=info msg="shim disconnected" id=61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5 namespace=k8s.io
Sep 5 00:23:21.685050 containerd[1452]: time="2025-09-05T00:23:21.685049830Z" level=warning msg="cleaning up after shim disconnected" id=61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5 namespace=k8s.io
Sep 5 00:23:21.685168 containerd[1452]: time="2025-09-05T00:23:21.685059669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:23:22.095827 containerd[1452]: time="2025-09-05T00:23:22.095766372Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:23:22.096624 containerd[1452]: time="2025-09-05T00:23:22.096569315Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 5 00:23:22.097736 containerd[1452]: time="2025-09-05T00:23:22.097687933Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:23:22.099099 containerd[1452]: time="2025-09-05T00:23:22.099060970Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.608298103s"
Sep 5 00:23:22.099099 containerd[1452]: time="2025-09-05T00:23:22.099095424Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 5 00:23:22.103622 containerd[1452]: time="2025-09-05T00:23:22.103564384Z" level=info msg="CreateContainer within sandbox \"4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 5 00:23:22.191035 containerd[1452]: time="2025-09-05T00:23:22.190978425Z" level=info msg="CreateContainer within sandbox \"4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\""
Sep 5 00:23:22.191878 containerd[1452]: time="2025-09-05T00:23:22.191834888Z" level=info msg="StartContainer for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\""
Sep 5 00:23:22.226576 systemd[1]: Started cri-containerd-02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948.scope - libcontainer container 02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948.
Sep 5 00:23:22.296474 containerd[1452]: time="2025-09-05T00:23:22.296366118Z" level=info msg="StartContainer for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" returns successfully"
Sep 5 00:23:22.479521 kubelet[2510]: E0905 00:23:22.478576 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:22.481192 kubelet[2510]: E0905 00:23:22.480953 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:22.488540 kubelet[2510]: I0905 00:23:22.488274 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fkhxz" podStartSLOduration=1.7479478080000002 podStartE2EDuration="18.488257868s" podCreationTimestamp="2025-09-05 00:23:04 +0000 UTC" firstStartedPulling="2025-09-05 00:23:05.35946269 +0000 UTC m=+7.265133761" lastFinishedPulling="2025-09-05 00:23:22.09977275 +0000 UTC m=+24.005443821" observedRunningTime="2025-09-05 00:23:22.487169628 +0000 UTC m=+24.392840709" watchObservedRunningTime="2025-09-05 00:23:22.488257868 +0000 UTC m=+24.393928939"
Sep 5 00:23:22.489657 containerd[1452]: time="2025-09-05T00:23:22.489607440Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:23:22.519784 containerd[1452]: time="2025-09-05T00:23:22.519714883Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\""
Sep 5 00:23:22.524443 containerd[1452]: time="2025-09-05T00:23:22.521570299Z" level=info msg="StartContainer for \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\""
Sep 5 00:23:22.625566 systemd[1]: Started cri-containerd-01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe.scope - libcontainer container 01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe.
Sep 5 00:23:22.657970 systemd[1]: cri-containerd-01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe.scope: Deactivated successfully.
Sep 5 00:23:22.677868 containerd[1452]: time="2025-09-05T00:23:22.677826210Z" level=info msg="StartContainer for \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\" returns successfully"
Sep 5 00:23:22.697487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe-rootfs.mount: Deactivated successfully.
Sep 5 00:23:22.701227 containerd[1452]: time="2025-09-05T00:23:22.701165923Z" level=info msg="shim disconnected" id=01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe namespace=k8s.io
Sep 5 00:23:22.701335 containerd[1452]: time="2025-09-05T00:23:22.701228261Z" level=warning msg="cleaning up after shim disconnected" id=01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe namespace=k8s.io
Sep 5 00:23:22.701335 containerd[1452]: time="2025-09-05T00:23:22.701238340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:23:23.484216 kubelet[2510]: E0905 00:23:23.484181 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:23.484905 kubelet[2510]: E0905 00:23:23.484241 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:23:23.495342 containerd[1452]: time="2025-09-05T00:23:23.495289447Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:23:23.518945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768183483.mount: Deactivated successfully.
Sep 5 00:23:23.519647 containerd[1452]: time="2025-09-05T00:23:23.519589095Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\""
Sep 5 00:23:23.520238 containerd[1452]: time="2025-09-05T00:23:23.520205806Z" level=info msg="StartContainer for \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\""
Sep 5 00:23:23.560582 systemd[1]: Started cri-containerd-949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960.scope - libcontainer container 949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960.
Sep 5 00:23:23.586952 systemd[1]: cri-containerd-949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960.scope: Deactivated successfully.
Sep 5 00:23:23.589103 containerd[1452]: time="2025-09-05T00:23:23.589056063Z" level=info msg="StartContainer for \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\" returns successfully"
Sep 5 00:23:23.608474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960-rootfs.mount: Deactivated successfully.
Sep 5 00:23:23.613145 containerd[1452]: time="2025-09-05T00:23:23.613083669Z" level=info msg="shim disconnected" id=949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960 namespace=k8s.io Sep 5 00:23:23.613302 containerd[1452]: time="2025-09-05T00:23:23.613146807Z" level=warning msg="cleaning up after shim disconnected" id=949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960 namespace=k8s.io Sep 5 00:23:23.613302 containerd[1452]: time="2025-09-05T00:23:23.613158058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:23:24.488569 kubelet[2510]: E0905 00:23:24.488175 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:24.494263 containerd[1452]: time="2025-09-05T00:23:24.494199068Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 00:23:24.527070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031232658.mount: Deactivated successfully. Sep 5 00:23:24.528025 containerd[1452]: time="2025-09-05T00:23:24.527978823Z" level=info msg="CreateContainer within sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\"" Sep 5 00:23:24.528563 containerd[1452]: time="2025-09-05T00:23:24.528530412Z" level=info msg="StartContainer for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\"" Sep 5 00:23:24.565608 systemd[1]: Started cri-containerd-07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f.scope - libcontainer container 07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f. 
Sep 5 00:23:24.597618 containerd[1452]: time="2025-09-05T00:23:24.597565280Z" level=info msg="StartContainer for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" returns successfully" Sep 5 00:23:24.763907 kubelet[2510]: I0905 00:23:24.763773 2510 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:23:24.825090 systemd[1]: Created slice kubepods-burstable-pod506b64d0_5929_4ed2_a415_e3fd206f3a33.slice - libcontainer container kubepods-burstable-pod506b64d0_5929_4ed2_a415_e3fd206f3a33.slice. Sep 5 00:23:24.832226 systemd[1]: Created slice kubepods-burstable-pod948ac0d5_3758_46d8_85d9_85a234fb224c.slice - libcontainer container kubepods-burstable-pod948ac0d5_3758_46d8_85d9_85a234fb224c.slice. Sep 5 00:23:24.850694 kubelet[2510]: I0905 00:23:24.850623 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/948ac0d5-3758-46d8-85d9-85a234fb224c-config-volume\") pod \"coredns-674b8bbfcf-qzwsc\" (UID: \"948ac0d5-3758-46d8-85d9-85a234fb224c\") " pod="kube-system/coredns-674b8bbfcf-qzwsc" Sep 5 00:23:24.850694 kubelet[2510]: I0905 00:23:24.850683 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd4c6\" (UniqueName: \"kubernetes.io/projected/506b64d0-5929-4ed2-a415-e3fd206f3a33-kube-api-access-vd4c6\") pod \"coredns-674b8bbfcf-6j84h\" (UID: \"506b64d0-5929-4ed2-a415-e3fd206f3a33\") " pod="kube-system/coredns-674b8bbfcf-6j84h" Sep 5 00:23:24.850694 kubelet[2510]: I0905 00:23:24.850703 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wxm\" (UniqueName: \"kubernetes.io/projected/948ac0d5-3758-46d8-85d9-85a234fb224c-kube-api-access-55wxm\") pod \"coredns-674b8bbfcf-qzwsc\" (UID: \"948ac0d5-3758-46d8-85d9-85a234fb224c\") " pod="kube-system/coredns-674b8bbfcf-qzwsc" Sep 5 00:23:24.850922 
kubelet[2510]: I0905 00:23:24.850716 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/506b64d0-5929-4ed2-a415-e3fd206f3a33-config-volume\") pod \"coredns-674b8bbfcf-6j84h\" (UID: \"506b64d0-5929-4ed2-a415-e3fd206f3a33\") " pod="kube-system/coredns-674b8bbfcf-6j84h" Sep 5 00:23:25.130725 kubelet[2510]: E0905 00:23:25.130340 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:25.131729 containerd[1452]: time="2025-09-05T00:23:25.131659601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6j84h,Uid:506b64d0-5929-4ed2-a415-e3fd206f3a33,Namespace:kube-system,Attempt:0,}" Sep 5 00:23:25.136434 kubelet[2510]: E0905 00:23:25.136396 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:25.136830 containerd[1452]: time="2025-09-05T00:23:25.136797382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qzwsc,Uid:948ac0d5-3758-46d8-85d9-85a234fb224c,Namespace:kube-system,Attempt:0,}" Sep 5 00:23:25.493101 kubelet[2510]: E0905 00:23:25.493058 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:26.305733 systemd[1]: Started sshd@8-10.0.0.155:22-10.0.0.1:52552.service - OpenSSH per-connection server daemon (10.0.0.1:52552). 
Sep 5 00:23:26.343289 sshd[3385]: Accepted publickey for core from 10.0.0.1 port 52552 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:26.345205 sshd[3385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:26.349634 systemd-logind[1438]: New session 9 of user core. Sep 5 00:23:26.356580 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:23:26.470247 sshd[3385]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:26.474369 systemd[1]: sshd@8-10.0.0.155:22-10.0.0.1:52552.service: Deactivated successfully. Sep 5 00:23:26.476319 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:23:26.476941 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:23:26.477830 systemd-logind[1438]: Removed session 9. Sep 5 00:23:26.495194 kubelet[2510]: E0905 00:23:26.495156 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:26.787878 systemd-networkd[1395]: cilium_host: Link UP Sep 5 00:23:26.788045 systemd-networkd[1395]: cilium_net: Link UP Sep 5 00:23:26.788049 systemd-networkd[1395]: cilium_net: Gained carrier Sep 5 00:23:26.788230 systemd-networkd[1395]: cilium_host: Gained carrier Sep 5 00:23:26.790634 systemd-networkd[1395]: cilium_host: Gained IPv6LL Sep 5 00:23:26.892720 systemd-networkd[1395]: cilium_vxlan: Link UP Sep 5 00:23:26.892732 systemd-networkd[1395]: cilium_vxlan: Gained carrier Sep 5 00:23:27.096455 kernel: NET: Registered PF_ALG protocol family Sep 5 00:23:27.490607 systemd-networkd[1395]: cilium_net: Gained IPv6LL Sep 5 00:23:27.497144 kubelet[2510]: E0905 00:23:27.497109 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:27.740049 systemd-networkd[1395]: 
lxc_health: Link UP Sep 5 00:23:27.746678 systemd-networkd[1395]: lxc_health: Gained carrier Sep 5 00:23:28.201921 systemd-networkd[1395]: lxc0334a6b7ba95: Link UP Sep 5 00:23:28.208464 kernel: eth0: renamed from tmp2b885 Sep 5 00:23:28.217249 systemd-networkd[1395]: lxcf0042deb00e1: Link UP Sep 5 00:23:28.233048 systemd-networkd[1395]: lxc0334a6b7ba95: Gained carrier Sep 5 00:23:28.235601 kernel: eth0: renamed from tmp23a94 Sep 5 00:23:28.240587 systemd-networkd[1395]: lxcf0042deb00e1: Gained carrier Sep 5 00:23:28.258661 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL Sep 5 00:23:29.200762 kubelet[2510]: E0905 00:23:29.200707 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:29.411813 systemd-networkd[1395]: lxc_health: Gained IPv6LL Sep 5 00:23:29.412952 systemd-networkd[1395]: lxcf0042deb00e1: Gained IPv6LL Sep 5 00:23:29.437766 kubelet[2510]: I0905 00:23:29.436293 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cps9z" podStartSLOduration=11.238952063 podStartE2EDuration="25.43625352s" podCreationTimestamp="2025-09-05 00:23:04 +0000 UTC" firstStartedPulling="2025-09-05 00:23:05.293271201 +0000 UTC m=+7.198942272" lastFinishedPulling="2025-09-05 00:23:19.490572658 +0000 UTC m=+21.396243729" observedRunningTime="2025-09-05 00:23:25.527265996 +0000 UTC m=+27.432937067" watchObservedRunningTime="2025-09-05 00:23:29.43625352 +0000 UTC m=+31.341924611" Sep 5 00:23:29.500836 kubelet[2510]: E0905 00:23:29.500313 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:30.178658 systemd-networkd[1395]: lxc0334a6b7ba95: Gained IPv6LL Sep 5 00:23:30.501816 kubelet[2510]: E0905 00:23:30.501669 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:31.485449 systemd[1]: Started sshd@9-10.0.0.155:22-10.0.0.1:55556.service - OpenSSH per-connection server daemon (10.0.0.1:55556). Sep 5 00:23:31.528920 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 55556 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:31.530785 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:31.534905 systemd-logind[1438]: New session 10 of user core. Sep 5 00:23:31.543610 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:23:31.663686 sshd[3780]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:31.668534 systemd[1]: sshd@9-10.0.0.155:22-10.0.0.1:55556.service: Deactivated successfully. Sep 5 00:23:31.670721 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:23:31.671634 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:23:31.672646 systemd-logind[1438]: Removed session 10. Sep 5 00:23:32.039764 containerd[1452]: time="2025-09-05T00:23:32.039002800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:23:32.039764 containerd[1452]: time="2025-09-05T00:23:32.039719637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:23:32.039764 containerd[1452]: time="2025-09-05T00:23:32.039732922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:23:32.040385 containerd[1452]: time="2025-09-05T00:23:32.039813865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:23:32.055693 systemd[1]: run-containerd-runc-k8s.io-23a94deb60a8b8646d7f57cd2c3311bb29c5605ea16c6e5675219bcab9e5a080-runc.pCww74.mount: Deactivated successfully. Sep 5 00:23:32.064573 systemd[1]: Started cri-containerd-23a94deb60a8b8646d7f57cd2c3311bb29c5605ea16c6e5675219bcab9e5a080.scope - libcontainer container 23a94deb60a8b8646d7f57cd2c3311bb29c5605ea16c6e5675219bcab9e5a080. Sep 5 00:23:32.076666 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:23:32.101017 containerd[1452]: time="2025-09-05T00:23:32.100977526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qzwsc,Uid:948ac0d5-3758-46d8-85d9-85a234fb224c,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a94deb60a8b8646d7f57cd2c3311bb29c5605ea16c6e5675219bcab9e5a080\"" Sep 5 00:23:32.102101 kubelet[2510]: E0905 00:23:32.102069 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:32.127744 containerd[1452]: time="2025-09-05T00:23:32.127165943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:23:32.128041 containerd[1452]: time="2025-09-05T00:23:32.127986846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:23:32.128170 containerd[1452]: time="2025-09-05T00:23:32.128025298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:23:32.128335 containerd[1452]: time="2025-09-05T00:23:32.128296389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:23:32.155575 systemd[1]: Started cri-containerd-2b885e25fb82ca9833095d35fe78b4d44e35dc8b7fde614c7bb0a8d9acc0b41e.scope - libcontainer container 2b885e25fb82ca9833095d35fe78b4d44e35dc8b7fde614c7bb0a8d9acc0b41e. Sep 5 00:23:32.169172 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:23:32.194604 containerd[1452]: time="2025-09-05T00:23:32.194560237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6j84h,Uid:506b64d0-5929-4ed2-a415-e3fd206f3a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b885e25fb82ca9833095d35fe78b4d44e35dc8b7fde614c7bb0a8d9acc0b41e\"" Sep 5 00:23:32.195530 kubelet[2510]: E0905 00:23:32.195451 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:32.275326 containerd[1452]: time="2025-09-05T00:23:32.275271423Z" level=info msg="CreateContainer within sandbox \"23a94deb60a8b8646d7f57cd2c3311bb29c5605ea16c6e5675219bcab9e5a080\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:23:32.368271 containerd[1452]: time="2025-09-05T00:23:32.368141033Z" level=info msg="CreateContainer within sandbox \"2b885e25fb82ca9833095d35fe78b4d44e35dc8b7fde614c7bb0a8d9acc0b41e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:23:32.848595 containerd[1452]: time="2025-09-05T00:23:32.848525689Z" level=info msg="CreateContainer within sandbox \"23a94deb60a8b8646d7f57cd2c3311bb29c5605ea16c6e5675219bcab9e5a080\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e12a18334308764e40b2503be9aac9727fba084c12e26eda0144122b10edcf04\"" Sep 5 00:23:32.849211 containerd[1452]: time="2025-09-05T00:23:32.849179548Z" level=info msg="StartContainer for 
\"e12a18334308764e40b2503be9aac9727fba084c12e26eda0144122b10edcf04\"" Sep 5 00:23:32.882574 systemd[1]: Started cri-containerd-e12a18334308764e40b2503be9aac9727fba084c12e26eda0144122b10edcf04.scope - libcontainer container e12a18334308764e40b2503be9aac9727fba084c12e26eda0144122b10edcf04. Sep 5 00:23:32.959155 containerd[1452]: time="2025-09-05T00:23:32.959095361Z" level=info msg="CreateContainer within sandbox \"2b885e25fb82ca9833095d35fe78b4d44e35dc8b7fde614c7bb0a8d9acc0b41e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2aa5f25b191a87b280b17040996800641903826eeab7d02c337ee81c1aa0a39\"" Sep 5 00:23:32.959799 containerd[1452]: time="2025-09-05T00:23:32.959754360Z" level=info msg="StartContainer for \"a2aa5f25b191a87b280b17040996800641903826eeab7d02c337ee81c1aa0a39\"" Sep 5 00:23:32.990570 systemd[1]: Started cri-containerd-a2aa5f25b191a87b280b17040996800641903826eeab7d02c337ee81c1aa0a39.scope - libcontainer container a2aa5f25b191a87b280b17040996800641903826eeab7d02c337ee81c1aa0a39. 
Sep 5 00:23:33.192271 containerd[1452]: time="2025-09-05T00:23:33.192217101Z" level=info msg="StartContainer for \"a2aa5f25b191a87b280b17040996800641903826eeab7d02c337ee81c1aa0a39\" returns successfully" Sep 5 00:23:33.192806 containerd[1452]: time="2025-09-05T00:23:33.192215318Z" level=info msg="StartContainer for \"e12a18334308764e40b2503be9aac9727fba084c12e26eda0144122b10edcf04\" returns successfully" Sep 5 00:23:33.532747 kubelet[2510]: E0905 00:23:33.532596 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:33.534838 kubelet[2510]: E0905 00:23:33.534799 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:33.965819 kubelet[2510]: I0905 00:23:33.965741 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qzwsc" podStartSLOduration=29.965717871 podStartE2EDuration="29.965717871s" podCreationTimestamp="2025-09-05 00:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:23:33.704848322 +0000 UTC m=+35.610519403" watchObservedRunningTime="2025-09-05 00:23:33.965717871 +0000 UTC m=+35.871388942" Sep 5 00:23:33.989333 kubelet[2510]: I0905 00:23:33.989255 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6j84h" podStartSLOduration=29.989232885 podStartE2EDuration="29.989232885s" podCreationTimestamp="2025-09-05 00:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:23:33.966169379 +0000 UTC m=+35.871840460" watchObservedRunningTime="2025-09-05 00:23:33.989232885 +0000 UTC 
m=+35.894903956" Sep 5 00:23:34.535999 kubelet[2510]: E0905 00:23:34.535966 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:34.536457 kubelet[2510]: E0905 00:23:34.536049 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:35.537671 kubelet[2510]: E0905 00:23:35.537640 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:35.538141 kubelet[2510]: E0905 00:23:35.537691 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:23:36.680386 systemd[1]: Started sshd@10-10.0.0.155:22-10.0.0.1:55560.service - OpenSSH per-connection server daemon (10.0.0.1:55560). Sep 5 00:23:36.718349 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 55560 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:36.720318 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:36.724627 systemd-logind[1438]: New session 11 of user core. Sep 5 00:23:36.732565 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:23:36.859754 sshd[3968]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:36.875391 systemd[1]: sshd@10-10.0.0.155:22-10.0.0.1:55560.service: Deactivated successfully. Sep 5 00:23:36.877480 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:23:36.879786 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. 
Sep 5 00:23:36.892688 systemd[1]: Started sshd@11-10.0.0.155:22-10.0.0.1:55562.service - OpenSSH per-connection server daemon (10.0.0.1:55562). Sep 5 00:23:36.893592 systemd-logind[1438]: Removed session 11. Sep 5 00:23:36.921966 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 55562 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:36.923689 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:36.927798 systemd-logind[1438]: New session 12 of user core. Sep 5 00:23:36.932682 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:23:37.192856 sshd[3983]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:37.202367 systemd[1]: sshd@11-10.0.0.155:22-10.0.0.1:55562.service: Deactivated successfully. Sep 5 00:23:37.204469 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:23:37.206187 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:23:37.215681 systemd[1]: Started sshd@12-10.0.0.155:22-10.0.0.1:55568.service - OpenSSH per-connection server daemon (10.0.0.1:55568). Sep 5 00:23:37.217208 systemd-logind[1438]: Removed session 12. Sep 5 00:23:37.244875 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 55568 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:37.246689 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:37.250574 systemd-logind[1438]: New session 13 of user core. Sep 5 00:23:37.256629 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:23:37.450847 sshd[3996]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:37.454514 systemd[1]: sshd@12-10.0.0.155:22-10.0.0.1:55568.service: Deactivated successfully. Sep 5 00:23:37.456568 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:23:37.457124 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. 
Sep 5 00:23:37.457974 systemd-logind[1438]: Removed session 13. Sep 5 00:23:42.464406 systemd[1]: Started sshd@13-10.0.0.155:22-10.0.0.1:45228.service - OpenSSH per-connection server daemon (10.0.0.1:45228). Sep 5 00:23:42.499480 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 45228 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:42.501441 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:42.506242 systemd-logind[1438]: New session 14 of user core. Sep 5 00:23:42.522641 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:23:42.636113 sshd[4015]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:42.640144 systemd[1]: sshd@13-10.0.0.155:22-10.0.0.1:45228.service: Deactivated successfully. Sep 5 00:23:42.642462 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:23:42.643043 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:23:42.643917 systemd-logind[1438]: Removed session 14. Sep 5 00:23:47.647611 systemd[1]: Started sshd@14-10.0.0.155:22-10.0.0.1:45244.service - OpenSSH per-connection server daemon (10.0.0.1:45244). Sep 5 00:23:47.681168 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 45244 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:47.682909 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:47.686895 systemd-logind[1438]: New session 15 of user core. Sep 5 00:23:47.697564 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:23:47.814889 sshd[4029]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:47.819212 systemd[1]: sshd@14-10.0.0.155:22-10.0.0.1:45244.service: Deactivated successfully. Sep 5 00:23:47.821268 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:23:47.821949 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. 
Sep 5 00:23:47.822842 systemd-logind[1438]: Removed session 15. Sep 5 00:23:52.832355 systemd[1]: Started sshd@15-10.0.0.155:22-10.0.0.1:53090.service - OpenSSH per-connection server daemon (10.0.0.1:53090). Sep 5 00:23:52.875200 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 53090 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:52.877172 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:52.882070 systemd-logind[1438]: New session 16 of user core. Sep 5 00:23:52.893657 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:23:53.019953 sshd[4043]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:53.029113 systemd[1]: sshd@15-10.0.0.155:22-10.0.0.1:53090.service: Deactivated successfully. Sep 5 00:23:53.031224 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:23:53.032101 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:23:53.038737 systemd[1]: Started sshd@16-10.0.0.155:22-10.0.0.1:53102.service - OpenSSH per-connection server daemon (10.0.0.1:53102). Sep 5 00:23:53.039469 systemd-logind[1438]: Removed session 16. Sep 5 00:23:53.073356 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 53102 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:53.075181 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:53.080241 systemd-logind[1438]: New session 17 of user core. Sep 5 00:23:53.091597 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:23:54.118929 sshd[4057]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:54.136837 systemd[1]: sshd@16-10.0.0.155:22-10.0.0.1:53102.service: Deactivated successfully. Sep 5 00:23:54.139284 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:23:54.140846 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. 
Sep 5 00:23:54.146708 systemd[1]: Started sshd@17-10.0.0.155:22-10.0.0.1:53104.service - OpenSSH per-connection server daemon (10.0.0.1:53104). Sep 5 00:23:54.147663 systemd-logind[1438]: Removed session 17. Sep 5 00:23:54.182478 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 53104 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:54.184580 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:54.189134 systemd-logind[1438]: New session 18 of user core. Sep 5 00:23:54.205676 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:23:55.200757 sshd[4070]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:55.213791 systemd[1]: sshd@17-10.0.0.155:22-10.0.0.1:53104.service: Deactivated successfully. Sep 5 00:23:55.216000 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:23:55.217874 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:23:55.222688 systemd[1]: Started sshd@18-10.0.0.155:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Sep 5 00:23:55.223657 systemd-logind[1438]: Removed session 18. Sep 5 00:23:55.257128 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:55.258978 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:55.263216 systemd-logind[1438]: New session 19 of user core. Sep 5 00:23:55.272584 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:23:55.604082 sshd[4091]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:55.613799 systemd[1]: sshd@18-10.0.0.155:22-10.0.0.1:53120.service: Deactivated successfully. Sep 5 00:23:55.615787 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:23:55.617770 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit. 
Sep 5 00:23:55.619464 systemd[1]: Started sshd@19-10.0.0.155:22-10.0.0.1:53128.service - OpenSSH per-connection server daemon (10.0.0.1:53128). Sep 5 00:23:55.620473 systemd-logind[1438]: Removed session 19. Sep 5 00:23:55.654217 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 53128 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:23:55.656260 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:23:55.660524 systemd-logind[1438]: New session 20 of user core. Sep 5 00:23:55.676716 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:23:55.862832 sshd[4104]: pam_unix(sshd:session): session closed for user core Sep 5 00:23:55.867710 systemd[1]: sshd@19-10.0.0.155:22-10.0.0.1:53128.service: Deactivated successfully. Sep 5 00:23:55.869766 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:23:55.870533 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:23:55.871719 systemd-logind[1438]: Removed session 20. Sep 5 00:24:00.874802 systemd[1]: Started sshd@20-10.0.0.155:22-10.0.0.1:47128.service - OpenSSH per-connection server daemon (10.0.0.1:47128). Sep 5 00:24:00.909535 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 47128 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:24:00.911352 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:24:00.915619 systemd-logind[1438]: New session 21 of user core. Sep 5 00:24:00.929583 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:24:01.043595 sshd[4121]: pam_unix(sshd:session): session closed for user core Sep 5 00:24:01.047413 systemd[1]: sshd@20-10.0.0.155:22-10.0.0.1:47128.service: Deactivated successfully. Sep 5 00:24:01.049489 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:24:01.050132 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit. 
Sep 5 00:24:01.051048 systemd-logind[1438]: Removed session 21. Sep 5 00:24:06.060310 systemd[1]: Started sshd@21-10.0.0.155:22-10.0.0.1:47134.service - OpenSSH per-connection server daemon (10.0.0.1:47134). Sep 5 00:24:06.095283 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 47134 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:24:06.097038 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:24:06.101533 systemd-logind[1438]: New session 22 of user core. Sep 5 00:24:06.108739 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:24:06.238877 sshd[4137]: pam_unix(sshd:session): session closed for user core Sep 5 00:24:06.243369 systemd[1]: sshd@21-10.0.0.155:22-10.0.0.1:47134.service: Deactivated successfully. Sep 5 00:24:06.245715 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:24:06.246517 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:24:06.247578 systemd-logind[1438]: Removed session 22. Sep 5 00:24:11.256145 systemd[1]: Started sshd@22-10.0.0.155:22-10.0.0.1:38118.service - OpenSSH per-connection server daemon (10.0.0.1:38118). Sep 5 00:24:11.309944 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 38118 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:24:11.311721 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:24:11.316212 systemd-logind[1438]: New session 23 of user core. Sep 5 00:24:11.326580 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:24:11.462066 sshd[4154]: pam_unix(sshd:session): session closed for user core Sep 5 00:24:11.466654 systemd[1]: sshd@22-10.0.0.155:22-10.0.0.1:38118.service: Deactivated successfully. Sep 5 00:24:11.469063 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:24:11.469788 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. 
Sep 5 00:24:11.470788 systemd-logind[1438]: Removed session 23.
Sep 5 00:24:16.191313 kubelet[2510]: E0905 00:24:16.191228 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:16.473460 systemd[1]: Started sshd@23-10.0.0.155:22-10.0.0.1:38124.service - OpenSSH per-connection server daemon (10.0.0.1:38124).
Sep 5 00:24:16.507921 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 38124 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:24:16.509726 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:24:16.513991 systemd-logind[1438]: New session 24 of user core.
Sep 5 00:24:16.528571 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:24:16.632708 sshd[4168]: pam_unix(sshd:session): session closed for user core
Sep 5 00:24:16.640652 systemd[1]: sshd@23-10.0.0.155:22-10.0.0.1:38124.service: Deactivated successfully.
Sep 5 00:24:16.642690 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:24:16.644871 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:24:16.656821 systemd[1]: Started sshd@24-10.0.0.155:22-10.0.0.1:38134.service - OpenSSH per-connection server daemon (10.0.0.1:38134).
Sep 5 00:24:16.657883 systemd-logind[1438]: Removed session 24.
Sep 5 00:24:16.686940 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 38134 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:24:16.688775 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:24:16.693032 systemd-logind[1438]: New session 25 of user core.
Sep 5 00:24:16.704559 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:24:18.055352 containerd[1452]: time="2025-09-05T00:24:18.055286194Z" level=info msg="StopContainer for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" with timeout 30 (s)"
Sep 5 00:24:18.056572 containerd[1452]: time="2025-09-05T00:24:18.056526064Z" level=info msg="Stop container \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" with signal terminated"
Sep 5 00:24:18.090293 systemd[1]: cri-containerd-02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948.scope: Deactivated successfully.
Sep 5 00:24:18.115074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948-rootfs.mount: Deactivated successfully.
Sep 5 00:24:18.116177 containerd[1452]: time="2025-09-05T00:24:18.116053310Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:24:18.116940 containerd[1452]: time="2025-09-05T00:24:18.116780721Z" level=info msg="StopContainer for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" with timeout 2 (s)"
Sep 5 00:24:18.117511 containerd[1452]: time="2025-09-05T00:24:18.117115340Z" level=info msg="Stop container \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" with signal terminated"
Sep 5 00:24:18.123048 containerd[1452]: time="2025-09-05T00:24:18.122852348Z" level=info msg="shim disconnected" id=02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948 namespace=k8s.io
Sep 5 00:24:18.123048 containerd[1452]: time="2025-09-05T00:24:18.122920929Z" level=warning msg="cleaning up after shim disconnected" id=02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948 namespace=k8s.io
Sep 5 00:24:18.123048 containerd[1452]: time="2025-09-05T00:24:18.122930106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:18.128281 systemd-networkd[1395]: lxc_health: Link DOWN
Sep 5 00:24:18.128294 systemd-networkd[1395]: lxc_health: Lost carrier
Sep 5 00:24:18.145983 containerd[1452]: time="2025-09-05T00:24:18.145882754Z" level=info msg="StopContainer for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" returns successfully"
Sep 5 00:24:18.146927 containerd[1452]: time="2025-09-05T00:24:18.146650842Z" level=info msg="StopPodSandbox for \"4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0\""
Sep 5 00:24:18.146999 containerd[1452]: time="2025-09-05T00:24:18.146977817Z" level=info msg="Container to stop \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:24:18.151010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0-shm.mount: Deactivated successfully.
Sep 5 00:24:18.157349 systemd[1]: cri-containerd-4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0.scope: Deactivated successfully.
Sep 5 00:24:18.158669 systemd[1]: cri-containerd-07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f.scope: Deactivated successfully.
Sep 5 00:24:18.158976 systemd[1]: cri-containerd-07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f.scope: Consumed 6.919s CPU time.
Sep 5 00:24:18.182418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f-rootfs.mount: Deactivated successfully.
Sep 5 00:24:18.187959 containerd[1452]: time="2025-09-05T00:24:18.187619390Z" level=info msg="shim disconnected" id=4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0 namespace=k8s.io
Sep 5 00:24:18.187959 containerd[1452]: time="2025-09-05T00:24:18.187675227Z" level=info msg="shim disconnected" id=07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f namespace=k8s.io
Sep 5 00:24:18.187959 containerd[1452]: time="2025-09-05T00:24:18.187715825Z" level=warning msg="cleaning up after shim disconnected" id=07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f namespace=k8s.io
Sep 5 00:24:18.187959 containerd[1452]: time="2025-09-05T00:24:18.187728158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:18.187959 containerd[1452]: time="2025-09-05T00:24:18.187684215Z" level=warning msg="cleaning up after shim disconnected" id=4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0 namespace=k8s.io
Sep 5 00:24:18.187959 containerd[1452]: time="2025-09-05T00:24:18.187945434Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:18.187924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0-rootfs.mount: Deactivated successfully.
Sep 5 00:24:18.212489 containerd[1452]: time="2025-09-05T00:24:18.212393240Z" level=info msg="StopContainer for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" returns successfully"
Sep 5 00:24:18.213216 containerd[1452]: time="2025-09-05T00:24:18.213161419Z" level=info msg="StopPodSandbox for \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\""
Sep 5 00:24:18.213275 containerd[1452]: time="2025-09-05T00:24:18.213230541Z" level=info msg="Container to stop \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:24:18.213275 containerd[1452]: time="2025-09-05T00:24:18.213249717Z" level=info msg="Container to stop \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:24:18.213275 containerd[1452]: time="2025-09-05T00:24:18.213261590Z" level=info msg="Container to stop \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:24:18.213275 containerd[1452]: time="2025-09-05T00:24:18.213273754Z" level=info msg="Container to stop \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:24:18.213435 containerd[1452]: time="2025-09-05T00:24:18.213287159Z" level=info msg="Container to stop \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 5 00:24:18.221180 containerd[1452]: time="2025-09-05T00:24:18.221131224Z" level=info msg="TearDown network for sandbox \"4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0\" successfully"
Sep 5 00:24:18.221180 containerd[1452]: time="2025-09-05T00:24:18.221170820Z" level=info msg="StopPodSandbox for \"4640d4fbc09f9113a54f44eaa603e6bf0b47f8e707493b223c99b27c41a1a5a0\" returns successfully"
Sep 5 00:24:18.222008 systemd[1]: cri-containerd-217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92.scope: Deactivated successfully.
Sep 5 00:24:18.235738 kubelet[2510]: E0905 00:24:18.235686 2510 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:24:18.254439 containerd[1452]: time="2025-09-05T00:24:18.254219999Z" level=info msg="shim disconnected" id=217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92 namespace=k8s.io
Sep 5 00:24:18.254439 containerd[1452]: time="2025-09-05T00:24:18.254416494Z" level=warning msg="cleaning up after shim disconnected" id=217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92 namespace=k8s.io
Sep 5 00:24:18.254439 containerd[1452]: time="2025-09-05T00:24:18.254449377Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:18.270335 containerd[1452]: time="2025-09-05T00:24:18.270263308Z" level=info msg="TearDown network for sandbox \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" successfully"
Sep 5 00:24:18.270335 containerd[1452]: time="2025-09-05T00:24:18.270306591Z" level=info msg="StopPodSandbox for \"217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92\" returns successfully"
Sep 5 00:24:18.276026 kubelet[2510]: I0905 00:24:18.275991 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76ce4e2b-38e6-4930-84ab-14aff345feb3-cilium-config-path\") pod \"76ce4e2b-38e6-4930-84ab-14aff345feb3\" (UID: \"76ce4e2b-38e6-4930-84ab-14aff345feb3\") "
Sep 5 00:24:18.276026 kubelet[2510]: I0905 00:24:18.276037 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fthq\" (UniqueName: \"kubernetes.io/projected/76ce4e2b-38e6-4930-84ab-14aff345feb3-kube-api-access-4fthq\") pod \"76ce4e2b-38e6-4930-84ab-14aff345feb3\" (UID: \"76ce4e2b-38e6-4930-84ab-14aff345feb3\") "
Sep 5 00:24:18.280182 kubelet[2510]: I0905 00:24:18.280126 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76ce4e2b-38e6-4930-84ab-14aff345feb3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76ce4e2b-38e6-4930-84ab-14aff345feb3" (UID: "76ce4e2b-38e6-4930-84ab-14aff345feb3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 5 00:24:18.281672 kubelet[2510]: I0905 00:24:18.281619 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ce4e2b-38e6-4930-84ab-14aff345feb3-kube-api-access-4fthq" (OuterVolumeSpecName: "kube-api-access-4fthq") pod "76ce4e2b-38e6-4930-84ab-14aff345feb3" (UID: "76ce4e2b-38e6-4930-84ab-14aff345feb3"). InnerVolumeSpecName "kube-api-access-4fthq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 00:24:18.376735 kubelet[2510]: I0905 00:24:18.376587 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cni-path\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376735 kubelet[2510]: I0905 00:24:18.376653 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-etc-cni-netd\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376735 kubelet[2510]: I0905 00:24:18.376689 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-config-path\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376735 kubelet[2510]: I0905 00:24:18.376718 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-xtables-lock\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376917 kubelet[2510]: I0905 00:24:18.376741 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-cgroup\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376917 kubelet[2510]: I0905 00:24:18.376763 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-bpf-maps\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376917 kubelet[2510]: I0905 00:24:18.376784 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-kernel\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376917 kubelet[2510]: I0905 00:24:18.376808 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-run\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376917 kubelet[2510]: I0905 00:24:18.376831 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-hubble-tls\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.376917 kubelet[2510]: I0905 00:24:18.376849 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-hostproc\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.377066 kubelet[2510]: I0905 00:24:18.376867 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-net\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.377066 kubelet[2510]: I0905 00:24:18.376893 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-lib-modules\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.377066 kubelet[2510]: I0905 00:24:18.376919 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae42665c-257c-4156-b48c-fc5ddd651b05-clustermesh-secrets\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.377066 kubelet[2510]: I0905 00:24:18.376942 2510 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn5k8\" (UniqueName: \"kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-kube-api-access-mn5k8\") pod \"ae42665c-257c-4156-b48c-fc5ddd651b05\" (UID: \"ae42665c-257c-4156-b48c-fc5ddd651b05\") "
Sep 5 00:24:18.377066 kubelet[2510]: I0905 00:24:18.376984 2510 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76ce4e2b-38e6-4930-84ab-14aff345feb3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.377066 kubelet[2510]: I0905 00:24:18.376999 2510 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4fthq\" (UniqueName: \"kubernetes.io/projected/76ce4e2b-38e6-4930-84ab-14aff345feb3-kube-api-access-4fthq\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.377810 kubelet[2510]: I0905 00:24:18.376734 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377810 kubelet[2510]: I0905 00:24:18.377352 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377810 kubelet[2510]: I0905 00:24:18.377382 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377810 kubelet[2510]: I0905 00:24:18.377394 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377810 kubelet[2510]: I0905 00:24:18.377405 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377951 kubelet[2510]: I0905 00:24:18.377482 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377951 kubelet[2510]: I0905 00:24:18.377484 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377951 kubelet[2510]: I0905 00:24:18.377498 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.377951 kubelet[2510]: I0905 00:24:18.377515 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.380802 kubelet[2510]: I0905 00:24:18.380734 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-kube-api-access-mn5k8" (OuterVolumeSpecName: "kube-api-access-mn5k8") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "kube-api-access-mn5k8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 00:24:18.380887 kubelet[2510]: I0905 00:24:18.380804 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 5 00:24:18.381260 kubelet[2510]: I0905 00:24:18.381206 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae42665c-257c-4156-b48c-fc5ddd651b05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 5 00:24:18.381595 kubelet[2510]: I0905 00:24:18.381566 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 5 00:24:18.383089 kubelet[2510]: I0905 00:24:18.383055 2510 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae42665c-257c-4156-b48c-fc5ddd651b05" (UID: "ae42665c-257c-4156-b48c-fc5ddd651b05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 00:24:18.477546 kubelet[2510]: I0905 00:24:18.477475 2510 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477546 kubelet[2510]: I0905 00:24:18.477527 2510 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477546 kubelet[2510]: I0905 00:24:18.477541 2510 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477546 kubelet[2510]: I0905 00:24:18.477552 2510 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477546 kubelet[2510]: I0905 00:24:18.477562 2510 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477546 kubelet[2510]: I0905 00:24:18.477571 2510 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477581 2510 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae42665c-257c-4156-b48c-fc5ddd651b05-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477589 2510 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mn5k8\" (UniqueName: \"kubernetes.io/projected/ae42665c-257c-4156-b48c-fc5ddd651b05-kube-api-access-mn5k8\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477599 2510 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477607 2510 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477615 2510 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477623 2510 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477631 2510 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.477936 kubelet[2510]: I0905 00:24:18.477638 2510 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae42665c-257c-4156-b48c-fc5ddd651b05-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 5 00:24:18.625574 kubelet[2510]: I0905 00:24:18.625501 2510 scope.go:117] "RemoveContainer" containerID="02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948"
Sep 5 00:24:18.627272 containerd[1452]: time="2025-09-05T00:24:18.626899056Z" level=info msg="RemoveContainer for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\""
Sep 5 00:24:18.633971 systemd[1]: Removed slice kubepods-besteffort-pod76ce4e2b_38e6_4930_84ab_14aff345feb3.slice - libcontainer container kubepods-besteffort-pod76ce4e2b_38e6_4930_84ab_14aff345feb3.slice.
Sep 5 00:24:18.635247 systemd[1]: Removed slice kubepods-burstable-podae42665c_257c_4156_b48c_fc5ddd651b05.slice - libcontainer container kubepods-burstable-podae42665c_257c_4156_b48c_fc5ddd651b05.slice.
Sep 5 00:24:18.635357 systemd[1]: kubepods-burstable-podae42665c_257c_4156_b48c_fc5ddd651b05.slice: Consumed 7.022s CPU time.
Sep 5 00:24:18.826580 containerd[1452]: time="2025-09-05T00:24:18.826521808Z" level=info msg="RemoveContainer for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" returns successfully"
Sep 5 00:24:18.826903 kubelet[2510]: I0905 00:24:18.826873 2510 scope.go:117] "RemoveContainer" containerID="02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948"
Sep 5 00:24:18.830032 containerd[1452]: time="2025-09-05T00:24:18.829986272Z" level=error msg="ContainerStatus for \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\": not found"
Sep 5 00:24:18.830193 kubelet[2510]: E0905 00:24:18.830164 2510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\": not found" containerID="02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948"
Sep 5 00:24:18.830241 kubelet[2510]: I0905 00:24:18.830202 2510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948"} err="failed to get container status \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\": rpc error: code = NotFound desc = an error occurred when try to find container \"02bb5217204619603a04ff6718c4cca10a69b88da55e8b0b279b08a587b1a948\": not found"
Sep 5 00:24:18.830241 kubelet[2510]: I0905 00:24:18.830240 2510 scope.go:117] "RemoveContainer" containerID="07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f"
Sep 5 00:24:18.831152 containerd[1452]: time="2025-09-05T00:24:18.831131942Z" level=info msg="RemoveContainer for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\""
Sep 5 00:24:18.853131 containerd[1452]: time="2025-09-05T00:24:18.853065270Z" level=info msg="RemoveContainer for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" returns successfully"
Sep 5 00:24:18.853412 kubelet[2510]: I0905 00:24:18.853371 2510 scope.go:117] "RemoveContainer" containerID="949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960"
Sep 5 00:24:18.854600 containerd[1452]: time="2025-09-05T00:24:18.854572422Z" level=info msg="RemoveContainer for \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\""
Sep 5 00:24:18.859746 containerd[1452]: time="2025-09-05T00:24:18.859700335Z" level=info msg="RemoveContainer for \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\" returns successfully"
Sep 5 00:24:18.859947 kubelet[2510]: I0905 00:24:18.859904 2510 scope.go:117] "RemoveContainer" containerID="01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe"
Sep 5 00:24:18.860932 containerd[1452]: time="2025-09-05T00:24:18.860902693Z" level=info msg="RemoveContainer for \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\""
Sep 5 00:24:18.872329 containerd[1452]: time="2025-09-05T00:24:18.872273010Z" level=info msg="RemoveContainer for \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\" returns successfully"
Sep 5 00:24:18.872563 kubelet[2510]: I0905 00:24:18.872526 2510 scope.go:117] "RemoveContainer" containerID="61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5"
Sep 5 00:24:18.873465 containerd[1452]: time="2025-09-05T00:24:18.873443046Z" level=info msg="RemoveContainer for \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\""
Sep 5 00:24:18.876746 containerd[1452]: time="2025-09-05T00:24:18.876715302Z" level=info msg="RemoveContainer for \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\" returns successfully"
Sep 5 00:24:18.876884 kubelet[2510]: I0905 00:24:18.876855 2510 scope.go:117] "RemoveContainer" containerID="17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276"
Sep 5 00:24:18.877729 containerd[1452]: time="2025-09-05T00:24:18.877639739Z" level=info msg="RemoveContainer for \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\""
Sep 5 00:24:18.881229 containerd[1452]: time="2025-09-05T00:24:18.881173735Z" level=info msg="RemoveContainer for \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\" returns successfully"
Sep 5 00:24:18.881496 kubelet[2510]: I0905 00:24:18.881383 2510 scope.go:117] "RemoveContainer" containerID="07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f"
Sep 5 00:24:18.881676 containerd[1452]: time="2025-09-05T00:24:18.881615489Z" level=error msg="ContainerStatus for \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\": not found"
Sep 5 00:24:18.881796 kubelet[2510]: E0905 00:24:18.881771 2510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\": not found" containerID="07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f"
Sep 5 00:24:18.881855 kubelet[2510]: I0905 00:24:18.881807 2510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f"} err="failed to get container status \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"07b8b61769bbe5550ebd35ad00da2f6515360d6a9bd59565ed7cf0fe4c1ffe0f\": not found"
Sep 5 00:24:18.881855 kubelet[2510]: I0905 00:24:18.881831 2510 scope.go:117] "RemoveContainer" containerID="949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960"
Sep 5 00:24:18.882047 containerd[1452]: time="2025-09-05T00:24:18.882010144Z" level=error msg="ContainerStatus for \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\": not found"
Sep 5 00:24:18.882169 kubelet[2510]: E0905 00:24:18.882148 2510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\": not found" containerID="949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960"
Sep 5 00:24:18.882220 kubelet[2510]: I0905 00:24:18.882172 2510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960"} err="failed to get container status \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\": rpc error: code = NotFound desc = an error occurred when try to find container \"949f4b88dae2b318141721ca237575cef2e8b6478c4aa1312bbe4406999e9960\": not found"
Sep 5 00:24:18.882220 kubelet[2510]: I0905 00:24:18.882186 2510 scope.go:117] "RemoveContainer" containerID="01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe"
Sep 5 00:24:18.882368 containerd[1452]: time="2025-09-05T00:24:18.882331808Z" level=error msg="ContainerStatus for \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\": not found"
Sep 5 00:24:18.882507 kubelet[2510]: E0905 00:24:18.882480 2510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\": not found" containerID="01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe"
Sep 5 00:24:18.882574 kubelet[2510]: I0905 00:24:18.882508 2510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe"} err="failed to get container status \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"01e9a9aa9a1c860a2d648ba477c8becf346bf8de69b0858295167764b7cea2fe\": not found"
Sep 5 00:24:18.882574 kubelet[2510]: I0905 00:24:18.882526 2510 scope.go:117] "RemoveContainer" containerID="61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5"
Sep 5 00:24:18.882728 containerd[1452]: time="2025-09-05T00:24:18.882691887Z" level=error msg="ContainerStatus for \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\": not found"
Sep 5 00:24:18.882839 kubelet[2510]: E0905 00:24:18.882809 2510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\": not found" containerID="61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5"
Sep 5 00:24:18.882889 kubelet[2510]: I0905 00:24:18.882836 2510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5"} err="failed to get container status \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"61bb0a6c39d767381b1ed5f332a411b533009d0f816b5d93743a38650bb9bac5\": not found" Sep 5 00:24:18.882889 kubelet[2510]: I0905 00:24:18.882852 2510 scope.go:117] "RemoveContainer" containerID="17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276" Sep 5 00:24:18.883035 containerd[1452]: time="2025-09-05T00:24:18.883002240Z" level=error msg="ContainerStatus for \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\": not found" Sep 5 00:24:18.883163 kubelet[2510]: E0905 00:24:18.883133 2510 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\": not found" containerID="17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276" Sep 5 00:24:18.883220 kubelet[2510]: I0905 00:24:18.883160 2510 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276"} err="failed to get container status \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\": rpc error: code = NotFound desc = an error occurred when try to find container \"17add6e4fbc8c77aa24b2905042adf0a7177ce450b6e6d644528be00c190d276\": not found" Sep 5 00:24:19.089734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92-rootfs.mount: Deactivated successfully. Sep 5 00:24:19.089888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-217a1e2c856b249f30a595d145aba99c142bfb5bb183152c383e402d5e246d92-shm.mount: Deactivated successfully. 
Sep 5 00:24:19.089996 systemd[1]: var-lib-kubelet-pods-76ce4e2b\x2d38e6\x2d4930\x2d84ab\x2d14aff345feb3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4fthq.mount: Deactivated successfully.
Sep 5 00:24:19.090118 systemd[1]: var-lib-kubelet-pods-ae42665c\x2d257c\x2d4156\x2db48c\x2dfc5ddd651b05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 5 00:24:19.090260 systemd[1]: var-lib-kubelet-pods-ae42665c\x2d257c\x2d4156\x2db48c\x2dfc5ddd651b05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmn5k8.mount: Deactivated successfully.
Sep 5 00:24:19.090373 systemd[1]: var-lib-kubelet-pods-ae42665c\x2d257c\x2d4156\x2db48c\x2dfc5ddd651b05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 5 00:24:20.021549 sshd[4182]: pam_unix(sshd:session): session closed for user core
Sep 5 00:24:20.033404 systemd[1]: sshd@24-10.0.0.155:22-10.0.0.1:38134.service: Deactivated successfully.
Sep 5 00:24:20.036664 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:24:20.039016 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:24:20.051936 systemd[1]: Started sshd@25-10.0.0.155:22-10.0.0.1:38914.service - OpenSSH per-connection server daemon (10.0.0.1:38914).
Sep 5 00:24:20.053248 systemd-logind[1438]: Removed session 25.
Sep 5 00:24:20.085691 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 38914 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:24:20.087626 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:24:20.092215 systemd-logind[1438]: New session 26 of user core.
Sep 5 00:24:20.100572 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 5 00:24:20.190627 kubelet[2510]: E0905 00:24:20.190568 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:20.193773 kubelet[2510]: I0905 00:24:20.193741 2510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ce4e2b-38e6-4930-84ab-14aff345feb3" path="/var/lib/kubelet/pods/76ce4e2b-38e6-4930-84ab-14aff345feb3/volumes"
Sep 5 00:24:20.194366 kubelet[2510]: I0905 00:24:20.194337 2510 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae42665c-257c-4156-b48c-fc5ddd651b05" path="/var/lib/kubelet/pods/ae42665c-257c-4156-b48c-fc5ddd651b05/volumes"
Sep 5 00:24:20.401723 kubelet[2510]: I0905 00:24:20.401648 2510 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T00:24:20Z","lastTransitionTime":"2025-09-05T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 5 00:24:20.642760 sshd[4348]: pam_unix(sshd:session): session closed for user core
Sep 5 00:24:20.659257 systemd[1]: sshd@25-10.0.0.155:22-10.0.0.1:38914.service: Deactivated successfully.
Sep 5 00:24:20.665714 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:24:20.669620 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:24:20.675126 systemd-logind[1438]: Removed session 26.
Sep 5 00:24:20.686255 systemd[1]: Started sshd@26-10.0.0.155:22-10.0.0.1:38926.service - OpenSSH per-connection server daemon (10.0.0.1:38926).
Sep 5 00:24:20.696039 systemd[1]: Created slice kubepods-burstable-podf55ed5fe_56fd_4a7b_98b7_5ec98d21c227.slice - libcontainer container kubepods-burstable-podf55ed5fe_56fd_4a7b_98b7_5ec98d21c227.slice.
Sep 5 00:24:20.727150 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 38926 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:24:20.729503 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:24:20.740683 systemd-logind[1438]: New session 27 of user core.
Sep 5 00:24:20.747652 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:24:20.791063 kubelet[2510]: I0905 00:24:20.791026 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-host-proc-sys-net\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791166 kubelet[2510]: I0905 00:24:20.791067 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-xtables-lock\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791166 kubelet[2510]: I0905 00:24:20.791085 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgczk\" (UniqueName: \"kubernetes.io/projected/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-kube-api-access-rgczk\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791166 kubelet[2510]: I0905 00:24:20.791105 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-hostproc\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791264 kubelet[2510]: I0905 00:24:20.791208 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-cilium-ipsec-secrets\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791264 kubelet[2510]: I0905 00:24:20.791256 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-etc-cni-netd\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791311 kubelet[2510]: I0905 00:24:20.791277 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-cilium-run\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791311 kubelet[2510]: I0905 00:24:20.791293 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-cilium-cgroup\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791311 kubelet[2510]: I0905 00:24:20.791309 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-host-proc-sys-kernel\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791379 kubelet[2510]: I0905 00:24:20.791322 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-hubble-tls\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791379 kubelet[2510]: I0905 00:24:20.791336 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-lib-modules\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791379 kubelet[2510]: I0905 00:24:20.791349 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-clustermesh-secrets\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791379 kubelet[2510]: I0905 00:24:20.791369 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-cilium-config-path\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791498 kubelet[2510]: I0905 00:24:20.791384 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-bpf-maps\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.791498 kubelet[2510]: I0905 00:24:20.791397 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f55ed5fe-56fd-4a7b-98b7-5ec98d21c227-cni-path\") pod \"cilium-tzrng\" (UID: \"f55ed5fe-56fd-4a7b-98b7-5ec98d21c227\") " pod="kube-system/cilium-tzrng"
Sep 5 00:24:20.801601 sshd[4361]: pam_unix(sshd:session): session closed for user core
Sep 5 00:24:20.820666 systemd[1]: sshd@26-10.0.0.155:22-10.0.0.1:38926.service: Deactivated successfully.
Sep 5 00:24:20.823005 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:24:20.824962 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:24:20.834696 systemd[1]: Started sshd@27-10.0.0.155:22-10.0.0.1:38942.service - OpenSSH per-connection server daemon (10.0.0.1:38942).
Sep 5 00:24:20.835874 systemd-logind[1438]: Removed session 27.
Sep 5 00:24:20.867241 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 38942 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:24:20.869168 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:24:20.873300 systemd-logind[1438]: New session 28 of user core.
Sep 5 00:24:20.883554 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 5 00:24:21.002022 kubelet[2510]: E0905 00:24:21.001851 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:21.002678 containerd[1452]: time="2025-09-05T00:24:21.002617579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzrng,Uid:f55ed5fe-56fd-4a7b-98b7-5ec98d21c227,Namespace:kube-system,Attempt:0,}"
Sep 5 00:24:21.031738 containerd[1452]: time="2025-09-05T00:24:21.031455278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:24:21.031738 containerd[1452]: time="2025-09-05T00:24:21.031553836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:24:21.031738 containerd[1452]: time="2025-09-05T00:24:21.031574967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:24:21.031738 containerd[1452]: time="2025-09-05T00:24:21.031662534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:24:21.058604 systemd[1]: Started cri-containerd-674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1.scope - libcontainer container 674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1.
Sep 5 00:24:21.084895 containerd[1452]: time="2025-09-05T00:24:21.084839906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzrng,Uid:f55ed5fe-56fd-4a7b-98b7-5ec98d21c227,Namespace:kube-system,Attempt:0,} returns sandbox id \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\""
Sep 5 00:24:21.085966 kubelet[2510]: E0905 00:24:21.085922 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:21.093025 containerd[1452]: time="2025-09-05T00:24:21.092975717Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 5 00:24:21.107670 containerd[1452]: time="2025-09-05T00:24:21.107626975Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709\""
Sep 5 00:24:21.109278 containerd[1452]: time="2025-09-05T00:24:21.108412064Z" level=info msg="StartContainer for \"558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709\""
Sep 5 00:24:21.134603 systemd[1]: Started cri-containerd-558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709.scope - libcontainer container 558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709.
Sep 5 00:24:21.174732 containerd[1452]: time="2025-09-05T00:24:21.174674295Z" level=info msg="StartContainer for \"558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709\" returns successfully"
Sep 5 00:24:21.176362 systemd[1]: cri-containerd-558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709.scope: Deactivated successfully.
Sep 5 00:24:21.212450 containerd[1452]: time="2025-09-05T00:24:21.212359173Z" level=info msg="shim disconnected" id=558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709 namespace=k8s.io
Sep 5 00:24:21.212450 containerd[1452]: time="2025-09-05T00:24:21.212452982Z" level=warning msg="cleaning up after shim disconnected" id=558d33650b17c2361469cebe275b75aaf3307d5313902cebfe6d329fd35ba709 namespace=k8s.io
Sep 5 00:24:21.212769 containerd[1452]: time="2025-09-05T00:24:21.212467390Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:21.642061 kubelet[2510]: E0905 00:24:21.642020 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:21.651777 containerd[1452]: time="2025-09-05T00:24:21.651712040Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 5 00:24:21.665049 containerd[1452]: time="2025-09-05T00:24:21.664969468Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac\""
Sep 5 00:24:21.666558 containerd[1452]: time="2025-09-05T00:24:21.665686346Z" level=info msg="StartContainer for \"dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac\""
Sep 5 00:24:21.698691 systemd[1]: Started cri-containerd-dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac.scope - libcontainer container dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac.
Sep 5 00:24:21.728459 containerd[1452]: time="2025-09-05T00:24:21.728045066Z" level=info msg="StartContainer for \"dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac\" returns successfully"
Sep 5 00:24:21.737054 systemd[1]: cri-containerd-dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac.scope: Deactivated successfully.
Sep 5 00:24:21.762624 containerd[1452]: time="2025-09-05T00:24:21.762543721Z" level=info msg="shim disconnected" id=dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac namespace=k8s.io
Sep 5 00:24:21.762624 containerd[1452]: time="2025-09-05T00:24:21.762614075Z" level=warning msg="cleaning up after shim disconnected" id=dbc01dc75e9146844bed19f1d7f5ad6996a82bc10ed7f9369a3f4e66e9ba7dac namespace=k8s.io
Sep 5 00:24:21.762624 containerd[1452]: time="2025-09-05T00:24:21.762629835Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:22.646572 kubelet[2510]: E0905 00:24:22.646532 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:22.654477 containerd[1452]: time="2025-09-05T00:24:22.654399022Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:24:22.671354 containerd[1452]: time="2025-09-05T00:24:22.671287297Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145\""
Sep 5 00:24:22.671915 containerd[1452]: time="2025-09-05T00:24:22.671882482Z" level=info msg="StartContainer for \"d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145\""
Sep 5 00:24:22.705610 systemd[1]: Started cri-containerd-d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145.scope - libcontainer container d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145.
Sep 5 00:24:22.736155 containerd[1452]: time="2025-09-05T00:24:22.735875485Z" level=info msg="StartContainer for \"d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145\" returns successfully"
Sep 5 00:24:22.737641 systemd[1]: cri-containerd-d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145.scope: Deactivated successfully.
Sep 5 00:24:22.767669 containerd[1452]: time="2025-09-05T00:24:22.767603596Z" level=info msg="shim disconnected" id=d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145 namespace=k8s.io
Sep 5 00:24:22.767669 containerd[1452]: time="2025-09-05T00:24:22.767663109Z" level=warning msg="cleaning up after shim disconnected" id=d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145 namespace=k8s.io
Sep 5 00:24:22.767669 containerd[1452]: time="2025-09-05T00:24:22.767671695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:22.898386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4f4d9aa603775175a66d902570dd5776b8bd3a6329f00daf3752d4cb8f30145-rootfs.mount: Deactivated successfully.
Sep 5 00:24:23.237682 kubelet[2510]: E0905 00:24:23.237543 2510 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 5 00:24:23.654097 kubelet[2510]: E0905 00:24:23.654063 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:23.659767 containerd[1452]: time="2025-09-05T00:24:23.659717961Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:24:23.673341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255036684.mount: Deactivated successfully.
Sep 5 00:24:23.674555 containerd[1452]: time="2025-09-05T00:24:23.674511196Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165\""
Sep 5 00:24:23.675180 containerd[1452]: time="2025-09-05T00:24:23.675018212Z" level=info msg="StartContainer for \"bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165\""
Sep 5 00:24:23.706599 systemd[1]: Started cri-containerd-bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165.scope - libcontainer container bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165.
Sep 5 00:24:23.730997 systemd[1]: cri-containerd-bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165.scope: Deactivated successfully.
Sep 5 00:24:23.732525 containerd[1452]: time="2025-09-05T00:24:23.732492351Z" level=info msg="StartContainer for \"bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165\" returns successfully"
Sep 5 00:24:23.754562 containerd[1452]: time="2025-09-05T00:24:23.754488794Z" level=info msg="shim disconnected" id=bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165 namespace=k8s.io
Sep 5 00:24:23.754562 containerd[1452]: time="2025-09-05T00:24:23.754557455Z" level=warning msg="cleaning up after shim disconnected" id=bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165 namespace=k8s.io
Sep 5 00:24:23.754562 containerd[1452]: time="2025-09-05T00:24:23.754569979Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:24:23.899077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd23dc469deabff96a0fddaed95779196edbb19659ecfa36d0c011f5908a4165-rootfs.mount: Deactivated successfully.
Sep 5 00:24:24.657661 kubelet[2510]: E0905 00:24:24.657624 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:24.662346 containerd[1452]: time="2025-09-05T00:24:24.662307129Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 00:24:24.678306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977469735.mount: Deactivated successfully.
Sep 5 00:24:24.682556 containerd[1452]: time="2025-09-05T00:24:24.682487222Z" level=info msg="CreateContainer within sandbox \"674f3de0793b2b287b54189a4b172340cdc1637509d5d34e928f1242e4b98ff1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e6ef3b75be3a64cc87cb16a92966f40cc1f492b549461ffe10cdbacbdfbf7d8e\""
Sep 5 00:24:24.683309 containerd[1452]: time="2025-09-05T00:24:24.683180393Z" level=info msg="StartContainer for \"e6ef3b75be3a64cc87cb16a92966f40cc1f492b549461ffe10cdbacbdfbf7d8e\""
Sep 5 00:24:24.711560 systemd[1]: Started cri-containerd-e6ef3b75be3a64cc87cb16a92966f40cc1f492b549461ffe10cdbacbdfbf7d8e.scope - libcontainer container e6ef3b75be3a64cc87cb16a92966f40cc1f492b549461ffe10cdbacbdfbf7d8e.
Sep 5 00:24:24.747812 containerd[1452]: time="2025-09-05T00:24:24.747762536Z" level=info msg="StartContainer for \"e6ef3b75be3a64cc87cb16a92966f40cc1f492b549461ffe10cdbacbdfbf7d8e\" returns successfully"
Sep 5 00:24:25.170460 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 5 00:24:25.190435 kubelet[2510]: E0905 00:24:25.190390 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:25.662757 kubelet[2510]: E0905 00:24:25.662715 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:25.677786 kubelet[2510]: I0905 00:24:25.677310 2510 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tzrng" podStartSLOduration=5.67729168 podStartE2EDuration="5.67729168s" podCreationTimestamp="2025-09-05 00:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:24:25.676936763 +0000 UTC m=+87.582607854" watchObservedRunningTime="2025-09-05 00:24:25.67729168 +0000 UTC m=+87.582962751"
Sep 5 00:24:27.003648 kubelet[2510]: E0905 00:24:27.003611 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:27.187291 systemd[1]: run-containerd-runc-k8s.io-e6ef3b75be3a64cc87cb16a92966f40cc1f492b549461ffe10cdbacbdfbf7d8e-runc.NReEBy.mount: Deactivated successfully.
Sep 5 00:24:28.350663 systemd-networkd[1395]: lxc_health: Link UP
Sep 5 00:24:28.357802 systemd-networkd[1395]: lxc_health: Gained carrier
Sep 5 00:24:29.004236 kubelet[2510]: E0905 00:24:29.004195 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:29.190784 kubelet[2510]: E0905 00:24:29.190746 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:29.506728 systemd-networkd[1395]: lxc_health: Gained IPv6LL
Sep 5 00:24:29.669740 kubelet[2510]: E0905 00:24:29.669685 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:30.671292 kubelet[2510]: E0905 00:24:30.671255 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:24:33.542805 kubelet[2510]: E0905 00:24:33.542761 2510 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40262->127.0.0.1:37319: write tcp 127.0.0.1:40262->127.0.0.1:37319: write: broken pipe
Sep 5 00:24:33.546939 sshd[4369]: pam_unix(sshd:session): session closed for user core
Sep 5 00:24:33.551105 systemd[1]: sshd@27-10.0.0.155:22-10.0.0.1:38942.service: Deactivated successfully.
Sep 5 00:24:33.553305 systemd[1]: session-28.scope: Deactivated successfully.
Sep 5 00:24:33.554006 systemd-logind[1438]: Session 28 logged out. Waiting for processes to exit.
Sep 5 00:24:33.555075 systemd-logind[1438]: Removed session 28.