Sep 5 00:08:56.957398 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025 Sep 5 00:08:56.957439 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:08:56.957455 kernel: BIOS-provided physical RAM map: Sep 5 00:08:56.957463 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 5 00:08:56.957469 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 5 00:08:56.957478 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 5 00:08:56.957486 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 5 00:08:56.957497 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 5 00:08:56.957511 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 5 00:08:56.957528 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 5 00:08:56.957541 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 5 00:08:56.957549 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 5 00:08:56.957560 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 5 00:08:56.957573 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 5 00:08:56.957595 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 5 00:08:56.957608 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 5 00:08:56.957631 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 5 00:08:56.957649 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 5 00:08:56.957657 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 5 00:08:56.957671 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 5 00:08:56.957678 kernel: NX (Execute Disable) protection: active Sep 5 00:08:56.957685 kernel: APIC: Static calls initialized Sep 5 00:08:56.957695 kernel: efi: EFI v2.7 by EDK II Sep 5 00:08:56.957706 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Sep 5 00:08:56.957713 kernel: SMBIOS 2.8 present. 
Sep 5 00:08:56.957719 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 5 00:08:56.957730 kernel: Hypervisor detected: KVM Sep 5 00:08:56.957741 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 5 00:08:56.957747 kernel: kvm-clock: using sched offset of 5462126978 cycles Sep 5 00:08:56.957755 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 5 00:08:56.957762 kernel: tsc: Detected 2794.748 MHz processor Sep 5 00:08:56.957793 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:08:56.957810 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:08:56.957818 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 5 00:08:56.957825 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 5 00:08:56.957832 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:08:56.957843 kernel: Using GB pages for direct mapping Sep 5 00:08:56.957856 kernel: Secure boot disabled Sep 5 00:08:56.957865 kernel: ACPI: Early table checksum verification disabled Sep 5 00:08:56.957872 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 5 00:08:56.957885 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 5 00:08:56.957897 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:08:56.957904 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:08:56.957914 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 5 00:08:56.957921 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:08:56.957932 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:08:56.957939 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:08:56.957946 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:08:56.957954 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:08:56.957965 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 5 00:08:56.957979 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 5 00:08:56.957996 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 5 00:08:56.958003 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 5 00:08:56.958014 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 5 00:08:56.958025 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 5 00:08:56.958035 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 5 00:08:56.958043 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 5 00:08:56.958050 kernel: No NUMA configuration found Sep 5 00:08:56.958061 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 5 00:08:56.958075 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 5 00:08:56.958085 kernel: Zone ranges: Sep 5 00:08:56.958092 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:08:56.958101 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 5 00:08:56.958108 kernel: Normal empty Sep 5 00:08:56.958116 kernel: Movable zone start for each node Sep 5 00:08:56.958123 kernel: Early memory node ranges Sep 5 00:08:56.958130 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Sep 5 00:08:56.958137 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 5 00:08:56.958144 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 5 00:08:56.958166 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 5 00:08:56.958183 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 5 00:08:56.958196 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 5 00:08:56.958212 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 5 00:08:56.958219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:08:56.958229 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 5 00:08:56.958238 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 5 00:08:56.958245 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:08:56.958252 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 5 00:08:56.958263 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 5 00:08:56.958271 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 5 00:08:56.958278 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 5 00:08:56.958285 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 5 00:08:56.958295 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 5 00:08:56.958302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:08:56.958309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 5 00:08:56.958317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:08:56.958324 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 5 00:08:56.958334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 5 00:08:56.958341 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 00:08:56.958351 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:08:56.958359 kernel: TSC deadline timer available Sep 5 00:08:56.958366 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 5 00:08:56.958373 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 5 00:08:56.958380 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 5 00:08:56.958389 kernel: kvm-guest: setup PV sched yield Sep 5 00:08:56.958396 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 5 00:08:56.958408 kernel: Booting paravirtualized kernel on KVM Sep 5 00:08:56.958416 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:08:56.958434 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 5 00:08:56.958443 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 5 00:08:56.958450 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 5 00:08:56.958457 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 5 00:08:56.958464 kernel: kvm-guest: PV spinlocks enabled Sep 5 00:08:56.958472 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 5 00:08:56.958487 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:08:56.958503 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 00:08:56.958510 kernel: random: crng init done Sep 5 00:08:56.958517 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 00:08:56.958525 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:08:56.958534 kernel: Fallback order for Node 0: 0 Sep 5 00:08:56.958542 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 5 00:08:56.958549 kernel: Policy zone: DMA32 Sep 5 00:08:56.958556 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:08:56.958564 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 166140K reserved, 0K cma-reserved) Sep 5 00:08:56.958576 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 5 00:08:56.958584 kernel: ftrace: allocating 37969 entries in 149 pages Sep 5 00:08:56.958592 kernel: ftrace: allocated 149 pages with 4 groups Sep 5 00:08:56.958601 kernel: Dynamic Preempt: voluntary Sep 5 00:08:56.958621 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:08:56.958635 kernel: rcu: RCU event tracing is enabled. Sep 5 00:08:56.958653 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 5 00:08:56.958663 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:08:56.958679 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:08:56.958692 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:08:56.958701 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:08:56.958711 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 5 00:08:56.958726 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 5 00:08:56.958738 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 00:08:56.958757 kernel: Console: colour dummy device 80x25 Sep 5 00:08:56.958856 kernel: printk: console [ttyS0] enabled Sep 5 00:08:56.958869 kernel: ACPI: Core revision 20230628 Sep 5 00:08:56.958877 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 5 00:08:56.958884 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:08:56.958894 kernel: x2apic enabled Sep 5 00:08:56.958902 kernel: APIC: Switched APIC routing to: physical x2apic Sep 5 00:08:56.958909 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 5 00:08:56.958917 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 5 00:08:56.958925 kernel: kvm-guest: setup PV IPIs Sep 5 00:08:56.958932 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 5 00:08:56.958943 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 5 00:08:56.958951 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 5 00:08:56.958961 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 5 00:08:56.958968 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 5 00:08:56.958978 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 5 00:08:56.958990 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:08:56.958997 kernel: Spectre V2 : Mitigation: Retpolines Sep 5 00:08:56.959005 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 5 00:08:56.959012 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 5 00:08:56.959023 kernel: active return thunk: retbleed_return_thunk Sep 5 00:08:56.959030 kernel: RETBleed: Mitigation: untrained return thunk Sep 5 00:08:56.959038 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:08:56.959046 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:08:56.959056 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 5 00:08:56.959066 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 5 00:08:56.959076 kernel: active return thunk: srso_return_thunk Sep 5 00:08:56.959084 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 5 00:08:56.959091 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:08:56.959104 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:08:56.959111 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:08:56.959129 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:08:56.959143 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 5 00:08:56.959157 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:08:56.959177 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:08:56.959197 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 00:08:56.959216 kernel: landlock: Up and running. Sep 5 00:08:56.959224 kernel: SELinux: Initializing. Sep 5 00:08:56.959249 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:08:56.959264 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:08:56.959272 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 5 00:08:56.959280 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:08:56.959288 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:08:56.959298 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:08:56.959305 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 5 00:08:56.959322 kernel: ... version: 0 Sep 5 00:08:56.959343 kernel: ... bit width: 48 Sep 5 00:08:56.959351 kernel: ... generic registers: 6 Sep 5 00:08:56.959359 kernel: ... value mask: 0000ffffffffffff Sep 5 00:08:56.959367 kernel: ... max period: 00007fffffffffff Sep 5 00:08:56.959375 kernel: ... fixed-purpose events: 0 Sep 5 00:08:56.959389 kernel: ... 
event mask: 000000000000003f Sep 5 00:08:56.959397 kernel: signal: max sigframe size: 1776 Sep 5 00:08:56.959410 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:08:56.959418 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:08:56.959435 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:08:56.959447 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:08:56.959455 kernel: .... node #0, CPUs: #1 #2 #3 Sep 5 00:08:56.959462 kernel: smp: Brought up 1 node, 4 CPUs Sep 5 00:08:56.959487 kernel: smpboot: Max logical packages: 1 Sep 5 00:08:56.959496 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 5 00:08:56.959512 kernel: devtmpfs: initialized Sep 5 00:08:56.959521 kernel: x86/mm: Memory block size: 128MB Sep 5 00:08:56.959529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 5 00:08:56.959548 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 5 00:08:56.959570 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 5 00:08:56.959580 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 5 00:08:56.959593 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 5 00:08:56.959603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:08:56.959612 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 5 00:08:56.959625 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:08:56.959634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:08:56.959644 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:08:56.959653 kernel: audit: type=2000 audit(1757030936.096:1): state=initialized audit_enabled=0 res=1 Sep 5 00:08:56.959668 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:08:56.959676 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:08:56.959683 kernel: cpuidle: using governor menu Sep 5 00:08:56.959691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:08:56.959699 kernel: dca service started, version 1.12.1 Sep 5 00:08:56.959707 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 5 00:08:56.959714 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 5 00:08:56.959722 kernel: PCI: Using configuration type 1 for base access Sep 5 00:08:56.959729 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 5 00:08:56.959742 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:08:56.959757 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:08:56.959780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:08:56.959788 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:08:56.959807 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:08:56.959824 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:08:56.959832 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:08:56.959840 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 00:08:56.959856 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 5 00:08:56.959872 kernel: ACPI: Interpreter enabled Sep 5 00:08:56.959884 kernel: ACPI: PM: (supports S0 S3 S5) Sep 5 00:08:56.959893 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:08:56.959901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:08:56.959909 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:08:56.959917 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 5 00:08:56.959924 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 00:08:56.960226 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:08:56.960441 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 5 00:08:56.960630 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 5 00:08:56.960652 kernel: PCI host bridge to bus 0000:00 Sep 5 00:08:56.960898 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 00:08:56.961022 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:08:56.961161 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:08:56.961307 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 5 00:08:56.961477 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 5 00:08:56.961618 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 5 00:08:56.961795 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 00:08:56.962032 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 5 00:08:56.962441 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 5 00:08:56.962764 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 5 00:08:56.963136 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 5 00:08:56.963285 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 5 00:08:56.963479 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 5 00:08:56.963696 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:08:56.963973 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 5 00:08:56.964122 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 5 00:08:56.964294 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 5 00:08:56.964662 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 5 00:08:56.965139 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 5 00:08:56.965480 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 5 00:08:56.965641 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 5 00:08:56.965842 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 5 00:08:56.966085 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 5 00:08:56.966247 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 5 00:08:56.966422 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 5 00:08:56.966671 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 5 00:08:56.967223 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 5 00:08:56.967525 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 5 00:08:56.967722 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 5 00:08:56.967953 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 5 00:08:56.968195 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 5 00:08:56.968435 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 5 00:08:56.968818 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 5 00:08:56.969328 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 5 00:08:56.969353 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 5 00:08:56.969375 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 5 00:08:56.969396 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 5 00:08:56.969412 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 5 00:08:56.969441 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 5 00:08:56.969459 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 5 00:08:56.969481 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 5 00:08:56.969498 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 5 00:08:56.969514 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 5 00:08:56.969529 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 5 00:08:56.969543 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 5 00:08:56.969554 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 5 00:08:56.969563 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 5 00:08:56.969584 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 5 00:08:56.969594 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 5 00:08:56.969609 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 5 00:08:56.969619 kernel: iommu: Default domain type: Translated Sep 5 00:08:56.969629 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:08:56.969641 kernel: efivars: Registered efivars operations Sep 5 00:08:56.969651 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:08:56.969660 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:08:56.969680 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 5 00:08:56.969711 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 5 00:08:56.969729 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 5 00:08:56.969876 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 5 00:08:56.970261 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 5 00:08:56.970620 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 5 00:08:56.971057 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:08:56.971088 kernel: vgaarb: loaded Sep 5 00:08:56.971103 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 5 00:08:56.971142 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Sep 5 00:08:56.971160 kernel: clocksource: Switched to clocksource kvm-clock Sep 5 00:08:56.971169 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:08:56.971177 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:08:56.971185 kernel: pnp: PnP ACPI init Sep 5 00:08:56.971381 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 5 00:08:56.971396 kernel: pnp: PnP ACPI: found 6 devices Sep 5 00:08:56.971404 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:08:56.971413 kernel: NET: Registered PF_INET protocol family Sep 5 00:08:56.971444 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:08:56.971452 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 00:08:56.971460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:08:56.971471 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 00:08:56.971488 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 00:08:56.971503 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 00:08:56.971512 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:08:56.971519 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:08:56.971531 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:08:56.971556 kernel: NET: Registered PF_XDP protocol family Sep 5 00:08:56.972079 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 5 00:08:56.973056 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 5 00:08:56.973294 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:08:56.973533 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:08:56.973682 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:08:56.973820 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 5 00:08:56.973950 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 5 00:08:56.974145 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 5 00:08:56.974163 kernel: PCI: CLS 0 bytes, default 64 Sep 5 00:08:56.974173 kernel: Initialise system trusted keyrings Sep 5 00:08:56.974187 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 00:08:56.974214 kernel: Key type asymmetric registered Sep 5 00:08:56.974225 kernel: Asymmetric key parser 'x509' registered Sep 5 00:08:56.974238 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 00:08:56.974252 kernel: io scheduler mq-deadline registered Sep 5 00:08:56.974270 kernel: io scheduler kyber registered Sep 5 00:08:56.974287 kernel: io scheduler bfq registered Sep 5 00:08:56.974300 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:08:56.974312 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 5 00:08:56.974323 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 5 00:08:56.974332 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 5 00:08:56.974339 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:08:56.974347 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:08:56.974355 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Sep 5 00:08:56.974372 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 5 00:08:56.974383 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 5 00:08:56.974394 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 5 00:08:56.974600 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 5 00:08:56.974794 kernel: rtc_cmos 00:04: registered as rtc0 Sep 5 00:08:56.974948 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:08:56 UTC (1757030936) Sep 5 00:08:56.975068 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 5 00:08:56.975078 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 5 00:08:56.975092 kernel: efifb: probing for efifb Sep 5 00:08:56.975100 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 5 00:08:56.975108 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 5 00:08:56.975116 kernel: efifb: scrolling: redraw Sep 5 00:08:56.975124 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 5 00:08:56.975132 kernel: Console: switching to colour frame buffer device 100x37 Sep 5 00:08:56.975160 kernel: fb0: EFI VGA frame buffer device Sep 5 00:08:56.975171 kernel: pstore: Using crash dump compression: deflate Sep 5 00:08:56.975179 kernel: pstore: Registered efi_pstore as persistent store backend Sep 5 00:08:56.975190 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:08:56.975200 kernel: Segment Routing with IPv6 Sep 5 00:08:56.975208 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:08:56.975216 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:08:56.975224 kernel: Key type dns_resolver registered Sep 5 00:08:56.975232 kernel: IPI shorthand broadcast: enabled Sep 5 00:08:56.975240 kernel: sched_clock: Marking stable (1033002773, 110498077)->(1191705259, -48204409) Sep 5 00:08:56.975248 kernel: registered taskstats version 1 Sep 5 00:08:56.975256 kernel: Loading compiled-in X.509 certificates Sep 5 00:08:56.975267 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c' Sep 5 00:08:56.975275 kernel: Key type .fscrypt registered Sep 5 00:08:56.975283 kernel: Key type fscrypt-provisioning registered Sep 5 00:08:56.975291 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 00:08:56.975299 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:08:56.975308 kernel: ima: No architecture policies found Sep 5 00:08:56.975316 kernel: clk: Disabling unused clocks Sep 5 00:08:56.975324 kernel: Freeing unused kernel image (initmem) memory: 42872K Sep 5 00:08:56.975332 kernel: Write protecting the kernel read-only data: 36864k Sep 5 00:08:56.975343 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 5 00:08:56.975351 kernel: Run /init as init process Sep 5 00:08:56.975359 kernel: with arguments: Sep 5 00:08:56.975368 kernel: /init Sep 5 00:08:56.975375 kernel: with environment: Sep 5 00:08:56.975383 kernel: HOME=/ Sep 5 00:08:56.975391 kernel: TERM=linux Sep 5 00:08:56.975399 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:08:56.975410 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:08:56.975435 systemd[1]: Detected virtualization kvm. 
Sep 5 00:08:56.975444 systemd[1]: Detected architecture x86-64. Sep 5 00:08:56.975452 systemd[1]: Running in initrd. Sep 5 00:08:56.975464 systemd[1]: No hostname configured, using default hostname. Sep 5 00:08:56.975475 systemd[1]: Hostname set to . Sep 5 00:08:56.975484 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:08:56.975492 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:08:56.975501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:08:56.975510 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:08:56.975519 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 00:08:56.975528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:08:56.975537 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:08:56.975548 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:08:56.975558 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:08:56.975568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:08:56.975577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:08:56.975585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:08:56.975594 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:08:56.975605 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:08:56.975614 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:08:56.975623 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:08:56.975631 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:08:56.975640 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:08:56.975648 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:08:56.975657 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 00:08:56.975666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:08:56.975674 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:08:56.975686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:08:56.975694 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:08:56.975703 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:08:56.975712 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:08:56.975720 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:08:56.975729 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:08:56.975739 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:08:56.975749 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:08:56.975758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:08:56.975837 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:08:56.975848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 5 00:08:56.975857 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:08:56.975894 systemd-journald[192]: Collecting audit messages is disabled. Sep 5 00:08:56.975921 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:08:56.975930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:08:56.975939 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:08:56.975948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:08:56.975959 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:08:56.975969 systemd-journald[192]: Journal started Sep 5 00:08:56.975987 systemd-journald[192]: Runtime Journal (/run/log/journal/5f4949d994a34295b3ddd9fd2ed5517d) is 6.0M, max 48.3M, 42.2M free. Sep 5 00:08:56.949941 systemd-modules-load[193]: Inserted module 'overlay' Sep 5 00:08:57.019575 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:08:56.984973 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:08:56.989952 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:08:57.024945 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:08:57.016382 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:08:57.028289 kernel: Bridge firewalling registered Sep 5 00:08:57.027747 systemd-modules-load[193]: Inserted module 'br_netfilter' Sep 5 00:08:57.029452 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:08:57.033109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:08:57.035545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:08:57.038507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:08:57.041275 dracut-cmdline[215]: dracut-dracut-053 Sep 5 00:08:57.043624 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:08:57.047356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:08:57.054897 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:08:57.089796 systemd-resolved[246]: Positive Trust Anchors: Sep 5 00:08:57.089824 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:08:57.089855 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:08:57.092752 systemd-resolved[246]: Defaulting to hostname 'linux'. Sep 5 00:08:57.094119 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:08:57.099152 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:08:57.126796 kernel: SCSI subsystem initialized Sep 5 00:08:57.135791 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:08:57.145791 kernel: iscsi: registered transport (tcp) Sep 5 00:08:57.167799 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:08:57.167824 kernel: QLogic iSCSI HBA Driver Sep 5 00:08:57.224097 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:08:57.233906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:08:57.257792 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:08:57.257829 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:08:57.259328 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 00:08:57.299791 kernel: raid6: avx2x4 gen() 29471 MB/s Sep 5 00:08:57.316789 kernel: raid6: avx2x2 gen() 30500 MB/s Sep 5 00:08:57.333833 kernel: raid6: avx2x1 gen() 25700 MB/s Sep 5 00:08:57.333848 kernel: raid6: using algorithm avx2x2 gen() 30500 MB/s Sep 5 00:08:57.351848 kernel: raid6: .... xor() 19716 MB/s, rmw enabled Sep 5 00:08:57.351874 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:08:57.372797 kernel: xor: automatically using best checksumming function avx Sep 5 00:08:57.535819 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:08:57.549153 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:08:57.559993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:08:57.572317 systemd-udevd[414]: Using default interface naming scheme 'v255'. Sep 5 00:08:57.577611 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:08:57.590009 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:08:57.603058 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Sep 5 00:08:57.635436 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:08:57.654720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:08:57.727623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:08:57.737906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:08:57.753233 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:08:57.756272 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 5 00:08:57.759324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:08:57.761599 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:08:57.771061 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:08:57.776805 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:08:57.780462 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:08:57.784529 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:08:57.784550 kernel: GPT:9289727 != 19775487 Sep 5 00:08:57.784566 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:08:57.784581 kernel: GPT:9289727 != 19775487 Sep 5 00:08:57.784596 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:08:57.784331 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:08:57.788923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:08:57.789824 kernel: libata version 3.00 loaded. Sep 5 00:08:57.792963 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:08:57.796823 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:08:57.797052 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:08:57.798205 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 5 00:08:57.799835 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:08:57.810480 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 00:08:57.810537 kernel: AES CTR mode by8 optimization enabled Sep 5 00:08:57.815688 kernel: scsi host0: ahci Sep 5 00:08:57.816202 kernel: scsi host1: ahci Sep 5 00:08:57.816473 kernel: scsi host2: ahci Sep 5 00:08:57.815794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:08:57.820303 kernel: scsi host3: ahci Sep 5 00:08:57.820674 kernel: scsi host4: ahci Sep 5 00:08:57.820918 kernel: scsi host5: ahci Sep 5 00:08:57.816121 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:08:57.830992 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 5 00:08:57.831014 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 5 00:08:57.831056 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 5 00:08:57.831067 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 5 00:08:57.831077 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 5 00:08:57.831112 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 5 00:08:57.821092 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:08:57.840236 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (465) Sep 5 00:08:57.825317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:08:57.825597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:08:57.836974 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:08:57.845805 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471) Sep 5 00:08:57.846172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:08:57.875909 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Sep 5 00:08:57.887389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:08:57.897486 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 00:08:57.897754 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:08:57.902960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:08:57.918111 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:08:57.920348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:08:57.920583 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:08:57.921446 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:08:57.922657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:08:57.935640 disk-uuid[558]: Primary Header is updated. Sep 5 00:08:57.935640 disk-uuid[558]: Secondary Entries is updated. Sep 5 00:08:57.935640 disk-uuid[558]: Secondary Header is updated. Sep 5 00:08:57.939795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:08:57.942562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:08:57.946802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:08:57.951980 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:08:57.969451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:08:58.145665 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:08:58.145750 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:08:58.145762 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:08:58.145788 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:08:58.146803 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:08:58.147793 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:08:58.148809 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:08:58.148844 kernel: ata3.00: applying bridge limits Sep 5 00:08:58.149794 kernel: ata3.00: configured for UDMA/100 Sep 5 00:08:58.151803 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:08:58.195308 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:08:58.195587 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:08:58.207799 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 5 00:08:58.949801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:08:58.950049 disk-uuid[560]: The operation has completed successfully. Sep 5 00:08:58.989336 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:08:58.989537 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:08:59.032384 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:08:59.039800 sh[597]: Success Sep 5 00:08:59.064808 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 5 00:08:59.125652 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:08:59.157571 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:08:59.159159 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 5 00:08:59.184680 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d Sep 5 00:08:59.184759 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:08:59.184785 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 00:08:59.185984 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:08:59.186873 kernel: BTRFS info (device dm-0): using free space tree Sep 5 00:08:59.203890 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:08:59.208532 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:08:59.223197 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:08:59.228313 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:08:59.247249 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:08:59.247329 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:08:59.247349 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:08:59.257830 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:08:59.278728 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 00:08:59.281846 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:08:59.303366 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:08:59.314717 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 00:08:59.436139 ignition[687]: Ignition 2.19.0 Sep 5 00:08:59.436159 ignition[687]: Stage: fetch-offline Sep 5 00:08:59.436216 ignition[687]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:08:59.436236 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:08:59.436381 ignition[687]: parsed url from cmdline: "" Sep 5 00:08:59.436386 ignition[687]: no config URL provided Sep 5 00:08:59.436393 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:08:59.436404 ignition[687]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:08:59.436443 ignition[687]: op(1): [started] loading QEMU firmware config module Sep 5 00:08:59.437297 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:08:59.450990 ignition[687]: op(1): [finished] loading QEMU firmware config module Sep 5 00:08:59.481283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:08:59.493145 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:08:59.505034 ignition[687]: parsing config with SHA512: 67e1cb27d2e6f9806560c393a7ae40e972589d15ad00524539272e5eaaa58fe078eec259a15b7086d231fed8f8d58f30e18e74535283ac53b40a1ce8ea52b87c Sep 5 00:08:59.510247 unknown[687]: fetched base config from "system" Sep 5 00:08:59.510268 unknown[687]: fetched user config from "qemu" Sep 5 00:08:59.510834 ignition[687]: fetch-offline: fetch-offline passed Sep 5 00:08:59.514378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 5 00:08:59.510936 ignition[687]: Ignition finished successfully Sep 5 00:08:59.535316 systemd-networkd[787]: lo: Link UP Sep 5 00:08:59.535333 systemd-networkd[787]: lo: Gained carrier Sep 5 00:08:59.537702 systemd-networkd[787]: Enumeration completed Sep 5 00:08:59.537881 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:08:59.539843 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:08:59.539850 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:08:59.540994 systemd-networkd[787]: eth0: Link UP Sep 5 00:08:59.540999 systemd-networkd[787]: eth0: Gained carrier Sep 5 00:08:59.541009 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:08:59.542184 systemd[1]: Reached target network.target - Network. Sep 5 00:08:59.544858 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:08:59.554083 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:08:59.568999 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:08:59.582376 ignition[790]: Ignition 2.19.0 Sep 5 00:08:59.582396 ignition[790]: Stage: kargs Sep 5 00:08:59.582665 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:08:59.582681 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:08:59.583937 ignition[790]: kargs: kargs passed Sep 5 00:08:59.584007 ignition[790]: Ignition finished successfully Sep 5 00:08:59.591175 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:08:59.832359 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 00:08:59.867043 ignition[798]: Ignition 2.19.0 Sep 5 00:08:59.867812 ignition[798]: Stage: disks Sep 5 00:08:59.868065 ignition[798]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:08:59.868080 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:08:59.880452 ignition[798]: disks: disks passed Sep 5 00:08:59.880576 ignition[798]: Ignition finished successfully Sep 5 00:08:59.887880 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:08:59.888561 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:08:59.890681 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:08:59.891227 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:08:59.891636 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:08:59.892221 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:08:59.909143 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:08:59.941804 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 5 00:08:59.951988 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:08:59.969968 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:09:00.163844 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none. Sep 5 00:09:00.168168 systemd[1]: Mounted sysroot.mount - /sysroot. 
Sep 5 00:09:00.170716 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:09:00.193468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:09:00.205541 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:09:00.213997 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Sep 5 00:09:00.214038 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:09:00.214055 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:09:00.214070 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:09:00.213230 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 5 00:09:00.220048 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:09:00.213316 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:09:00.213407 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:09:00.222142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:09:00.225853 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:09:00.246195 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:09:00.324787 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:09:00.336951 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:09:00.346845 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:09:00.357478 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:09:00.570120 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:09:00.577979 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:09:00.586293 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:09:00.596911 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:09:00.598663 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:09:00.704111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 00:09:00.726491 ignition[928]: INFO : Ignition 2.19.0 Sep 5 00:09:00.726491 ignition[928]: INFO : Stage: mount Sep 5 00:09:00.740487 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:09:00.740487 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:09:00.740487 ignition[928]: INFO : mount: mount passed Sep 5 00:09:00.740487 ignition[928]: INFO : Ignition finished successfully Sep 5 00:09:00.749747 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:09:00.764626 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:09:00.781974 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 5 00:09:00.807832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Sep 5 00:09:00.810743 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:09:00.810807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:09:00.810825 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:09:00.820839 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:09:00.825090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:09:00.883566 ignition[959]: INFO : Ignition 2.19.0 Sep 5 00:09:00.883566 ignition[959]: INFO : Stage: files Sep 5 00:09:00.885941 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:09:00.885941 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:09:00.885941 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:09:00.890795 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:09:00.890795 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:09:00.897287 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:09:00.899161 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:09:00.900882 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:09:00.900136 unknown[959]: wrote ssh authorized keys file for user: core Sep 5 00:09:00.904181 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:09:00.906536 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 5 00:09:00.950099 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:09:01.281956 systemd-networkd[787]: eth0: Gained IPv6LL Sep 5 00:09:01.283141 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:09:01.283141 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:09:01.283141 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 5 00:09:01.384611 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 00:09:01.603502 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:09:01.603502 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:09:01.607198 ignition[959]: INFO : 
files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:01.607198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 5 00:09:02.229419 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 5 00:09:02.937579 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:02.937579 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 5 00:09:02.941312 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:09:02.943906 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:09:02.943906 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 5 00:09:02.943906 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 5 00:09:02.948051 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:09:02.949930 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:09:02.949930 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 5 00:09:02.952970 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:09:02.981950 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:09:02.990313 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:09:02.991942 ignition[959]: INFO : files: op(10): 
[finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:09:02.991942 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:09:02.994598 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:09:02.996039 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:09:02.997762 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:09:02.999375 ignition[959]: INFO : files: files passed Sep 5 00:09:03.000090 ignition[959]: INFO : Ignition finished successfully Sep 5 00:09:03.003574 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:09:03.014012 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:09:03.015469 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:09:03.018943 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:09:03.020050 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:09:03.026954 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:09:03.031442 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:09:03.031442 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:09:03.034570 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:09:03.039108 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:09:03.041838 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:09:03.053937 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:09:03.082661 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:09:03.082824 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:09:03.083675 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:09:03.088104 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:09:03.088487 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:09:03.091609 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:09:03.113268 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:09:03.126033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:09:03.138280 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:09:03.140538 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:09:03.141071 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:09:03.141370 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:09:03.141508 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:09:03.145034 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
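The files stage above can be driven by an Ignition config along the following lines; this is a hedged sketch in spec-3.x JSON that mirrors the paths and URLs visible in the log (the spec version, SSH key placeholder, and omitted unit contents are illustrative, not taken from the machine's actual config):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... (placeholder)" ] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" } },
          { "path": "/etc/flatcar/update.conf", "mode": 420 }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }

Each storage.files entry corresponds to one createFiles op(n) above, the links entry to op(a), and the systemd.units entries to the preset operations op(10) and op(12).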
Sep 5 00:09:03.145513 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:09:03.145853 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:09:03.146317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:09:03.146634 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:09:03.147308 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:09:03.147607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:09:03.148101 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:09:03.148424 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:09:03.149044 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:09:03.149329 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:09:03.149465 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:09:03.167895 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:09:03.168425 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:09:03.168700 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:09:03.168849 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:09:03.169202 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:09:03.169348 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:09:03.178709 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:09:03.178976 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:09:03.179586 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:09:03.184176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:09:03.187870 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:09:03.190960 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:09:03.193116 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:09:03.195240 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:09:03.196221 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:09:03.198352 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:09:03.199352 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:09:03.201483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:09:03.202840 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:09:03.205766 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:09:03.206898 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:09:03.221018 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:09:03.223105 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:09:03.224245 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:09:03.227640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:09:03.229811 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:09:03.231130 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 5 00:09:03.233675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:09:03.235909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:09:03.244262 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:09:03.256586 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:09:03.269363 ignition[1015]: INFO : Ignition 2.19.0 Sep 5 00:09:03.269363 ignition[1015]: INFO : Stage: umount Sep 5 00:09:03.270928 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:09:03.270928 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:09:03.273075 ignition[1015]: INFO : umount: umount passed Sep 5 00:09:03.273075 ignition[1015]: INFO : Ignition finished successfully Sep 5 00:09:03.274725 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:09:03.274880 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:09:03.277331 systemd[1]: Stopped target network.target - Network. Sep 5 00:09:03.278398 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:09:03.278465 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:09:03.280224 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:09:03.280298 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:09:03.282150 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:09:03.282203 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:09:03.284048 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:09:03.284100 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:09:03.286184 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:09:03.288025 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:09:03.292274 systemd-networkd[787]: eth0: DHCPv6 lease lost Sep 5 00:09:03.296451 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:09:03.296613 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:09:03.298586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:09:03.298675 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:09:03.302701 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:09:03.302893 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:09:03.304505 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:09:03.304550 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:09:03.312958 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:09:03.313850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:09:03.313912 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:09:03.316053 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:09:03.316104 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:09:03.318139 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:09:03.318190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:09:03.320502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 5 00:09:03.331328 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:09:03.331472 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:09:03.334197 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:09:03.334386 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:09:03.336642 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:09:03.336747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:09:03.337895 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:09:03.337938 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:09:03.340027 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:09:03.340079 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:09:03.342409 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:09:03.342467 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:09:03.343939 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:09:03.343993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:09:03.355977 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:09:03.357209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:09:03.357307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:09:03.359888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:09:03.360000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:09:03.369049 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:09:03.369179 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:09:04.014973 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:09:04.063717 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:09:04.063888 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:09:04.064568 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:09:04.068474 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:09:04.068552 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:09:04.082069 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:09:04.091419 systemd[1]: Switching root. Sep 5 00:09:04.133858 systemd-journald[192]: Journal stopped Sep 5 00:09:05.790491 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
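The "Switching root" / "Journal stopped" / SIGTERM sequence above is the initrd handing control to the real root filesystem. The operation initrd-switch-root.service performs is approximately the following (a sketch of the mechanism, not a command captured in this log):

    # performed by initrd-switch-root.service at the end of the initrd phase
    systemctl --no-block switch-root /sysroot

PID 1 then re-executes inside /sysroot, which is why journald receives SIGTERM here and starts again with a fresh runtime journal a moment later.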
Sep 5 00:09:05.790552 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:09:05.790575 kernel: SELinux: policy capability open_perms=1 Sep 5 00:09:05.790587 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:09:05.790598 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:09:05.790615 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:09:05.790628 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:09:05.790639 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:09:05.790654 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:09:05.790666 kernel: audit: type=1403 audit(1757030945.000:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:09:05.790688 systemd[1]: Successfully loaded SELinux policy in 42.321ms. Sep 5 00:09:05.790703 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.762ms. Sep 5 00:09:05.790716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:09:05.790729 systemd[1]: Detected virtualization kvm. Sep 5 00:09:05.790741 systemd[1]: Detected architecture x86-64. Sep 5 00:09:05.790753 systemd[1]: Detected first boot. Sep 5 00:09:05.790765 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:09:05.790797 zram_generator::config[1059]: No configuration found. Sep 5 00:09:05.790810 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:09:05.790823 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:09:05.790835 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:09:05.790847 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:09:05.790860 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:09:05.790873 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:09:05.790885 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:09:05.790903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:09:05.790915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:09:05.790929 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:09:05.790941 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:09:05.790953 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:09:05.790966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:09:05.790978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:09:05.790991 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:09:05.791003 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:09:05.791021 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:09:05.791033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
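The "zram_generator::config ... No configuration found" line simply means no zram devices were requested. If one were wanted, a drop-in along these lines would request it (a hypothetical example; no such file exists on this system):

    # /etc/systemd/zram-generator.conf  (hypothetical)
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd

With that file in place the generator would emit a systemd-zram-setup@zram0.service unit at boot instead of reporting no configuration.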
Sep 5 00:09:05.791046 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:09:05.791058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:09:05.791070 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:09:05.791082 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:09:05.791094 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:09:05.791116 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:09:05.791129 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:09:05.791141 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:09:05.791153 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:09:05.791165 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:09:05.791187 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:09:05.791203 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:09:05.791220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:09:05.791235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:09:05.791251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:09:05.791273 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:09:05.791288 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:09:05.791303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:09:05.791318 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:09:05.791331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:05.791343 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:09:05.791355 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:09:05.791368 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:09:05.791386 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:09:05.791398 systemd[1]: Reached target machines.target - Containers. Sep 5 00:09:05.791410 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:09:05.791422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:05.791435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:09:05.791447 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:09:05.791459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:05.791471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:09:05.791483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:05.791502 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:09:05.791515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 5 00:09:05.791528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:09:05.791540 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:09:05.791552 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:09:05.791564 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:09:05.791576 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:09:05.791588 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:09:05.791605 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:09:05.791617 kernel: loop: module loaded Sep 5 00:09:05.791628 kernel: fuse: init (API version 7.39) Sep 5 00:09:05.791640 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:09:05.791671 systemd-journald[1122]: Collecting audit messages is disabled. Sep 5 00:09:05.791697 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:09:05.791709 systemd-journald[1122]: Journal started Sep 5 00:09:05.791737 systemd-journald[1122]: Runtime Journal (/run/log/journal/5f4949d994a34295b3ddd9fd2ed5517d) is 6.0M, max 48.3M, 42.2M free. Sep 5 00:09:05.554203 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:09:05.578861 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:09:05.579474 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:09:05.798852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:09:05.800994 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:09:05.801032 systemd[1]: Stopped verity-setup.service. Sep 5 00:09:05.804413 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:05.807789 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:09:05.809392 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:09:05.810803 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:09:05.812281 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:09:05.813584 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:09:05.814994 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:09:05.816438 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:09:05.817863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:09:05.819678 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:09:05.819942 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:09:05.821664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:05.821921 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:05.823591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:05.823757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:05.825561 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:09:05.825728 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Sep 5 00:09:05.829854 kernel: ACPI: bus type drm_connector registered Sep 5 00:09:05.827617 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:05.827798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:05.830090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:09:05.831768 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:09:05.831958 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:09:05.833525 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:09:05.835340 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:09:05.849116 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:09:05.857911 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:09:05.861537 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:09:05.862899 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:09:05.862946 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:09:05.865574 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:09:05.868966 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:09:05.885950 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:09:05.887343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:05.889282 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:09:05.891609 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:09:05.893186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:09:05.898800 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:09:05.900426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:09:05.905100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:09:05.909656 systemd-journald[1122]: Time spent on flushing to /var/log/journal/5f4949d994a34295b3ddd9fd2ed5517d is 32.688ms for 995 entries. Sep 5 00:09:05.909656 systemd-journald[1122]: System Journal (/var/log/journal/5f4949d994a34295b3ddd9fd2ed5517d) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:09:06.020712 systemd-journald[1122]: Received client request to flush runtime journal. Sep 5 00:09:06.020792 kernel: loop0: detected capacity change from 0 to 229808 Sep 5 00:09:06.020836 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:09:06.020860 kernel: loop1: detected capacity change from 0 to 140768 Sep 5 00:09:05.908979 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:09:05.913496 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:09:05.915088 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 5 00:09:05.916984 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:09:05.928686 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:09:05.941087 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:09:05.953566 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 00:09:05.959644 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:09:05.978112 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:09:05.980013 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:09:05.989987 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:09:05.991943 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:09:05.998369 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:09:06.023641 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:09:06.028081 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:09:06.029913 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:09:06.040813 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:09:06.050949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:09:06.061810 kernel: loop2: detected capacity change from 0 to 142488 Sep 5 00:09:06.074969 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Sep 5 00:09:06.074994 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Sep 5 00:09:06.086841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:09:06.120040 kernel: loop3: detected capacity change from 0 to 229808 Sep 5 00:09:06.128797 kernel: loop4: detected capacity change from 0 to 140768 Sep 5 00:09:06.140806 kernel: loop5: detected capacity change from 0 to 142488 Sep 5 00:09:06.150756 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:09:06.151443 (sd-merge)[1198]: Merged extensions into '/usr'. Sep 5 00:09:06.155676 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:09:06.155694 systemd[1]: Reloading... Sep 5 00:09:06.219802 zram_generator::config[1224]: No configuration found. Sep 5 00:09:06.291536 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:09:06.347041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:09:06.397151 systemd[1]: Reloading finished in 240 ms. Sep 5 00:09:06.434476 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:09:06.436240 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:09:06.449036 systemd[1]: Starting ensure-sysext.service... Sep 5 00:09:06.451181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
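The (sd-merge) lines above show systemd-sysext overlaying the three extension images, including the kubernetes image whose /etc/extensions/kubernetes.raw link was written by Ignition earlier, onto /usr. On a running system the result can be inspected with the standard verbs, shown here only as an illustration:

    systemd-sysext status    # which hierarchies (/usr, /opt) currently have extensions merged
    systemd-sysext list      # extension images found under /etc/extensions, /run/extensions, /var/lib/extensions
    systemd-sysext refresh   # unmerge and re-merge after adding or removing an image

The loop0-loop5 "detected capacity change" kernel messages just above are the loop devices being attached to back those extension images.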
Sep 5 00:09:06.458724 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:09:06.458734 systemd[1]: Reloading... Sep 5 00:09:06.481931 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:09:06.482755 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:09:06.483901 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:09:06.484292 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 5 00:09:06.484440 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 5 00:09:06.488579 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:09:06.488596 systemd-tmpfiles[1263]: Skipping /boot Sep 5 00:09:06.504659 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:09:06.504676 systemd-tmpfiles[1263]: Skipping /boot Sep 5 00:09:06.516804 zram_generator::config[1292]: No configuration found. Sep 5 00:09:06.640114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:09:06.697857 systemd[1]: Reloading finished in 238 ms. Sep 5 00:09:06.714989 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:09:06.734513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:09:06.745426 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:09:06.748787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:09:06.751699 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:09:06.757023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:09:06.761286 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:09:06.769893 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:09:06.776204 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:06.776399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:06.779386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:06.796467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:06.801202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:09:06.801791 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Sep 5 00:09:06.802433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:06.804614 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:09:06.805676 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:06.807165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
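The "Duplicate line for path ..., ignoring" notices above are systemd-tmpfiles precedence at work: when more than one tmpfiles.d fragment declares the same path, the first declaration encountered wins and later ones are skipped with exactly this message. The deduplicated lines follow the usual tmpfiles.d columns, for example (an illustrative line, not one of the actual shipped fragments):

    # Type  Path              Mode  User  Group            Age  Argument
    d       /var/log/journal  2755  root  systemd-journal  -    -

The "Detected autofs mount point /boot ... Skipping /boot" pair is similar housekeeping: /boot sits behind the boot.automount unit set up earlier, so tmpfiles leaves it alone rather than triggering the automount.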
Sep 5 00:09:06.809553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:06.809847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:06.812300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:06.812632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:06.815205 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:06.815385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:06.827659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:06.828319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:06.831864 augenrules[1357]: No rules Sep 5 00:09:06.840142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:06.845172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:06.850679 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:09:06.851821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:06.853351 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:09:06.856108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:06.857411 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:09:06.859480 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:09:06.862655 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:09:06.865432 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:09:06.869260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:06.869444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:06.871599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:06.871855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:06.874538 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:06.874758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:06.877318 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:09:06.886189 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:09:06.907803 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1372) Sep 5 00:09:06.917167 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:09:06.925155 systemd[1]: Finished ensure-sysext.service. Sep 5 00:09:06.930055 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:06.930218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:06.939966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 5 00:09:06.942611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:09:06.945906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:06.948140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:09:06.949322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:06.953944 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:09:06.958001 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:09:06.961842 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:09:06.961869 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:06.962524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:06.962717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:06.964239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:06.964418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:06.966132 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:06.966325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:06.967878 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:09:06.968050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:09:06.980740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:09:06.980954 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:09:06.987083 systemd-resolved[1332]: Positive Trust Anchors: Sep 5 00:09:06.987101 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:09:06.987135 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:09:06.991068 systemd-resolved[1332]: Defaulting to hostname 'linux'. Sep 5 00:09:06.992951 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:09:06.994329 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:09:06.998217 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:09:07.004801 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 5 00:09:07.007998 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 5 00:09:07.015825 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:09:07.025561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:09:07.046315 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 5 00:09:07.050349 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 5 00:09:07.050735 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:09:07.050975 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 5 00:09:07.051243 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:09:07.059950 systemd-networkd[1409]: lo: Link UP Sep 5 00:09:07.059971 systemd-networkd[1409]: lo: Gained carrier Sep 5 00:09:07.063872 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:09:07.064908 systemd-networkd[1409]: Enumeration completed Sep 5 00:09:07.065335 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:09:07.065346 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:09:07.065488 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:09:07.066466 systemd-networkd[1409]: eth0: Link UP Sep 5 00:09:07.066471 systemd-networkd[1409]: eth0: Gained carrier Sep 5 00:09:07.066483 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:09:07.067975 systemd[1]: Reached target network.target - Network. Sep 5 00:09:07.070015 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:09:07.077915 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:09:07.080098 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Sep 5 00:09:07.082007 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:09:07.696692 systemd-resolved[1332]: Clock change detected. Flushing caches. Sep 5 00:09:07.696813 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:09:07.696861 systemd-timesyncd[1410]: Initial clock synchronization to Fri 2025-09-05 00:09:07.696647 UTC. Sep 5 00:09:07.719345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:09:07.732372 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:09:07.732656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:09:07.739702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:09:07.775103 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:09:07.788626 kernel: kvm_amd: TSC scaling supported Sep 5 00:09:07.788701 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:09:07.788720 kernel: kvm_amd: Nested Paging enabled Sep 5 00:09:07.789149 kernel: kvm_amd: LBR virtualization supported Sep 5 00:09:07.790577 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:09:07.790607 kernel: kvm_amd: Virtual GIF supported Sep 5 00:09:07.813099 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:09:07.819820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
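systemd-timesyncd synchronizes against 10.0.0.1 here, which is also the DHCP server and gateway, so the NTP server was most likely handed out in the DHCP lease (networkd passes lease-provided NTP servers to timesyncd by default). A static equivalent would be a drop-in along these lines (hypothetical; not present on this system):

    # /etc/systemd/timesyncd.conf  (hypothetical)
    [Time]
    NTP=10.0.0.1

The "Clock change detected. Flushing caches." line from systemd-resolved and the small jump in journal timestamps just above reflect the one-off step adjustment applied at the initial synchronization.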
Sep 5 00:09:07.846536 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:09:07.861287 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:09:07.869535 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:09:07.899184 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:09:07.900782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:09:07.901937 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:09:07.903282 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:09:07.904621 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:09:07.906144 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:09:07.907356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:09:07.908848 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:09:07.910272 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:09:07.910300 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:09:07.911322 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:09:07.913265 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:09:07.916495 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:09:07.935075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:09:07.937817 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:09:07.939701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:09:07.940929 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:09:07.941954 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:09:07.943012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:09:07.943048 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:09:07.944251 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:09:07.946629 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:09:07.951183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:09:07.955987 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:09:07.957387 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:09:07.959999 jq[1443]: false Sep 5 00:09:07.961296 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:09:07.965093 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:09:07.967209 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:09:07.972271 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:09:07.975499 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 5 00:09:07.980384 extend-filesystems[1444]: Found loop3 Sep 5 00:09:07.980384 extend-filesystems[1444]: Found loop4 Sep 5 00:09:07.980384 extend-filesystems[1444]: Found loop5 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found sr0 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda1 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda2 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda3 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found usr Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda4 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda6 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda7 Sep 5 00:09:07.984534 extend-filesystems[1444]: Found vda9 Sep 5 00:09:07.984534 extend-filesystems[1444]: Checking size of /dev/vda9 Sep 5 00:09:07.994089 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:09:07.993174 dbus-daemon[1442]: [system] SELinux support is enabled Sep 5 00:09:07.996885 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:09:07.999899 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:09:08.008995 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:09:08.014144 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:09:08.016101 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:09:08.019507 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:09:08.021447 jq[1462]: true Sep 5 00:09:08.023467 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:09:08.023693 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:09:08.024052 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:09:08.025330 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:09:08.030177 update_engine[1461]: I20250905 00:09:08.030058 1461 main.cc:92] Flatcar Update Engine starting Sep 5 00:09:08.034355 update_engine[1461]: I20250905 00:09:08.031519 1461 update_check_scheduler.cc:74] Next update check in 11m26s Sep 5 00:09:08.035422 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:09:08.036083 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:09:08.047572 jq[1465]: true Sep 5 00:09:08.058478 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:09:08.060819 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Sep 5 00:09:08.060844 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:09:08.062190 systemd-logind[1457]: New seat seat0. Sep 5 00:09:08.065041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:09:08.065727 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
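update_engine starts here and schedules its first check, and locksmithd (started shortly after this) announces the "reboot" strategy; both consult /etc/flatcar/update.conf, which the Ignition files stage wrote earlier. Its contents are not shown in the log, but a typical file is a short sketch like this (hypothetical values):

    # /etc/flatcar/update.conf  (contents not captured in this log)
    GROUP=stable
    REBOOT_STRATEGY=reboot

GROUP selects the update group/channel update_engine polls, and REBOOT_STRATEGY tells locksmithd how to apply a downloaded update (reboot, etcd-lock, best-effort, or off).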
Sep 5 00:09:08.066720 tar[1464]: linux-amd64/LICENSE Sep 5 00:09:08.068041 tar[1464]: linux-amd64/helm Sep 5 00:09:08.068590 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:09:08.068633 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:09:08.069934 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:09:08.071368 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:09:08.084157 extend-filesystems[1444]: Resized partition /dev/vda9 Sep 5 00:09:08.085569 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:09:08.095195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1380) Sep 5 00:09:08.096959 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:09:08.258132 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:09:08.283137 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:09:08.298309 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:09:08.307630 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:09:08.307891 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:09:08.310630 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:09:08.347811 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:09:08.357343 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:09:08.375383 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:09:08.377665 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:09:08.378926 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:09:08.395631 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:09:08.415099 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:09:08.444837 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:09:08.444837 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:09:08.444837 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:09:08.449195 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Sep 5 00:09:08.449676 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:09:08.447441 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:09:08.448616 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:09:08.452254 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:09:08.455749 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:09:08.517692 containerd[1466]: time="2025-09-05T00:09:08.517593242Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:09:08.539615 containerd[1466]: time="2025-09-05T00:09:08.539530239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.541640 containerd[1466]: time="2025-09-05T00:09:08.541589753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:08.541640 containerd[1466]: time="2025-09-05T00:09:08.541618937Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:09:08.541640 containerd[1466]: time="2025-09-05T00:09:08.541637332Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:09:08.541898 containerd[1466]: time="2025-09-05T00:09:08.541865770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:09:08.541898 containerd[1466]: time="2025-09-05T00:09:08.541890677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542007 containerd[1466]: time="2025-09-05T00:09:08.541983742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542007 containerd[1466]: time="2025-09-05T00:09:08.542004000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542322 containerd[1466]: time="2025-09-05T00:09:08.542286269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542322 containerd[1466]: time="2025-09-05T00:09:08.542316726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542392 containerd[1466]: time="2025-09-05T00:09:08.542335732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542392 containerd[1466]: time="2025-09-05T00:09:08.542351411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542548 containerd[1466]: time="2025-09-05T00:09:08.542522773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.542879 containerd[1466]: time="2025-09-05T00:09:08.542845087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:08.543034 containerd[1466]: time="2025-09-05T00:09:08.543001941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:08.543034 containerd[1466]: time="2025-09-05T00:09:08.543024524Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:09:08.543201 containerd[1466]: time="2025-09-05T00:09:08.543177110Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 5 00:09:08.543277 containerd[1466]: time="2025-09-05T00:09:08.543256278Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:09:08.548361 containerd[1466]: time="2025-09-05T00:09:08.548320958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:09:08.548409 containerd[1466]: time="2025-09-05T00:09:08.548380570Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:09:08.548473 containerd[1466]: time="2025-09-05T00:09:08.548408232Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:09:08.548473 containerd[1466]: time="2025-09-05T00:09:08.548447015Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:09:08.548561 containerd[1466]: time="2025-09-05T00:09:08.548471150Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:09:08.548675 containerd[1466]: time="2025-09-05T00:09:08.548630769Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:09:08.548997 containerd[1466]: time="2025-09-05T00:09:08.548964055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:09:08.549146 containerd[1466]: time="2025-09-05T00:09:08.549123203Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:09:08.549173 containerd[1466]: time="2025-09-05T00:09:08.549148010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:09:08.549173 containerd[1466]: time="2025-09-05T00:09:08.549164330Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:09:08.549209 containerd[1466]: time="2025-09-05T00:09:08.549181432Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549209 containerd[1466]: time="2025-09-05T00:09:08.549198314Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549257 containerd[1466]: time="2025-09-05T00:09:08.549213182Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549257 containerd[1466]: time="2025-09-05T00:09:08.549228671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549257 containerd[1466]: time="2025-09-05T00:09:08.549243900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549308 containerd[1466]: time="2025-09-05T00:09:08.549256142Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549308 containerd[1466]: time="2025-09-05T00:09:08.549267073Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:09:08.549308 containerd[1466]: time="2025-09-05T00:09:08.549278104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 5 00:09:08.549308 containerd[1466]: time="2025-09-05T00:09:08.549297270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549389 containerd[1466]: time="2025-09-05T00:09:08.549313139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549389 containerd[1466]: time="2025-09-05T00:09:08.549328458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549389 containerd[1466]: time="2025-09-05T00:09:08.549344328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549389 containerd[1466]: time="2025-09-05T00:09:08.549359266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549389 containerd[1466]: time="2025-09-05T00:09:08.549375156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549389903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549405042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549429458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549446349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549459544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549470324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549482106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549510 containerd[1466]: time="2025-09-05T00:09:08.549495672Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:09:08.549680 containerd[1466]: time="2025-09-05T00:09:08.549518905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549680 containerd[1466]: time="2025-09-05T00:09:08.549538182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549680 containerd[1466]: time="2025-09-05T00:09:08.549554993Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:09:08.549680 containerd[1466]: time="2025-09-05T00:09:08.549629914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:09:08.549680 containerd[1466]: time="2025-09-05T00:09:08.549655612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:09:08.549680 containerd[1466]: time="2025-09-05T00:09:08.549670099Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:09:08.549787 containerd[1466]: time="2025-09-05T00:09:08.549683614Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:09:08.549787 containerd[1466]: time="2025-09-05T00:09:08.549695296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.549787 containerd[1466]: time="2025-09-05T00:09:08.549716396Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:09:08.549787 containerd[1466]: time="2025-09-05T00:09:08.549729520Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:09:08.549787 containerd[1466]: time="2025-09-05T00:09:08.549744068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 00:09:08.550141 containerd[1466]: time="2025-09-05T00:09:08.550046485Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:09:08.550141 containerd[1466]: time="2025-09-05T00:09:08.550140301Z" level=info msg="Connect containerd service" Sep 5 00:09:08.550308 containerd[1466]: time="2025-09-05T00:09:08.550188722Z" level=info msg="using legacy CRI server" Sep 5 00:09:08.550308 containerd[1466]: time="2025-09-05T00:09:08.550204141Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:09:08.550308 containerd[1466]: time="2025-09-05T00:09:08.550299580Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:09:08.551995 containerd[1466]: time="2025-09-05T00:09:08.551959052Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:09:08.552284 containerd[1466]: time="2025-09-05T00:09:08.552181650Z" level=info msg="Start subscribing containerd event" Sep 5 00:09:08.552284 containerd[1466]: time="2025-09-05T00:09:08.552224100Z" level=info msg="Start recovering state" Sep 5 00:09:08.552943 containerd[1466]: time="2025-09-05T00:09:08.552378700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:09:08.552943 containerd[1466]: time="2025-09-05T00:09:08.552377838Z" level=info msg="Start event monitor" Sep 5 00:09:08.552943 containerd[1466]: time="2025-09-05T00:09:08.552439604Z" level=info msg="Start snapshots syncer" Sep 5 00:09:08.552943 containerd[1466]: time="2025-09-05T00:09:08.552448200Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:09:08.552943 containerd[1466]: time="2025-09-05T00:09:08.552459261Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:09:08.552943 containerd[1466]: time="2025-09-05T00:09:08.552470892Z" level=info msg="Start streaming server" Sep 5 00:09:08.552653 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:09:08.553187 containerd[1466]: time="2025-09-05T00:09:08.553171286Z" level=info msg="containerd successfully booted in 0.037298s" Sep 5 00:09:08.577544 tar[1464]: linux-amd64/README.md Sep 5 00:09:08.590961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:09:09.319318 systemd-networkd[1409]: eth0: Gained IPv6LL Sep 5 00:09:09.323410 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:09:09.325787 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:09:09.339424 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:09:09.342488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:09.345177 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:09:09.367485 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:09:09.367722 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:09:09.369634 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:09:09.371898 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
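The containerd entries above end with the daemon serving its CRI gRPC API on /run/containerd/containerd.sock (and warning that no CNI config exists yet under /etc/cni/net.d). As a hedged illustration of that endpoint, the Go sketch below issues a CRI Version call over the socket; it assumes the google.golang.org/grpc and k8s.io/cri-api modules and permission to read the socket, none of which are shown in the log.

    // Minimal sketch: query the CRI runtime that containerd exposes on
    // /run/containerd/containerd.sock (the address logged above).
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the unix socket containerd reports as its CRI endpoint.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("runtime: %s %s (CRI API %s)\n",
    		v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
    }

A roughly equivalent query can be made with the crictl CLI, e.g. crictl --runtime-endpoint unix:///run/containerd/containerd.sock version.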
Sep 5 00:09:10.735669 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:09:10.748342 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:46308.service - OpenSSH per-connection server daemon (10.0.0.1:46308). Sep 5 00:09:10.797838 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 46308 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:10.800744 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:10.809529 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:09:10.822331 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:09:10.826315 systemd-logind[1457]: New session 1 of user core. Sep 5 00:09:10.844735 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:09:10.860396 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:09:10.864873 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:09:11.026265 systemd[1555]: Queued start job for default target default.target. Sep 5 00:09:11.038105 systemd[1555]: Created slice app.slice - User Application Slice. Sep 5 00:09:11.038131 systemd[1555]: Reached target paths.target - Paths. Sep 5 00:09:11.038144 systemd[1555]: Reached target timers.target - Timers. Sep 5 00:09:11.040369 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:09:11.056991 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:09:11.057326 systemd[1555]: Reached target sockets.target - Sockets. Sep 5 00:09:11.057369 systemd[1555]: Reached target basic.target - Basic System. Sep 5 00:09:11.057434 systemd[1555]: Reached target default.target - Main User Target. Sep 5 00:09:11.057495 systemd[1555]: Startup finished in 172ms. Sep 5 00:09:11.057798 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:09:11.082254 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:09:11.121706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:11.123471 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:09:11.124859 systemd[1]: Startup finished in 1.205s (kernel) + 8.250s (initrd) + 5.552s (userspace) = 15.008s. Sep 5 00:09:11.127216 (kubelet)[1569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:09:11.143505 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:46310.service - OpenSSH per-connection server daemon (10.0.0.1:46310). Sep 5 00:09:11.187891 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 46310 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:11.215779 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:11.221378 systemd-logind[1457]: New session 2 of user core. Sep 5 00:09:11.230227 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:09:11.290077 sshd[1573]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:11.297134 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:46310.service: Deactivated successfully. Sep 5 00:09:11.299909 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:09:11.387745 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. 
Sep 5 00:09:11.399425 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:46322.service - OpenSSH per-connection server daemon (10.0.0.1:46322). Sep 5 00:09:11.400526 systemd-logind[1457]: Removed session 2. Sep 5 00:09:11.436542 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 46322 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:11.438593 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:11.443880 systemd-logind[1457]: New session 3 of user core. Sep 5 00:09:11.453229 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:09:11.507617 sshd[1588]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:11.526987 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:46322.service: Deactivated successfully. Sep 5 00:09:11.529420 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:09:11.531366 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:09:11.533400 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:46326.service - OpenSSH per-connection server daemon (10.0.0.1:46326). Sep 5 00:09:11.534497 systemd-logind[1457]: Removed session 3. Sep 5 00:09:11.574735 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 46326 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:11.616845 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:11.622970 systemd-logind[1457]: New session 4 of user core. Sep 5 00:09:11.630256 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:09:11.690384 sshd[1595]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:11.701023 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:46326.service: Deactivated successfully. Sep 5 00:09:11.703373 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:09:11.705552 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:09:11.707498 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:46328.service - OpenSSH per-connection server daemon (10.0.0.1:46328). Sep 5 00:09:11.708806 systemd-logind[1457]: Removed session 4. Sep 5 00:09:11.751259 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 46328 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:11.753164 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:11.758805 systemd-logind[1457]: New session 5 of user core. Sep 5 00:09:11.769225 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:09:11.869718 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:09:11.870092 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:11.890463 sudo[1606]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:11.892918 sshd[1603]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:11.902927 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:46328.service: Deactivated successfully. Sep 5 00:09:11.904693 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:09:11.906552 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:09:11.920762 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:46344.service - OpenSSH per-connection server daemon (10.0.0.1:46344). Sep 5 00:09:11.922247 systemd-logind[1457]: Removed session 5. 
Sep 5 00:09:11.962582 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 46344 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:11.965313 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:11.970295 systemd-logind[1457]: New session 6 of user core. Sep 5 00:09:11.971742 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:09:12.017843 kubelet[1569]: E0905 00:09:12.017698 1569 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:09:12.022930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:09:12.023243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:09:12.023658 systemd[1]: kubelet.service: Consumed 2.497s CPU time. Sep 5 00:09:12.030341 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:09:12.030687 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:12.034918 sudo[1617]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:12.041486 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:09:12.041817 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:12.063417 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:09:12.065116 auditctl[1620]: No rules Sep 5 00:09:12.065624 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:09:12.065922 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:09:12.069271 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:09:12.100846 augenrules[1638]: No rules Sep 5 00:09:12.102839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:09:12.104272 sudo[1615]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:12.106150 sshd[1612]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:12.118183 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:46344.service: Deactivated successfully. Sep 5 00:09:12.120145 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:09:12.121577 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:09:12.129382 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:46356.service - OpenSSH per-connection server daemon (10.0.0.1:46356). Sep 5 00:09:12.130365 systemd-logind[1457]: Removed session 6. Sep 5 00:09:12.165388 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 46356 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:09:12.167140 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:12.171189 systemd-logind[1457]: New session 7 of user core. Sep 5 00:09:12.180275 systemd[1]: Started session-7.scope - Session 7 of User core. 
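The kubelet failure recorded above (run.go:72, exit status 1) is simply the unit starting before /var/lib/kubelet/config.yaml has been written; systemd marks kubelet.service failed and keeps scheduling restarts until bootstrap (presumably kubeadm, given the KUBELET_KUBEADM_ARGS variable referenced by the unit) creates the file. A stdlib-only Go sketch of that precondition check, with the path taken from the log and everything else assumed:

    // Sketch: reproduce the precondition the kubelet log complains about,
    // namely that /var/lib/kubelet/config.yaml does not exist yet.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    func main() {
    	const path = "/var/lib/kubelet/config.yaml"
    	info, err := os.Stat(path)
    	switch {
    	case errors.Is(err, os.ErrNotExist):
    		// The state at 00:09:12 in the log: kubelet exits with status 1
    		// and systemd restarts it until the file appears.
    		fmt.Println(path, "is missing; kubelet will keep crash-looping")
    	case err != nil:
    		fmt.Println("stat failed:", err)
    	default:
    		fmt.Printf("%s present (%d bytes); kubelet should start\n", path, info.Size())
    	}
    }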
Sep 5 00:09:12.236034 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:09:12.236519 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:12.942408 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:09:12.942613 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:09:13.945673 dockerd[1667]: time="2025-09-05T00:09:13.945576437Z" level=info msg="Starting up" Sep 5 00:09:14.498455 dockerd[1667]: time="2025-09-05T00:09:14.498382443Z" level=info msg="Loading containers: start." Sep 5 00:09:14.639115 kernel: Initializing XFRM netlink socket Sep 5 00:09:14.767505 systemd-networkd[1409]: docker0: Link UP Sep 5 00:09:14.794895 dockerd[1667]: time="2025-09-05T00:09:14.794837077Z" level=info msg="Loading containers: done." Sep 5 00:09:14.822585 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3867695983-merged.mount: Deactivated successfully. Sep 5 00:09:14.823682 dockerd[1667]: time="2025-09-05T00:09:14.823624003Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:09:14.823823 dockerd[1667]: time="2025-09-05T00:09:14.823801917Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:09:14.824000 dockerd[1667]: time="2025-09-05T00:09:14.823976595Z" level=info msg="Daemon has completed initialization" Sep 5 00:09:14.874825 dockerd[1667]: time="2025-09-05T00:09:14.874685520Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:09:14.875689 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:09:16.030141 containerd[1466]: time="2025-09-05T00:09:16.029990302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 00:09:16.921888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939509826.mount: Deactivated successfully. 
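dockerd reports its API listening on /run/docker.sock once initialization completes. The stdlib-only Go sketch below calls the Engine API's /version endpoint over that unix socket; the socket path comes from the log, the rest is illustrative.

    // Sketch: talk to the Docker Engine API over /run/docker.sock,
    // the socket the dockerd log above reports as its API listener.
    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"log"
    	"net"
    	"net/http"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{
    			// Route every request over the unix socket instead of TCP.
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				var d net.Dialer
    				return d.DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    	}
    	// The host in the URL is ignored once DialContext pins the socket.
    	resp, err := client.Get("http://docker/version")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(string(body)) // JSON with Version, ApiVersion, etc.
    }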
Sep 5 00:09:18.385405 containerd[1466]: time="2025-09-05T00:09:18.385327751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:18.489428 containerd[1466]: time="2025-09-05T00:09:18.489294163Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 5 00:09:18.627044 containerd[1466]: time="2025-09-05T00:09:18.626967188Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:18.686338 containerd[1466]: time="2025-09-05T00:09:18.686166549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:18.687460 containerd[1466]: time="2025-09-05T00:09:18.687423196Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.657379835s" Sep 5 00:09:18.687460 containerd[1466]: time="2025-09-05T00:09:18.687466347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 5 00:09:18.688336 containerd[1466]: time="2025-09-05T00:09:18.688299820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 00:09:20.282244 containerd[1466]: time="2025-09-05T00:09:20.282171117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:20.283981 containerd[1466]: time="2025-09-05T00:09:20.283938533Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 5 00:09:20.285304 containerd[1466]: time="2025-09-05T00:09:20.285259691Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:20.288156 containerd[1466]: time="2025-09-05T00:09:20.288124965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:20.289045 containerd[1466]: time="2025-09-05T00:09:20.289017840Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.600679757s" Sep 5 00:09:20.289122 containerd[1466]: time="2025-09-05T00:09:20.289049860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 5 00:09:20.289937 containerd[1466]: 
time="2025-09-05T00:09:20.289902479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 00:09:22.273427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:09:22.284522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:22.541351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:22.545972 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:09:22.816266 containerd[1466]: time="2025-09-05T00:09:22.816145279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:22.817854 containerd[1466]: time="2025-09-05T00:09:22.817799131Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 5 00:09:22.819155 containerd[1466]: time="2025-09-05T00:09:22.819126651Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:22.822546 containerd[1466]: time="2025-09-05T00:09:22.822507342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:22.823588 containerd[1466]: time="2025-09-05T00:09:22.823533246Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.533596984s" Sep 5 00:09:22.823701 containerd[1466]: time="2025-09-05T00:09:22.823586947Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 5 00:09:22.824717 containerd[1466]: time="2025-09-05T00:09:22.824698953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 00:09:22.839750 kubelet[1884]: E0905 00:09:22.839696 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:09:22.846826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:09:22.847082 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:09:24.232843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204237576.mount: Deactivated successfully. 
Sep 5 00:09:25.264807 containerd[1466]: time="2025-09-05T00:09:25.264729132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:25.265742 containerd[1466]: time="2025-09-05T00:09:25.265701506Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 5 00:09:25.266933 containerd[1466]: time="2025-09-05T00:09:25.266894363Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:25.269120 containerd[1466]: time="2025-09-05T00:09:25.269032484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:25.269700 containerd[1466]: time="2025-09-05T00:09:25.269659821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.444933947s" Sep 5 00:09:25.269700 containerd[1466]: time="2025-09-05T00:09:25.269692502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 5 00:09:25.270201 containerd[1466]: time="2025-09-05T00:09:25.270152384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 00:09:25.887236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79818394.mount: Deactivated successfully. 
Sep 5 00:09:28.296142 containerd[1466]: time="2025-09-05T00:09:28.296057087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:28.301025 containerd[1466]: time="2025-09-05T00:09:28.300962969Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 5 00:09:28.303605 containerd[1466]: time="2025-09-05T00:09:28.303563567Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:28.307115 containerd[1466]: time="2025-09-05T00:09:28.307083770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:28.309200 containerd[1466]: time="2025-09-05T00:09:28.309147300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.038964359s" Sep 5 00:09:28.309247 containerd[1466]: time="2025-09-05T00:09:28.309195861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 5 00:09:28.309846 containerd[1466]: time="2025-09-05T00:09:28.309820613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:09:28.950888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590422662.mount: Deactivated successfully. 
Sep 5 00:09:28.958114 containerd[1466]: time="2025-09-05T00:09:28.958034367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:28.958956 containerd[1466]: time="2025-09-05T00:09:28.958875865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:09:28.960159 containerd[1466]: time="2025-09-05T00:09:28.960119949Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:28.963576 containerd[1466]: time="2025-09-05T00:09:28.963517241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:28.964681 containerd[1466]: time="2025-09-05T00:09:28.964646780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 654.798425ms" Sep 5 00:09:28.964753 containerd[1466]: time="2025-09-05T00:09:28.964685442Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:09:28.965346 containerd[1466]: time="2025-09-05T00:09:28.965236656Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:09:29.458813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279246916.mount: Deactivated successfully. Sep 5 00:09:32.081813 containerd[1466]: time="2025-09-05T00:09:32.081724467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:32.082815 containerd[1466]: time="2025-09-05T00:09:32.082742386Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 00:09:32.084162 containerd[1466]: time="2025-09-05T00:09:32.084120882Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:32.087382 containerd[1466]: time="2025-09-05T00:09:32.087346191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:32.089051 containerd[1466]: time="2025-09-05T00:09:32.088968975Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.123696452s" Sep 5 00:09:32.089104 containerd[1466]: time="2025-09-05T00:09:32.089052903Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 00:09:32.988840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
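Each containerd pull line above records both the image size and the wall-clock time, so effective throughput falls out directly; the etcd image, for instance, comes to roughly 19 MB/s. A small Go sketch of that arithmetic, using figures copied from the log:

    // Sketch: derive effective pull throughput from the values containerd
    // logs per image ("size ... in ..."), here for registry.k8s.io/etcd:3.5.21-0.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the "Pulled image" log line above.
    	const sizeBytes = 58938593
    	elapsed, _ := time.ParseDuration("3.123696452s")

    	bytesPerSec := float64(sizeBytes) / elapsed.Seconds()
    	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n",
    		bytesPerSec/1e6, bytesPerSec/(1<<20))
    }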
Sep 5 00:09:33.001274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:33.191056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:33.197364 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:09:33.285606 kubelet[2044]: E0905 00:09:33.285434 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:09:33.290253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:09:33.290476 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:09:35.084287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:35.095343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:35.119769 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Sep 5 00:09:35.119789 systemd[1]: Reloading... Sep 5 00:09:35.219092 zram_generator::config[2101]: No configuration found. Sep 5 00:09:35.432486 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:09:35.510930 systemd[1]: Reloading finished in 390 ms. Sep 5 00:09:35.565187 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:35.568306 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:09:35.568566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:35.570254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:35.744313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:35.749508 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:09:35.793044 kubelet[2148]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:09:35.793044 kubelet[2148]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:09:35.793044 kubelet[2148]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:09:35.793486 kubelet[2148]: I0905 00:09:35.793104 2148 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:09:36.680652 kubelet[2148]: I0905 00:09:36.680577 2148 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:09:36.680652 kubelet[2148]: I0905 00:09:36.680622 2148 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:09:36.680907 kubelet[2148]: I0905 00:09:36.680870 2148 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:09:36.704138 kubelet[2148]: I0905 00:09:36.704085 2148 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:09:36.705560 kubelet[2148]: E0905 00:09:36.705516 2148 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:09:36.714368 kubelet[2148]: E0905 00:09:36.714308 2148 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:09:36.714368 kubelet[2148]: I0905 00:09:36.714354 2148 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:09:36.720449 kubelet[2148]: I0905 00:09:36.720416 2148 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:09:36.720723 kubelet[2148]: I0905 00:09:36.720684 2148 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:09:36.720911 kubelet[2148]: I0905 00:09:36.720709 2148 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:09:36.720992 kubelet[2148]: I0905 00:09:36.720918 2148 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:09:36.720992 kubelet[2148]: I0905 00:09:36.720935 2148 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:09:36.721178 kubelet[2148]: I0905 00:09:36.721152 2148 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:09:36.723472 kubelet[2148]: I0905 00:09:36.723428 2148 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:09:36.723472 kubelet[2148]: I0905 00:09:36.723481 2148 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:09:36.723645 kubelet[2148]: I0905 00:09:36.723523 2148 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:09:36.727439 kubelet[2148]: I0905 00:09:36.727313 2148 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:09:36.730443 kubelet[2148]: E0905 00:09:36.730409 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:09:36.730533 kubelet[2148]: E0905 00:09:36.730409 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:09:36.732716 kubelet[2148]: 
I0905 00:09:36.732689 2148 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:09:36.733199 kubelet[2148]: I0905 00:09:36.733176 2148 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:09:36.733974 kubelet[2148]: W0905 00:09:36.733943 2148 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:09:36.737041 kubelet[2148]: I0905 00:09:36.737013 2148 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:09:36.737108 kubelet[2148]: I0905 00:09:36.737091 2148 server.go:1289] "Started kubelet" Sep 5 00:09:36.738163 kubelet[2148]: I0905 00:09:36.737549 2148 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:09:36.739919 kubelet[2148]: I0905 00:09:36.738492 2148 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:09:36.739919 kubelet[2148]: I0905 00:09:36.738496 2148 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:09:36.739919 kubelet[2148]: I0905 00:09:36.738656 2148 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:09:36.741762 kubelet[2148]: I0905 00:09:36.741297 2148 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:09:36.741844 kubelet[2148]: E0905 00:09:36.740733 2148 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a5d3bc4d5a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:09:36.737039778 +0000 UTC m=+0.977211851,LastTimestamp:2025-09-05 00:09:36.737039778 +0000 UTC m=+0.977211851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:09:36.742271 kubelet[2148]: I0905 00:09:36.742114 2148 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:09:36.742271 kubelet[2148]: I0905 00:09:36.742212 2148 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:09:36.742271 kubelet[2148]: I0905 00:09:36.742256 2148 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:09:36.742361 kubelet[2148]: I0905 00:09:36.742285 2148 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:09:36.742361 kubelet[2148]: I0905 00:09:36.742326 2148 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:09:36.742550 kubelet[2148]: I0905 00:09:36.742356 2148 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:09:36.742615 kubelet[2148]: E0905 00:09:36.742590 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:09:36.745128 kubelet[2148]: E0905 00:09:36.743842 2148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:09:36.745128 kubelet[2148]: E0905 00:09:36.744153 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Sep 5 00:09:36.745128 kubelet[2148]: E0905 00:09:36.744578 2148 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:09:36.745128 kubelet[2148]: I0905 00:09:36.744831 2148 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:09:36.760701 kubelet[2148]: I0905 00:09:36.760613 2148 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:09:36.760701 kubelet[2148]: I0905 00:09:36.760688 2148 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:09:36.760857 kubelet[2148]: I0905 00:09:36.760728 2148 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:09:36.844627 kubelet[2148]: E0905 00:09:36.844564 2148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:09:36.944902 kubelet[2148]: E0905 00:09:36.944711 2148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:09:36.945091 kubelet[2148]: E0905 00:09:36.944944 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Sep 5 00:09:37.045527 kubelet[2148]: E0905 00:09:37.045443 2148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:09:37.146211 kubelet[2148]: E0905 00:09:37.146113 2148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:09:37.216294 kubelet[2148]: I0905 00:09:37.216164 2148 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:09:37.217842 kubelet[2148]: I0905 00:09:37.217790 2148 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:09:37.217842 kubelet[2148]: I0905 00:09:37.217836 2148 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:09:37.217985 kubelet[2148]: I0905 00:09:37.217891 2148 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:09:37.217985 kubelet[2148]: I0905 00:09:37.217909 2148 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:09:37.218034 kubelet[2148]: E0905 00:09:37.217981 2148 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:09:37.218668 kubelet[2148]: E0905 00:09:37.218630 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:09:37.219285 kubelet[2148]: I0905 00:09:37.219268 2148 policy_none.go:49] "None policy: Start" Sep 5 00:09:37.219346 kubelet[2148]: I0905 00:09:37.219297 2148 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:09:37.219346 kubelet[2148]: I0905 00:09:37.219314 2148 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:09:37.226458 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:09:37.241720 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:09:37.245693 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:09:37.246394 kubelet[2148]: E0905 00:09:37.246276 2148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:09:37.258220 kubelet[2148]: E0905 00:09:37.258177 2148 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:09:37.258591 kubelet[2148]: I0905 00:09:37.258552 2148 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:09:37.258722 kubelet[2148]: I0905 00:09:37.258579 2148 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:09:37.259229 kubelet[2148]: I0905 00:09:37.258929 2148 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:09:37.259994 kubelet[2148]: E0905 00:09:37.259737 2148 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:09:37.259994 kubelet[2148]: E0905 00:09:37.259830 2148 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:09:37.330546 systemd[1]: Created slice kubepods-burstable-podd9d4508ab646321b970017e3ac6d2352.slice - libcontainer container kubepods-burstable-podd9d4508ab646321b970017e3ac6d2352.slice. 
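The kubepods slices created above reflect the kubelet's systemd cgroup driver ("CgroupDriver":"systemd" in the node config earlier): one parent slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice) and one slice per pod whose name embeds the pod UID with dashes mapped to underscores. A small sketch of that naming convention; the rule that Guaranteed pods sit directly under kubepods.slice is an assumption from general kubelet behaviour, not something this log shows:

def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    """Build the systemd slice name used for a pod's cgroup.

    '-' is a hierarchy separator in systemd slice names, so dashes in the
    pod UID are replaced with underscores, matching names like
    kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice later in this log.
    """
    uid = pod_uid.replace("-", "_")
    if qos_class == "Guaranteed":
        return f"kubepods-pod{uid}.slice"      # assumed: no per-QoS parent for Guaranteed pods
    return f"kubepods-{qos_class.lower()}-pod{uid}.slice"

print(pod_slice_name("fc3cdb4d-38d2-4705-b8b2-b93403e180e0", "Burstable"))
# kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice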
Sep 5 00:09:37.345426 kubelet[2148]: I0905 00:09:37.345394 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9d4508ab646321b970017e3ac6d2352-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9d4508ab646321b970017e3ac6d2352\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:37.345541 kubelet[2148]: I0905 00:09:37.345433 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:37.345541 kubelet[2148]: I0905 00:09:37.345464 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:37.345541 kubelet[2148]: I0905 00:09:37.345499 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:37.345541 kubelet[2148]: I0905 00:09:37.345526 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:37.345704 kubelet[2148]: I0905 00:09:37.345595 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9d4508ab646321b970017e3ac6d2352-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d4508ab646321b970017e3ac6d2352\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:37.345704 kubelet[2148]: I0905 00:09:37.345637 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9d4508ab646321b970017e3ac6d2352-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d4508ab646321b970017e3ac6d2352\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:37.345704 kubelet[2148]: I0905 00:09:37.345659 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:37.345704 kubelet[2148]: I0905 00:09:37.345674 2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " 
pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:37.345704 kubelet[2148]: E0905 00:09:37.345632 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Sep 5 00:09:37.348675 kubelet[2148]: E0905 00:09:37.348645 2148 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:09:37.351656 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 00:09:37.353368 kubelet[2148]: E0905 00:09:37.353346 2148 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:09:37.355091 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:09:37.356953 kubelet[2148]: E0905 00:09:37.356916 2148 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:09:37.359868 kubelet[2148]: I0905 00:09:37.359849 2148 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:09:37.360220 kubelet[2148]: E0905 00:09:37.360182 2148 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 5 00:09:37.562489 kubelet[2148]: I0905 00:09:37.562319 2148 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:09:37.562849 kubelet[2148]: E0905 00:09:37.562791 2148 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 5 00:09:37.628769 kubelet[2148]: E0905 00:09:37.628715 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:09:37.649716 kubelet[2148]: E0905 00:09:37.649663 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:37.650447 containerd[1466]: time="2025-09-05T00:09:37.650410680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9d4508ab646321b970017e3ac6d2352,Namespace:kube-system,Attempt:0,}" Sep 5 00:09:37.654740 kubelet[2148]: E0905 00:09:37.654700 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:37.655178 containerd[1466]: time="2025-09-05T00:09:37.655143277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:09:37.657651 
kubelet[2148]: E0905 00:09:37.657629 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:37.658052 containerd[1466]: time="2025-09-05T00:09:37.658008902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:09:37.797687 kubelet[2148]: E0905 00:09:37.797639 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:09:37.964979 kubelet[2148]: I0905 00:09:37.964818 2148 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:09:37.965547 kubelet[2148]: E0905 00:09:37.965194 2148 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Sep 5 00:09:38.097721 kubelet[2148]: E0905 00:09:38.097664 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:09:38.103701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910756105.mount: Deactivated successfully. Sep 5 00:09:38.110244 containerd[1466]: time="2025-09-05T00:09:38.110185667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:09:38.111060 containerd[1466]: time="2025-09-05T00:09:38.110990977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:09:38.114056 containerd[1466]: time="2025-09-05T00:09:38.114003819Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:09:38.115393 containerd[1466]: time="2025-09-05T00:09:38.115347178Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:09:38.116455 containerd[1466]: time="2025-09-05T00:09:38.116414069Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:09:38.117185 containerd[1466]: time="2025-09-05T00:09:38.117114624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:09:38.118113 containerd[1466]: time="2025-09-05T00:09:38.118081808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:09:38.118958 containerd[1466]: time="2025-09-05T00:09:38.118933946Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:09:38.119706 containerd[1466]: time="2025-09-05T00:09:38.119676148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.190096ms" Sep 5 00:09:38.123210 containerd[1466]: time="2025-09-05T00:09:38.123174630Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.073334ms" Sep 5 00:09:38.127901 containerd[1466]: time="2025-09-05T00:09:38.127852895Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.6573ms" Sep 5 00:09:38.134168 kubelet[2148]: E0905 00:09:38.134118 2148 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:09:38.146387 kubelet[2148]: E0905 00:09:38.146337 2148 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Sep 5 00:09:38.330470 containerd[1466]: time="2025-09-05T00:09:38.330263249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:09:38.330470 containerd[1466]: time="2025-09-05T00:09:38.330327660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:09:38.331164 containerd[1466]: time="2025-09-05T00:09:38.330362305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:38.332318 containerd[1466]: time="2025-09-05T00:09:38.332255216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:38.333187 containerd[1466]: time="2025-09-05T00:09:38.333061518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:09:38.333747 containerd[1466]: time="2025-09-05T00:09:38.333675510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:09:38.333838 containerd[1466]: time="2025-09-05T00:09:38.333734851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:38.334227 containerd[1466]: time="2025-09-05T00:09:38.334049431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:38.336317 containerd[1466]: time="2025-09-05T00:09:38.336243927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:09:38.336454 containerd[1466]: time="2025-09-05T00:09:38.336311875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:09:38.336454 containerd[1466]: time="2025-09-05T00:09:38.336348734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:38.336704 containerd[1466]: time="2025-09-05T00:09:38.336467086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:38.364254 systemd[1]: Started cri-containerd-ed1aae47acfd4c79ff44051909b035fae4be25f43db9fdb64f884a34587a919d.scope - libcontainer container ed1aae47acfd4c79ff44051909b035fae4be25f43db9fdb64f884a34587a919d. Sep 5 00:09:38.372540 systemd[1]: Started cri-containerd-cd9097a02253ce26f7894b0321c30d853a90e0ddb3217087504d5e99c1e60811.scope - libcontainer container cd9097a02253ce26f7894b0321c30d853a90e0ddb3217087504d5e99c1e60811. Sep 5 00:09:38.389235 systemd[1]: Started cri-containerd-e7c092e70e2e2a4fa00713e48fdea0795e1831adf95c4ee60be1bd3370dc279a.scope - libcontainer container e7c092e70e2e2a4fa00713e48fdea0795e1831adf95c4ee60be1bd3370dc279a. 
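The lease errors above show the controller backing off by doubling while 10.0.0.38:6443 stays unreachable: interval="200ms", then "400ms", "800ms", and "1.6s". A minimal sketch of that doubling schedule; the 200ms starting value is taken from the log, the cap is an arbitrary value for illustration:

from datetime import timedelta

def lease_retry_intervals(start=timedelta(milliseconds=200),
                          cap=timedelta(seconds=7), attempts=6):
    """Yield doubling retry intervals: 200ms, 400ms, 800ms, 1.6s, ... up to an assumed cap."""
    interval = start
    for _ in range(attempts):
        yield interval
        interval = min(interval * 2, cap)

print([f"{i.total_seconds():g}s" for i in lease_retry_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']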
Sep 5 00:09:38.433014 containerd[1466]: time="2025-09-05T00:09:38.432974657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9d4508ab646321b970017e3ac6d2352,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1aae47acfd4c79ff44051909b035fae4be25f43db9fdb64f884a34587a919d\"" Sep 5 00:09:38.434782 kubelet[2148]: E0905 00:09:38.434268 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:38.443491 containerd[1466]: time="2025-09-05T00:09:38.443452200Z" level=info msg="CreateContainer within sandbox \"ed1aae47acfd4c79ff44051909b035fae4be25f43db9fdb64f884a34587a919d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:09:38.445623 containerd[1466]: time="2025-09-05T00:09:38.445572988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd9097a02253ce26f7894b0321c30d853a90e0ddb3217087504d5e99c1e60811\"" Sep 5 00:09:38.448540 kubelet[2148]: E0905 00:09:38.448513 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:38.453049 containerd[1466]: time="2025-09-05T00:09:38.452933834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7c092e70e2e2a4fa00713e48fdea0795e1831adf95c4ee60be1bd3370dc279a\"" Sep 5 00:09:38.453829 kubelet[2148]: E0905 00:09:38.453694 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:38.455713 containerd[1466]: time="2025-09-05T00:09:38.455678483Z" level=info msg="CreateContainer within sandbox \"cd9097a02253ce26f7894b0321c30d853a90e0ddb3217087504d5e99c1e60811\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:09:38.458867 containerd[1466]: time="2025-09-05T00:09:38.458828341Z" level=info msg="CreateContainer within sandbox \"e7c092e70e2e2a4fa00713e48fdea0795e1831adf95c4ee60be1bd3370dc279a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:09:38.468381 containerd[1466]: time="2025-09-05T00:09:38.468327318Z" level=info msg="CreateContainer within sandbox \"ed1aae47acfd4c79ff44051909b035fae4be25f43db9fdb64f884a34587a919d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d714f4f9e5222687cc129cf2473867131516f4acf3206f6b6be986c69e42411b\"" Sep 5 00:09:38.469042 containerd[1466]: time="2025-09-05T00:09:38.469003987Z" level=info msg="StartContainer for \"d714f4f9e5222687cc129cf2473867131516f4acf3206f6b6be986c69e42411b\"" Sep 5 00:09:38.480143 containerd[1466]: time="2025-09-05T00:09:38.480098127Z" level=info msg="CreateContainer within sandbox \"cd9097a02253ce26f7894b0321c30d853a90e0ddb3217087504d5e99c1e60811\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5322a3f562ae546cbbe67669c7799fa91b327fd3becbd6e52d48d031c422b41b\"" Sep 5 00:09:38.482435 containerd[1466]: time="2025-09-05T00:09:38.482304265Z" level=info msg="StartContainer for \"5322a3f562ae546cbbe67669c7799fa91b327fd3becbd6e52d48d031c422b41b\"" Sep 5 00:09:38.486270 
containerd[1466]: time="2025-09-05T00:09:38.486225039Z" level=info msg="CreateContainer within sandbox \"e7c092e70e2e2a4fa00713e48fdea0795e1831adf95c4ee60be1bd3370dc279a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15cfb19852e251d316942e05521c18db04eaf39b71b5ad0962c57e269562fe8f\"" Sep 5 00:09:38.486799 containerd[1466]: time="2025-09-05T00:09:38.486774570Z" level=info msg="StartContainer for \"15cfb19852e251d316942e05521c18db04eaf39b71b5ad0962c57e269562fe8f\"" Sep 5 00:09:38.515339 systemd[1]: Started cri-containerd-d714f4f9e5222687cc129cf2473867131516f4acf3206f6b6be986c69e42411b.scope - libcontainer container d714f4f9e5222687cc129cf2473867131516f4acf3206f6b6be986c69e42411b. Sep 5 00:09:38.521002 systemd[1]: Started cri-containerd-5322a3f562ae546cbbe67669c7799fa91b327fd3becbd6e52d48d031c422b41b.scope - libcontainer container 5322a3f562ae546cbbe67669c7799fa91b327fd3becbd6e52d48d031c422b41b. Sep 5 00:09:38.549245 systemd[1]: Started cri-containerd-15cfb19852e251d316942e05521c18db04eaf39b71b5ad0962c57e269562fe8f.scope - libcontainer container 15cfb19852e251d316942e05521c18db04eaf39b71b5ad0962c57e269562fe8f. Sep 5 00:09:38.575104 containerd[1466]: time="2025-09-05T00:09:38.574946946Z" level=info msg="StartContainer for \"d714f4f9e5222687cc129cf2473867131516f4acf3206f6b6be986c69e42411b\" returns successfully" Sep 5 00:09:38.597564 containerd[1466]: time="2025-09-05T00:09:38.595910167Z" level=info msg="StartContainer for \"5322a3f562ae546cbbe67669c7799fa91b327fd3becbd6e52d48d031c422b41b\" returns successfully" Sep 5 00:09:38.639662 containerd[1466]: time="2025-09-05T00:09:38.639594817Z" level=info msg="StartContainer for \"15cfb19852e251d316942e05521c18db04eaf39b71b5ad0962c57e269562fe8f\" returns successfully" Sep 5 00:09:38.768049 kubelet[2148]: I0905 00:09:38.767841 2148 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:09:39.230469 kubelet[2148]: E0905 00:09:39.230402 2148 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:09:39.231087 kubelet[2148]: E0905 00:09:39.230692 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:39.231770 kubelet[2148]: E0905 00:09:39.231732 2148 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:09:39.231960 kubelet[2148]: E0905 00:09:39.231921 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:39.235337 kubelet[2148]: E0905 00:09:39.235318 2148 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:09:39.235474 kubelet[2148]: E0905 00:09:39.235445 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:40.045622 kubelet[2148]: E0905 00:09:40.045561 2148 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:09:40.187782 kubelet[2148]: I0905 00:09:40.187735 2148 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:09:40.236248 kubelet[2148]: I0905 00:09:40.236216 2148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:40.236915 kubelet[2148]: I0905 00:09:40.236216 2148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:40.238553 kubelet[2148]: I0905 00:09:40.236342 2148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:40.244923 kubelet[2148]: I0905 00:09:40.244898 2148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:40.245972 kubelet[2148]: E0905 00:09:40.245951 2148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:40.246257 kubelet[2148]: E0905 00:09:40.246233 2148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:40.246393 kubelet[2148]: E0905 00:09:40.246081 2148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:40.246698 kubelet[2148]: E0905 00:09:40.246601 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:40.246698 kubelet[2148]: E0905 00:09:40.246494 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:40.246698 kubelet[2148]: E0905 00:09:40.246390 2148 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:40.247828 kubelet[2148]: E0905 00:09:40.247688 2148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:40.247828 kubelet[2148]: I0905 00:09:40.247706 2148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:40.248960 kubelet[2148]: E0905 00:09:40.248916 2148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:40.248960 kubelet[2148]: I0905 00:09:40.248938 2148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:40.250235 kubelet[2148]: E0905 00:09:40.250206 2148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:40.729528 kubelet[2148]: I0905 00:09:40.729459 2148 apiserver.go:52] "Watching apiserver" Sep 5 00:09:40.743211 
kubelet[2148]: I0905 00:09:40.743108 2148 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:09:42.212434 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-7.scope)... Sep 5 00:09:42.212453 systemd[1]: Reloading... Sep 5 00:09:42.306111 zram_generator::config[2480]: No configuration found. Sep 5 00:09:42.429607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:09:42.523644 systemd[1]: Reloading finished in 310 ms. Sep 5 00:09:42.569010 kubelet[2148]: I0905 00:09:42.568941 2148 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:09:42.569312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:42.600996 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:09:42.601354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:42.601414 systemd[1]: kubelet.service: Consumed 1.589s CPU time, 135.2M memory peak, 0B memory swap peak. Sep 5 00:09:42.611278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:42.790646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:42.796154 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:09:42.841243 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:09:42.841243 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:09:42.841243 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
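The restart above is brisk: systemd starts stopping the old kubelet (PID 2148) at 00:09:42.569 and reports the replacement (PID 2525) started at 00:09:42.790. A quick sketch for measuring such gaps directly from the journal prefixes, assuming the "Sep 5 HH:MM:SS.ffffff" form used throughout this log and a fixed year:

from datetime import datetime

def journal_time(stamp, year=2025):
    """Parse the 'Sep 5 00:09:42.569312' prefix that opens every journal record here."""
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

stopping = journal_time("Sep 5 00:09:42.569312")  # "Stopping kubelet.service ..."
started = journal_time("Sep 5 00:09:42.790646")   # "Started kubelet.service ..."
print((started - stopping).total_seconds())       # ~0.221 s of kubelet downtime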
Sep 5 00:09:42.841243 kubelet[2525]: I0905 00:09:42.841180 2525 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:09:42.848084 kubelet[2525]: I0905 00:09:42.848042 2525 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:09:42.848084 kubelet[2525]: I0905 00:09:42.848078 2525 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:09:42.848305 kubelet[2525]: I0905 00:09:42.848281 2525 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:09:42.849679 kubelet[2525]: I0905 00:09:42.849647 2525 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:09:42.853908 kubelet[2525]: I0905 00:09:42.853869 2525 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:09:42.857054 kubelet[2525]: E0905 00:09:42.857027 2525 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:09:42.857117 kubelet[2525]: I0905 00:09:42.857056 2525 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:09:42.862754 kubelet[2525]: I0905 00:09:42.862714 2525 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:09:42.863087 kubelet[2525]: I0905 00:09:42.863035 2525 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:09:42.863349 kubelet[2525]: I0905 00:09:42.863101 2525 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:09:42.863444 kubelet[2525]: I0905 00:09:42.863356 2525 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 5 00:09:42.863444 kubelet[2525]: I0905 00:09:42.863365 2525 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:09:42.863444 kubelet[2525]: I0905 00:09:42.863428 2525 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:09:42.863671 kubelet[2525]: I0905 00:09:42.863648 2525 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:09:42.863704 kubelet[2525]: I0905 00:09:42.863692 2525 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:09:42.863734 kubelet[2525]: I0905 00:09:42.863720 2525 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:09:42.863813 kubelet[2525]: I0905 00:09:42.863790 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:09:42.866861 kubelet[2525]: I0905 00:09:42.865750 2525 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:09:42.866861 kubelet[2525]: I0905 00:09:42.866541 2525 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:09:42.877802 kubelet[2525]: I0905 00:09:42.873487 2525 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:09:42.877802 kubelet[2525]: I0905 00:09:42.873552 2525 server.go:1289] "Started kubelet" Sep 5 00:09:42.877802 kubelet[2525]: I0905 00:09:42.874464 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:09:42.877802 kubelet[2525]: I0905 00:09:42.874855 2525 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:09:42.877802 kubelet[2525]: I0905 00:09:42.874913 2525 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:09:42.880605 kubelet[2525]: I0905 00:09:42.880559 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:09:42.880837 kubelet[2525]: I0905 00:09:42.880818 2525 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:09:42.881417 kubelet[2525]: I0905 00:09:42.881391 2525 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:09:42.882524 kubelet[2525]: I0905 00:09:42.882480 2525 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:09:42.882821 kubelet[2525]: I0905 00:09:42.882802 2525 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:09:42.883175 kubelet[2525]: I0905 00:09:42.880877 2525 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:09:42.884678 kubelet[2525]: I0905 00:09:42.884616 2525 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:09:42.887797 kubelet[2525]: I0905 00:09:42.886531 2525 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:09:42.888569 kubelet[2525]: I0905 00:09:42.888545 2525 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:09:42.890330 kubelet[2525]: E0905 00:09:42.890267 2525 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:09:42.896968 kubelet[2525]: I0905 00:09:42.896916 2525 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:09:42.907061 kubelet[2525]: I0905 00:09:42.906674 2525 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:09:42.907061 kubelet[2525]: I0905 00:09:42.906710 2525 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:09:42.907061 kubelet[2525]: I0905 00:09:42.906732 2525 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:09:42.907061 kubelet[2525]: I0905 00:09:42.906740 2525 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:09:42.907061 kubelet[2525]: E0905 00:09:42.906799 2525 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:09:42.933549 kubelet[2525]: I0905 00:09:42.933508 2525 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:09:42.933549 kubelet[2525]: I0905 00:09:42.933526 2525 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:09:42.933549 kubelet[2525]: I0905 00:09:42.933554 2525 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:09:42.933751 kubelet[2525]: I0905 00:09:42.933691 2525 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:09:42.933751 kubelet[2525]: I0905 00:09:42.933704 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:09:42.933751 kubelet[2525]: I0905 00:09:42.933735 2525 policy_none.go:49] "None policy: Start" Sep 5 00:09:42.933751 kubelet[2525]: I0905 00:09:42.933744 2525 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:09:42.933751 kubelet[2525]: I0905 00:09:42.933754 2525 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:09:42.933904 kubelet[2525]: I0905 00:09:42.933858 2525 state_mem.go:75] "Updated machine memory state" Sep 5 00:09:42.938769 kubelet[2525]: E0905 00:09:42.938733 2525 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:09:42.939227 kubelet[2525]: I0905 00:09:42.939020 2525 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:09:42.939227 kubelet[2525]: I0905 00:09:42.939054 2525 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:09:42.939496 kubelet[2525]: I0905 00:09:42.939400 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:09:42.942186 kubelet[2525]: E0905 00:09:42.941279 2525 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:09:43.008898 kubelet[2525]: I0905 00:09:43.008815 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:43.009083 kubelet[2525]: I0905 00:09:43.008837 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:43.009178 kubelet[2525]: I0905 00:09:43.008962 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:43.049469 kubelet[2525]: I0905 00:09:43.049332 2525 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:09:43.057559 kubelet[2525]: I0905 00:09:43.057506 2525 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:09:43.057681 kubelet[2525]: I0905 00:09:43.057650 2525 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:09:43.183717 kubelet[2525]: I0905 00:09:43.183664 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:43.183841 kubelet[2525]: I0905 00:09:43.183720 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:43.183841 kubelet[2525]: I0905 00:09:43.183770 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:43.183841 kubelet[2525]: I0905 00:09:43.183834 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:43.183946 kubelet[2525]: I0905 00:09:43.183860 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9d4508ab646321b970017e3ac6d2352-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d4508ab646321b970017e3ac6d2352\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:43.183946 kubelet[2525]: I0905 00:09:43.183905 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:09:43.183946 kubelet[2525]: I0905 00:09:43.183940 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:43.184130 kubelet[2525]: I0905 00:09:43.184054 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9d4508ab646321b970017e3ac6d2352-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d4508ab646321b970017e3ac6d2352\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:43.184193 kubelet[2525]: I0905 00:09:43.184148 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9d4508ab646321b970017e3ac6d2352-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9d4508ab646321b970017e3ac6d2352\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:09:43.203313 sudo[2566]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 00:09:43.203829 sudo[2566]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 00:09:43.315857 kubelet[2525]: E0905 00:09:43.315687 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:43.316005 kubelet[2525]: E0905 00:09:43.315924 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:43.316573 kubelet[2525]: E0905 00:09:43.316391 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:43.865879 kubelet[2525]: I0905 00:09:43.865801 2525 apiserver.go:52] "Watching apiserver" Sep 5 00:09:43.883536 kubelet[2525]: I0905 00:09:43.883458 2525 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:09:43.891769 sudo[2566]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:43.925150 kubelet[2525]: E0905 00:09:43.923981 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:43.925150 kubelet[2525]: I0905 00:09:43.924598 2525 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:43.925150 kubelet[2525]: E0905 00:09:43.924988 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:43.930112 kubelet[2525]: I0905 00:09:43.929127 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.929098867 podStartE2EDuration="929.098867ms" podCreationTimestamp="2025-09-05 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:09:43.928837235 +0000 UTC m=+1.126823870" watchObservedRunningTime="2025-09-05 00:09:43.929098867 +0000 UTC m=+1.127085502" Sep 5 00:09:43.937586 kubelet[2525]: E0905 00:09:43.937537 2525 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:09:43.938507 kubelet[2525]: I0905 00:09:43.938002 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.937988631 podStartE2EDuration="937.988631ms" podCreationTimestamp="2025-09-05 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:09:43.937392577 +0000 UTC m=+1.135379212" watchObservedRunningTime="2025-09-05 00:09:43.937988631 +0000 UTC m=+1.135975266" Sep 5 00:09:43.939423 kubelet[2525]: E0905 00:09:43.939378 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:43.958710 kubelet[2525]: I0905 00:09:43.958427 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.958406829 podStartE2EDuration="958.406829ms" podCreationTimestamp="2025-09-05 00:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:09:43.948725217 +0000 UTC m=+1.146711852" watchObservedRunningTime="2025-09-05 00:09:43.958406829 +0000 UTC m=+1.156393464" Sep 5 00:09:44.924838 kubelet[2525]: E0905 00:09:44.924798 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:44.925407 kubelet[2525]: E0905 00:09:44.925385 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:45.295987 sudo[1649]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:45.298289 sshd[1646]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:45.303428 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:46356.service: Deactivated successfully. Sep 5 00:09:45.306560 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:09:45.306818 systemd[1]: session-7.scope: Consumed 5.661s CPU time, 163.5M memory peak, 0B memory swap peak. Sep 5 00:09:45.307841 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:09:45.308962 systemd-logind[1457]: Removed session 7. Sep 5 00:09:46.585265 kubelet[2525]: E0905 00:09:46.585204 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:48.703858 kubelet[2525]: I0905 00:09:48.703818 2525 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:09:48.704460 kubelet[2525]: I0905 00:09:48.704316 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:09:48.704492 containerd[1466]: time="2025-09-05T00:09:48.704119154Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 00:09:49.400239 systemd[1]: Created slice kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice - libcontainer container kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice. Sep 5 00:09:49.408390 systemd[1]: Created slice kubepods-besteffort-podae0bbfca_7a73_4b82_9218_fa7110777573.slice - libcontainer container kubepods-besteffort-podae0bbfca_7a73_4b82_9218_fa7110777573.slice. Sep 5 00:09:49.425171 kubelet[2525]: I0905 00:09:49.425129 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-xtables-lock\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425306 kubelet[2525]: I0905 00:09:49.425176 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0bbfca-7a73-4b82-9218-fa7110777573-xtables-lock\") pod \"kube-proxy-5xxjn\" (UID: \"ae0bbfca-7a73-4b82-9218-fa7110777573\") " pod="kube-system/kube-proxy-5xxjn" Sep 5 00:09:49.425306 kubelet[2525]: I0905 00:09:49.425199 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9b6\" (UniqueName: \"kubernetes.io/projected/ae0bbfca-7a73-4b82-9218-fa7110777573-kube-api-access-cl9b6\") pod \"kube-proxy-5xxjn\" (UID: \"ae0bbfca-7a73-4b82-9218-fa7110777573\") " pod="kube-system/kube-proxy-5xxjn" Sep 5 00:09:49.425306 kubelet[2525]: I0905 00:09:49.425216 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-bpf-maps\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425306 kubelet[2525]: I0905 00:09:49.425229 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hostproc\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425306 kubelet[2525]: I0905 00:09:49.425260 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cni-path\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425429 kubelet[2525]: I0905 00:09:49.425335 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-lib-modules\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425429 kubelet[2525]: I0905 00:09:49.425353 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-clustermesh-secrets\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425429 kubelet[2525]: I0905 00:09:49.425369 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-config-path\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425429 kubelet[2525]: I0905 00:09:49.425424 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-net\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425522 kubelet[2525]: I0905 00:09:49.425439 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-cgroup\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425522 kubelet[2525]: I0905 00:09:49.425452 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-etc-cni-netd\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425522 kubelet[2525]: I0905 00:09:49.425508 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hslp\" (UniqueName: \"kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-kube-api-access-7hslp\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425597 kubelet[2525]: I0905 00:09:49.425526 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae0bbfca-7a73-4b82-9218-fa7110777573-kube-proxy\") pod \"kube-proxy-5xxjn\" (UID: \"ae0bbfca-7a73-4b82-9218-fa7110777573\") " pod="kube-system/kube-proxy-5xxjn" Sep 5 00:09:49.425597 kubelet[2525]: I0905 00:09:49.425542 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-kernel\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425597 kubelet[2525]: I0905 00:09:49.425574 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hubble-tls\") pod \"cilium-6tdz5\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.425597 kubelet[2525]: I0905 00:09:49.425592 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0bbfca-7a73-4b82-9218-fa7110777573-lib-modules\") pod \"kube-proxy-5xxjn\" (UID: \"ae0bbfca-7a73-4b82-9218-fa7110777573\") " pod="kube-system/kube-proxy-5xxjn" Sep 5 00:09:49.425725 kubelet[2525]: I0905 00:09:49.425671 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-run\") pod \"cilium-6tdz5\" (UID: 
\"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " pod="kube-system/cilium-6tdz5" Sep 5 00:09:49.536374 kubelet[2525]: E0905 00:09:49.535537 2525 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:09:49.536374 kubelet[2525]: E0905 00:09:49.535582 2525 projected.go:194] Error preparing data for projected volume kube-api-access-7hslp for pod kube-system/cilium-6tdz5: configmap "kube-root-ca.crt" not found Sep 5 00:09:49.536374 kubelet[2525]: E0905 00:09:49.535674 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-kube-api-access-7hslp podName:fc3cdb4d-38d2-4705-b8b2-b93403e180e0 nodeName:}" failed. No retries permitted until 2025-09-05 00:09:50.035637134 +0000 UTC m=+7.233623769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7hslp" (UniqueName: "kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-kube-api-access-7hslp") pod "cilium-6tdz5" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0") : configmap "kube-root-ca.crt" not found Sep 5 00:09:49.538338 kubelet[2525]: E0905 00:09:49.538215 2525 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:09:49.538338 kubelet[2525]: E0905 00:09:49.538244 2525 projected.go:194] Error preparing data for projected volume kube-api-access-cl9b6 for pod kube-system/kube-proxy-5xxjn: configmap "kube-root-ca.crt" not found Sep 5 00:09:49.538338 kubelet[2525]: E0905 00:09:49.538308 2525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae0bbfca-7a73-4b82-9218-fa7110777573-kube-api-access-cl9b6 podName:ae0bbfca-7a73-4b82-9218-fa7110777573 nodeName:}" failed. No retries permitted until 2025-09-05 00:09:50.038273385 +0000 UTC m=+7.236260020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cl9b6" (UniqueName: "kubernetes.io/projected/ae0bbfca-7a73-4b82-9218-fa7110777573-kube-api-access-cl9b6") pod "kube-proxy-5xxjn" (UID: "ae0bbfca-7a73-4b82-9218-fa7110777573") : configmap "kube-root-ca.crt" not found Sep 5 00:09:49.884657 systemd[1]: Created slice kubepods-besteffort-podad614057_908d_4d5e_8641_8907f6534e2c.slice - libcontainer container kubepods-besteffort-podad614057_908d_4d5e_8641_8907f6534e2c.slice. 
Sep 5 00:09:49.928338 kubelet[2525]: I0905 00:09:49.928270 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad614057-908d-4d5e-8641-8907f6534e2c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2gs76\" (UID: \"ad614057-908d-4d5e-8641-8907f6534e2c\") " pod="kube-system/cilium-operator-6c4d7847fc-2gs76" Sep 5 00:09:49.928338 kubelet[2525]: I0905 00:09:49.928330 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgq8q\" (UniqueName: \"kubernetes.io/projected/ad614057-908d-4d5e-8641-8907f6534e2c-kube-api-access-fgq8q\") pod \"cilium-operator-6c4d7847fc-2gs76\" (UID: \"ad614057-908d-4d5e-8641-8907f6534e2c\") " pod="kube-system/cilium-operator-6c4d7847fc-2gs76" Sep 5 00:09:50.116636 kubelet[2525]: E0905 00:09:50.116582 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.188365 kubelet[2525]: E0905 00:09:50.188213 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.188995 containerd[1466]: time="2025-09-05T00:09:50.188941488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2gs76,Uid:ad614057-908d-4d5e-8641-8907f6534e2c,Namespace:kube-system,Attempt:0,}" Sep 5 00:09:50.215481 containerd[1466]: time="2025-09-05T00:09:50.214832725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:09:50.215481 containerd[1466]: time="2025-09-05T00:09:50.215445351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:09:50.215481 containerd[1466]: time="2025-09-05T00:09:50.215456782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:50.215797 containerd[1466]: time="2025-09-05T00:09:50.215541063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:50.238218 systemd[1]: Started cri-containerd-3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1.scope - libcontainer container 3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1. 
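The recurring dns.go "Nameserver limits exceeded" errors indicate the node's resolv.conf lists more nameservers than the kubelet will pass through; the applied line in the log keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8). A small sketch that mimics that trimming on a hypothetical resolv.conf (illustrative only, not kubelet code; the three-server limit is read off the log itself):

    # Illustrative only: keep the first three nameservers and warn about the rest,
    # as the dns.go entries above suggest the kubelet does.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [line.split()[1]
                   for line in resolv_conf_text.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        if len(servers) > MAX_NAMESERVERS:
            print(f"warning: {len(servers) - MAX_NAMESERVERS} nameserver(s) omitted")
        return servers[:MAX_NAMESERVERS]

    example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']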
Sep 5 00:09:50.275844 containerd[1466]: time="2025-09-05T00:09:50.275788954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2gs76,Uid:ad614057-908d-4d5e-8641-8907f6534e2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\"" Sep 5 00:09:50.276671 kubelet[2525]: E0905 00:09:50.276637 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.277593 containerd[1466]: time="2025-09-05T00:09:50.277554203Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 00:09:50.306401 kubelet[2525]: E0905 00:09:50.306354 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.307044 containerd[1466]: time="2025-09-05T00:09:50.306993341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6tdz5,Uid:fc3cdb4d-38d2-4705-b8b2-b93403e180e0,Namespace:kube-system,Attempt:0,}" Sep 5 00:09:50.315952 kubelet[2525]: E0905 00:09:50.315830 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.319227 containerd[1466]: time="2025-09-05T00:09:50.319180431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xxjn,Uid:ae0bbfca-7a73-4b82-9218-fa7110777573,Namespace:kube-system,Attempt:0,}" Sep 5 00:09:50.334702 containerd[1466]: time="2025-09-05T00:09:50.334607917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:09:50.334702 containerd[1466]: time="2025-09-05T00:09:50.334676838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:09:50.334702 containerd[1466]: time="2025-09-05T00:09:50.334688179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:50.334887 containerd[1466]: time="2025-09-05T00:09:50.334766158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:50.356237 systemd[1]: Started cri-containerd-cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3.scope - libcontainer container cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3. Sep 5 00:09:50.378057 containerd[1466]: time="2025-09-05T00:09:50.378016896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6tdz5,Uid:fc3cdb4d-38d2-4705-b8b2-b93403e180e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\"" Sep 5 00:09:50.378698 kubelet[2525]: E0905 00:09:50.378638 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.439506 containerd[1466]: time="2025-09-05T00:09:50.439319766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:09:50.440270 containerd[1466]: time="2025-09-05T00:09:50.440004468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:09:50.440270 containerd[1466]: time="2025-09-05T00:09:50.440030037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:50.440372 containerd[1466]: time="2025-09-05T00:09:50.440268290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:09:50.463373 systemd[1]: Started cri-containerd-73ef19de0bffdd21e1ccdc64903dda9ccd74f4b476dac96d9375643c7b685a8f.scope - libcontainer container 73ef19de0bffdd21e1ccdc64903dda9ccd74f4b476dac96d9375643c7b685a8f. Sep 5 00:09:50.486154 containerd[1466]: time="2025-09-05T00:09:50.486098477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xxjn,Uid:ae0bbfca-7a73-4b82-9218-fa7110777573,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ef19de0bffdd21e1ccdc64903dda9ccd74f4b476dac96d9375643c7b685a8f\"" Sep 5 00:09:50.486866 kubelet[2525]: E0905 00:09:50.486835 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.492442 containerd[1466]: time="2025-09-05T00:09:50.492398484Z" level=info msg="CreateContainer within sandbox \"73ef19de0bffdd21e1ccdc64903dda9ccd74f4b476dac96d9375643c7b685a8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:09:50.509175 containerd[1466]: time="2025-09-05T00:09:50.509117980Z" level=info msg="CreateContainer within sandbox \"73ef19de0bffdd21e1ccdc64903dda9ccd74f4b476dac96d9375643c7b685a8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20244d19fac66d58bbd8030a1afd200ea9ccaddfef7b2264fcba366fe4c72c97\"" Sep 5 00:09:50.509765 containerd[1466]: time="2025-09-05T00:09:50.509735534Z" level=info msg="StartContainer for \"20244d19fac66d58bbd8030a1afd200ea9ccaddfef7b2264fcba366fe4c72c97\"" Sep 5 00:09:50.541203 systemd[1]: Started cri-containerd-20244d19fac66d58bbd8030a1afd200ea9ccaddfef7b2264fcba366fe4c72c97.scope - libcontainer container 20244d19fac66d58bbd8030a1afd200ea9ccaddfef7b2264fcba366fe4c72c97. 
Sep 5 00:09:50.571773 containerd[1466]: time="2025-09-05T00:09:50.571729709Z" level=info msg="StartContainer for \"20244d19fac66d58bbd8030a1afd200ea9ccaddfef7b2264fcba366fe4c72c97\" returns successfully" Sep 5 00:09:50.941639 kubelet[2525]: E0905 00:09:50.941475 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.942381 kubelet[2525]: E0905 00:09:50.942113 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:50.954273 kubelet[2525]: I0905 00:09:50.954202 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5xxjn" podStartSLOduration=1.954183191 podStartE2EDuration="1.954183191s" podCreationTimestamp="2025-09-05 00:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:09:50.953930139 +0000 UTC m=+8.151916775" watchObservedRunningTime="2025-09-05 00:09:50.954183191 +0000 UTC m=+8.152169826" Sep 5 00:09:51.949492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957436514.mount: Deactivated successfully. Sep 5 00:09:52.382976 containerd[1466]: time="2025-09-05T00:09:52.382904665Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:52.383825 containerd[1466]: time="2025-09-05T00:09:52.383766622Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 5 00:09:52.385149 containerd[1466]: time="2025-09-05T00:09:52.385124121Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:52.386452 containerd[1466]: time="2025-09-05T00:09:52.386425343Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.108833789s" Sep 5 00:09:52.386506 containerd[1466]: time="2025-09-05T00:09:52.386456331Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 5 00:09:52.388198 containerd[1466]: time="2025-09-05T00:09:52.388148626Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 00:09:52.391659 containerd[1466]: time="2025-09-05T00:09:52.391608097Z" level=info msg="CreateContainer within sandbox \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 00:09:52.406011 containerd[1466]: time="2025-09-05T00:09:52.405969657Z" level=info msg="CreateContainer within sandbox 
\"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\"" Sep 5 00:09:52.406397 containerd[1466]: time="2025-09-05T00:09:52.406366612Z" level=info msg="StartContainer for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\"" Sep 5 00:09:52.438251 systemd[1]: Started cri-containerd-8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273.scope - libcontainer container 8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273. Sep 5 00:09:52.464950 containerd[1466]: time="2025-09-05T00:09:52.464909105Z" level=info msg="StartContainer for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" returns successfully" Sep 5 00:09:52.948567 kubelet[2525]: E0905 00:09:52.948515 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:53.001958 kubelet[2525]: E0905 00:09:53.001905 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:53.020929 kubelet[2525]: I0905 00:09:53.020733 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2gs76" podStartSLOduration=1.9104873580000001 podStartE2EDuration="4.020706849s" podCreationTimestamp="2025-09-05 00:09:49 +0000 UTC" firstStartedPulling="2025-09-05 00:09:50.277250626 +0000 UTC m=+7.475237261" lastFinishedPulling="2025-09-05 00:09:52.387470117 +0000 UTC m=+9.585456752" observedRunningTime="2025-09-05 00:09:52.974346845 +0000 UTC m=+10.172333480" watchObservedRunningTime="2025-09-05 00:09:53.020706849 +0000 UTC m=+10.218693484" Sep 5 00:09:53.120315 update_engine[1461]: I20250905 00:09:53.117111 1461 update_attempter.cc:509] Updating boot flags... Sep 5 00:09:53.169155 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2965) Sep 5 00:09:53.213772 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2966) Sep 5 00:09:53.951105 kubelet[2525]: E0905 00:09:53.950243 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:53.951105 kubelet[2525]: E0905 00:09:53.950700 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:54.952224 kubelet[2525]: E0905 00:09:54.952184 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:56.602552 kubelet[2525]: E0905 00:09:56.602490 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:09:57.852633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295949024.mount: Deactivated successfully. 
Sep 5 00:10:01.794244 containerd[1466]: time="2025-09-05T00:10:01.794158859Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:01.795239 containerd[1466]: time="2025-09-05T00:10:01.795178676Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 5 00:10:01.796595 containerd[1466]: time="2025-09-05T00:10:01.796545067Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:01.798737 containerd[1466]: time="2025-09-05T00:10:01.798675140Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.410474616s" Sep 5 00:10:01.798799 containerd[1466]: time="2025-09-05T00:10:01.798739071Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 5 00:10:01.808351 containerd[1466]: time="2025-09-05T00:10:01.808301886Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 00:10:01.825719 containerd[1466]: time="2025-09-05T00:10:01.825653535Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\"" Sep 5 00:10:01.826388 containerd[1466]: time="2025-09-05T00:10:01.826363575Z" level=info msg="StartContainer for \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\"" Sep 5 00:10:01.884983 systemd[1]: run-containerd-runc-k8s.io-0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa-runc.BZ2Xfb.mount: Deactivated successfully. Sep 5 00:10:01.894232 systemd[1]: Started cri-containerd-0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa.scope - libcontainer container 0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa. Sep 5 00:10:01.928935 containerd[1466]: time="2025-09-05T00:10:01.928875525Z" level=info msg="StartContainer for \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\" returns successfully" Sep 5 00:10:01.940723 systemd[1]: cri-containerd-0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa.scope: Deactivated successfully. 
Sep 5 00:10:01.966780 kubelet[2525]: E0905 00:10:01.966733 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:02.321720 containerd[1466]: time="2025-09-05T00:10:02.294407620Z" level=info msg="shim disconnected" id=0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa namespace=k8s.io Sep 5 00:10:02.321720 containerd[1466]: time="2025-09-05T00:10:02.321703661Z" level=warning msg="cleaning up after shim disconnected" id=0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa namespace=k8s.io Sep 5 00:10:02.321720 containerd[1466]: time="2025-09-05T00:10:02.321732525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:10:02.820325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa-rootfs.mount: Deactivated successfully. Sep 5 00:10:02.967496 kubelet[2525]: E0905 00:10:02.967432 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:02.973812 containerd[1466]: time="2025-09-05T00:10:02.973754630Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 00:10:02.991022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101611487.mount: Deactivated successfully. Sep 5 00:10:02.992324 containerd[1466]: time="2025-09-05T00:10:02.992275712Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\"" Sep 5 00:10:02.992706 containerd[1466]: time="2025-09-05T00:10:02.992671519Z" level=info msg="StartContainer for \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\"" Sep 5 00:10:03.040272 systemd[1]: Started cri-containerd-a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f.scope - libcontainer container a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f. Sep 5 00:10:03.070855 containerd[1466]: time="2025-09-05T00:10:03.070712291Z" level=info msg="StartContainer for \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\" returns successfully" Sep 5 00:10:03.084077 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:10:03.084343 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:10:03.084633 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:10:03.090543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:10:03.090839 systemd[1]: cri-containerd-a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f.scope: Deactivated successfully. Sep 5 00:10:03.118362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 5 00:10:03.121351 containerd[1466]: time="2025-09-05T00:10:03.121296025Z" level=info msg="shim disconnected" id=a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f namespace=k8s.io Sep 5 00:10:03.121351 containerd[1466]: time="2025-09-05T00:10:03.121348374Z" level=warning msg="cleaning up after shim disconnected" id=a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f namespace=k8s.io Sep 5 00:10:03.121478 containerd[1466]: time="2025-09-05T00:10:03.121356489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:10:03.820004 systemd[1]: run-containerd-runc-k8s.io-a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f-runc.MVnoHH.mount: Deactivated successfully. Sep 5 00:10:03.820144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f-rootfs.mount: Deactivated successfully. Sep 5 00:10:03.970536 kubelet[2525]: E0905 00:10:03.970483 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:04.010222 containerd[1466]: time="2025-09-05T00:10:04.009539388Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 00:10:04.141185 containerd[1466]: time="2025-09-05T00:10:04.141044298Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\"" Sep 5 00:10:04.141417 containerd[1466]: time="2025-09-05T00:10:04.141388336Z" level=info msg="StartContainer for \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\"" Sep 5 00:10:04.177252 systemd[1]: Started cri-containerd-b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce.scope - libcontainer container b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce. Sep 5 00:10:04.209386 systemd[1]: cri-containerd-b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce.scope: Deactivated successfully. Sep 5 00:10:04.211894 containerd[1466]: time="2025-09-05T00:10:04.211862651Z" level=info msg="StartContainer for \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\" returns successfully" Sep 5 00:10:04.261770 containerd[1466]: time="2025-09-05T00:10:04.261712012Z" level=info msg="shim disconnected" id=b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce namespace=k8s.io Sep 5 00:10:04.261770 containerd[1466]: time="2025-09-05T00:10:04.261763799Z" level=warning msg="cleaning up after shim disconnected" id=b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce namespace=k8s.io Sep 5 00:10:04.261770 containerd[1466]: time="2025-09-05T00:10:04.261772266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:10:04.820649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce-rootfs.mount: Deactivated successfully. 
Sep 5 00:10:04.974731 kubelet[2525]: E0905 00:10:04.974668 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:04.981572 containerd[1466]: time="2025-09-05T00:10:04.981516917Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 00:10:05.009507 containerd[1466]: time="2025-09-05T00:10:05.009456555Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\"" Sep 5 00:10:05.010117 containerd[1466]: time="2025-09-05T00:10:05.010092053Z" level=info msg="StartContainer for \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\"" Sep 5 00:10:05.043301 systemd[1]: Started cri-containerd-f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85.scope - libcontainer container f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85. Sep 5 00:10:05.071689 systemd[1]: cri-containerd-f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85.scope: Deactivated successfully. Sep 5 00:10:05.075603 containerd[1466]: time="2025-09-05T00:10:05.075565818Z" level=info msg="StartContainer for \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\" returns successfully" Sep 5 00:10:05.103002 containerd[1466]: time="2025-09-05T00:10:05.102919407Z" level=info msg="shim disconnected" id=f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85 namespace=k8s.io Sep 5 00:10:05.103002 containerd[1466]: time="2025-09-05T00:10:05.102989369Z" level=warning msg="cleaning up after shim disconnected" id=f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85 namespace=k8s.io Sep 5 00:10:05.103002 containerd[1466]: time="2025-09-05T00:10:05.103002304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:10:05.820242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85-rootfs.mount: Deactivated successfully. Sep 5 00:10:05.978921 kubelet[2525]: E0905 00:10:05.978851 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:05.986812 containerd[1466]: time="2025-09-05T00:10:05.986723312Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 00:10:06.027411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617346421.mount: Deactivated successfully. 
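The CreateContainer/StartContainer entries above show the cilium pod's containers being run strictly in sequence: each of the four setup containers finishes (its scope is deactivated and its shim cleaned up) before the next is created, with the cilium-agent container requested last (its start follows just below). A small sketch that recovers that order from the &ContainerMetadata strings exactly as they appear in the log (illustrative parsing, not containerd code):

    # Illustrative only: pull the Name field out of the ContainerMetadata strings above.
    import re

    entries = [
        "&ContainerMetadata{Name:mount-cgroup,Attempt:0,}",
        "&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}",
        "&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}",
        "&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}",
        "&ContainerMetadata{Name:cilium-agent,Attempt:0,}",
    ]
    order = [re.search(r"Name:([^,]+)", e).group(1) for e in entries]
    print(" -> ".join(order))
    # mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent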
Sep 5 00:10:06.028607 containerd[1466]: time="2025-09-05T00:10:06.028545823Z" level=info msg="CreateContainer within sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\"" Sep 5 00:10:06.029380 containerd[1466]: time="2025-09-05T00:10:06.029317227Z" level=info msg="StartContainer for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\"" Sep 5 00:10:06.058203 systemd[1]: Started cri-containerd-951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76.scope - libcontainer container 951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76. Sep 5 00:10:06.094609 containerd[1466]: time="2025-09-05T00:10:06.094466427Z" level=info msg="StartContainer for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" returns successfully" Sep 5 00:10:06.280958 kubelet[2525]: I0905 00:10:06.280918 2525 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:10:06.539294 systemd[1]: Created slice kubepods-burstable-poda8f6a4d3_7465_4b0e_9660_3b060ef0c4be.slice - libcontainer container kubepods-burstable-poda8f6a4d3_7465_4b0e_9660_3b060ef0c4be.slice. Sep 5 00:10:06.623556 systemd[1]: Created slice kubepods-burstable-pode77cbe95_df97_44dd_acd2_c32b5c8a772d.slice - libcontainer container kubepods-burstable-pode77cbe95_df97_44dd_acd2_c32b5c8a772d.slice. Sep 5 00:10:06.641817 kubelet[2525]: I0905 00:10:06.641765 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljv9\" (UniqueName: \"kubernetes.io/projected/a8f6a4d3-7465-4b0e-9660-3b060ef0c4be-kube-api-access-8ljv9\") pod \"coredns-674b8bbfcf-fhk4m\" (UID: \"a8f6a4d3-7465-4b0e-9660-3b060ef0c4be\") " pod="kube-system/coredns-674b8bbfcf-fhk4m" Sep 5 00:10:06.641817 kubelet[2525]: I0905 00:10:06.641807 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8f6a4d3-7465-4b0e-9660-3b060ef0c4be-config-volume\") pod \"coredns-674b8bbfcf-fhk4m\" (UID: \"a8f6a4d3-7465-4b0e-9660-3b060ef0c4be\") " pod="kube-system/coredns-674b8bbfcf-fhk4m" Sep 5 00:10:06.641971 kubelet[2525]: I0905 00:10:06.641835 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e77cbe95-df97-44dd-acd2-c32b5c8a772d-config-volume\") pod \"coredns-674b8bbfcf-k5bmg\" (UID: \"e77cbe95-df97-44dd-acd2-c32b5c8a772d\") " pod="kube-system/coredns-674b8bbfcf-k5bmg" Sep 5 00:10:06.641971 kubelet[2525]: I0905 00:10:06.641857 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfj5s\" (UniqueName: \"kubernetes.io/projected/e77cbe95-df97-44dd-acd2-c32b5c8a772d-kube-api-access-gfj5s\") pod \"coredns-674b8bbfcf-k5bmg\" (UID: \"e77cbe95-df97-44dd-acd2-c32b5c8a772d\") " pod="kube-system/coredns-674b8bbfcf-k5bmg" Sep 5 00:10:06.844563 kubelet[2525]: E0905 00:10:06.844490 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:06.845322 containerd[1466]: time="2025-09-05T00:10:06.845261001Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-fhk4m,Uid:a8f6a4d3-7465-4b0e-9660-3b060ef0c4be,Namespace:kube-system,Attempt:0,}" Sep 5 00:10:06.927101 kubelet[2525]: E0905 00:10:06.927022 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:06.927668 containerd[1466]: time="2025-09-05T00:10:06.927632201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k5bmg,Uid:e77cbe95-df97-44dd-acd2-c32b5c8a772d,Namespace:kube-system,Attempt:0,}" Sep 5 00:10:06.983699 kubelet[2525]: E0905 00:10:06.983646 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:07.001946 kubelet[2525]: I0905 00:10:07.001807 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6tdz5" podStartSLOduration=6.581271852 podStartE2EDuration="18.001783564s" podCreationTimestamp="2025-09-05 00:09:49 +0000 UTC" firstStartedPulling="2025-09-05 00:09:50.379133772 +0000 UTC m=+7.577120407" lastFinishedPulling="2025-09-05 00:10:01.799645483 +0000 UTC m=+18.997632119" observedRunningTime="2025-09-05 00:10:07.001340218 +0000 UTC m=+24.199326853" watchObservedRunningTime="2025-09-05 00:10:07.001783564 +0000 UTC m=+24.199770199" Sep 5 00:10:07.986025 kubelet[2525]: E0905 00:10:07.985984 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:08.329725 systemd-networkd[1409]: cilium_host: Link UP Sep 5 00:10:08.329901 systemd-networkd[1409]: cilium_net: Link UP Sep 5 00:10:08.330130 systemd-networkd[1409]: cilium_net: Gained carrier Sep 5 00:10:08.330315 systemd-networkd[1409]: cilium_host: Gained carrier Sep 5 00:10:08.446158 systemd-networkd[1409]: cilium_vxlan: Link UP Sep 5 00:10:08.446172 systemd-networkd[1409]: cilium_vxlan: Gained carrier Sep 5 00:10:08.675107 kernel: NET: Registered PF_ALG protocol family Sep 5 00:10:08.711290 systemd-networkd[1409]: cilium_host: Gained IPv6LL Sep 5 00:10:08.967268 systemd-networkd[1409]: cilium_net: Gained IPv6LL Sep 5 00:10:08.988222 kubelet[2525]: E0905 00:10:08.988179 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:09.369631 systemd-networkd[1409]: lxc_health: Link UP Sep 5 00:10:09.380637 systemd-networkd[1409]: lxc_health: Gained carrier Sep 5 00:10:09.474578 systemd-networkd[1409]: lxc1b82a82b965c: Link UP Sep 5 00:10:09.485093 kernel: eth0: renamed from tmp8d677 Sep 5 00:10:09.496143 systemd-networkd[1409]: lxc1b82a82b965c: Gained carrier Sep 5 00:10:09.800289 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Sep 5 00:10:09.916537 systemd-networkd[1409]: lxc347e96429ae8: Link UP Sep 5 00:10:09.926647 kernel: eth0: renamed from tmp7af49 Sep 5 00:10:09.935932 systemd-networkd[1409]: lxc347e96429ae8: Gained carrier Sep 5 00:10:10.308779 kubelet[2525]: E0905 00:10:10.308664 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:10.695298 systemd-networkd[1409]: lxc1b82a82b965c: Gained IPv6LL Sep 5 00:10:10.991863 kubelet[2525]: E0905 00:10:10.991739 2525 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:11.143494 systemd-networkd[1409]: lxc347e96429ae8: Gained IPv6LL Sep 5 00:10:11.335405 systemd-networkd[1409]: lxc_health: Gained IPv6LL Sep 5 00:10:11.994660 kubelet[2525]: E0905 00:10:11.994587 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:13.976470 containerd[1466]: time="2025-09-05T00:10:13.976011235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:13.976470 containerd[1466]: time="2025-09-05T00:10:13.976161037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:13.976470 containerd[1466]: time="2025-09-05T00:10:13.976187406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:13.976470 containerd[1466]: time="2025-09-05T00:10:13.976359570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:13.995113 containerd[1466]: time="2025-09-05T00:10:13.991175572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:13.995113 containerd[1466]: time="2025-09-05T00:10:13.991403049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:13.995113 containerd[1466]: time="2025-09-05T00:10:13.991451772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:13.995113 containerd[1466]: time="2025-09-05T00:10:13.991763258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:14.021636 systemd[1]: Started cri-containerd-8d677055735f81d06a58e1797e4d2e2cbd25cc0a007ca730bd0cb533fb3075be.scope - libcontainer container 8d677055735f81d06a58e1797e4d2e2cbd25cc0a007ca730bd0cb533fb3075be. Sep 5 00:10:14.035931 systemd[1]: Started cri-containerd-7af49033e47c0d59749bae53e53faf371f9d00740ae576726176002f8dff98f3.scope - libcontainer container 7af49033e47c0d59749bae53e53faf371f9d00740ae576726176002f8dff98f3. 
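systemd-networkd reports the cilium_host, cilium_net, cilium_vxlan, lxc_health and per-pod lxc* links gaining carrier and IPv6 link-local addresses above. A quick, generic way to spot-check those links on such a node is to read the standard sysfs operstate attribute; the interface names below are taken from the log, everything else is just an illustrative probe:

    # Illustrative only: report the operational state of the cilium/lxc links named above.
    from pathlib import Path

    for ifname in ["cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"]:
        state_file = Path("/sys/class/net") / ifname / "operstate"
        state = state_file.read_text().strip() if state_file.exists() else "missing"
        print(f"{ifname}: {state}")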
Sep 5 00:10:14.050242 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:10:14.060991 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:10:14.096631 containerd[1466]: time="2025-09-05T00:10:14.096567708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k5bmg,Uid:e77cbe95-df97-44dd-acd2-c32b5c8a772d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d677055735f81d06a58e1797e4d2e2cbd25cc0a007ca730bd0cb533fb3075be\"" Sep 5 00:10:14.098339 kubelet[2525]: E0905 00:10:14.097961 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:14.108672 containerd[1466]: time="2025-09-05T00:10:14.108452489Z" level=info msg="CreateContainer within sandbox \"8d677055735f81d06a58e1797e4d2e2cbd25cc0a007ca730bd0cb533fb3075be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:10:14.115578 containerd[1466]: time="2025-09-05T00:10:14.115497204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhk4m,Uid:a8f6a4d3-7465-4b0e-9660-3b060ef0c4be,Namespace:kube-system,Attempt:0,} returns sandbox id \"7af49033e47c0d59749bae53e53faf371f9d00740ae576726176002f8dff98f3\"" Sep 5 00:10:14.116522 kubelet[2525]: E0905 00:10:14.116433 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:14.124682 containerd[1466]: time="2025-09-05T00:10:14.124602776Z" level=info msg="CreateContainer within sandbox \"7af49033e47c0d59749bae53e53faf371f9d00740ae576726176002f8dff98f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:10:14.156047 containerd[1466]: time="2025-09-05T00:10:14.155951067Z" level=info msg="CreateContainer within sandbox \"8d677055735f81d06a58e1797e4d2e2cbd25cc0a007ca730bd0cb533fb3075be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9e3789356ba0f0ca94abb14ae3c73110e8087de5fb3b458b868455171d45106\"" Sep 5 00:10:14.156806 containerd[1466]: time="2025-09-05T00:10:14.156752144Z" level=info msg="StartContainer for \"f9e3789356ba0f0ca94abb14ae3c73110e8087de5fb3b458b868455171d45106\"" Sep 5 00:10:14.162731 containerd[1466]: time="2025-09-05T00:10:14.162645062Z" level=info msg="CreateContainer within sandbox \"7af49033e47c0d59749bae53e53faf371f9d00740ae576726176002f8dff98f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d490120d3c9c5e6d12f16333bcc6e65f9ea232b95d838b0b39735715510984b\"" Sep 5 00:10:14.163799 containerd[1466]: time="2025-09-05T00:10:14.163747556Z" level=info msg="StartContainer for \"9d490120d3c9c5e6d12f16333bcc6e65f9ea232b95d838b0b39735715510984b\"" Sep 5 00:10:14.211437 systemd[1]: Started cri-containerd-f9e3789356ba0f0ca94abb14ae3c73110e8087de5fb3b458b868455171d45106.scope - libcontainer container f9e3789356ba0f0ca94abb14ae3c73110e8087de5fb3b458b868455171d45106. Sep 5 00:10:14.217034 systemd[1]: Started cri-containerd-9d490120d3c9c5e6d12f16333bcc6e65f9ea232b95d838b0b39735715510984b.scope - libcontainer container 9d490120d3c9c5e6d12f16333bcc6e65f9ea232b95d838b0b39735715510984b. 
Sep 5 00:10:14.276794 containerd[1466]: time="2025-09-05T00:10:14.276608281Z" level=info msg="StartContainer for \"f9e3789356ba0f0ca94abb14ae3c73110e8087de5fb3b458b868455171d45106\" returns successfully" Sep 5 00:10:14.276794 containerd[1466]: time="2025-09-05T00:10:14.276726333Z" level=info msg="StartContainer for \"9d490120d3c9c5e6d12f16333bcc6e65f9ea232b95d838b0b39735715510984b\" returns successfully" Sep 5 00:10:15.026227 kubelet[2525]: E0905 00:10:15.026156 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:15.031497 kubelet[2525]: E0905 00:10:15.031439 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:15.043507 kubelet[2525]: I0905 00:10:15.043414 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k5bmg" podStartSLOduration=26.043343529 podStartE2EDuration="26.043343529s" podCreationTimestamp="2025-09-05 00:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:10:15.042960538 +0000 UTC m=+32.240947183" watchObservedRunningTime="2025-09-05 00:10:15.043343529 +0000 UTC m=+32.241330164" Sep 5 00:10:15.072953 kubelet[2525]: I0905 00:10:15.069769 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fhk4m" podStartSLOduration=26.069744362 podStartE2EDuration="26.069744362s" podCreationTimestamp="2025-09-05 00:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:10:15.067904902 +0000 UTC m=+32.265891547" watchObservedRunningTime="2025-09-05 00:10:15.069744362 +0000 UTC m=+32.267731007" Sep 5 00:10:16.036165 kubelet[2525]: E0905 00:10:16.036056 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:16.040422 kubelet[2525]: E0905 00:10:16.038271 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:17.048957 kubelet[2525]: E0905 00:10:17.048846 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:17.052044 kubelet[2525]: E0905 00:10:17.049702 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:41.173593 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:34486.service - OpenSSH per-connection server daemon (10.0.0.1:34486). Sep 5 00:10:41.220895 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 34486 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:10:41.222706 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:41.226651 systemd-logind[1457]: New session 8 of user core. Sep 5 00:10:41.234220 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 5 00:10:41.615551 sshd[3936]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:41.619451 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:34486.service: Deactivated successfully. Sep 5 00:10:41.621398 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:10:41.621963 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:10:41.622842 systemd-logind[1457]: Removed session 8. Sep 5 00:10:46.626990 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:34502.service - OpenSSH per-connection server daemon (10.0.0.1:34502). Sep 5 00:10:46.666027 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:10:46.667893 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:46.672099 systemd-logind[1457]: New session 9 of user core. Sep 5 00:10:46.678225 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:10:46.791868 sshd[3963]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:46.795624 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:34502.service: Deactivated successfully. Sep 5 00:10:46.797700 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:10:46.798307 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:10:46.799195 systemd-logind[1457]: Removed session 9. Sep 5 00:10:51.804677 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:53296.service - OpenSSH per-connection server daemon (10.0.0.1:53296). Sep 5 00:10:51.846021 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 53296 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:10:51.847974 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:51.852829 systemd-logind[1457]: New session 10 of user core. Sep 5 00:10:51.864417 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:10:51.970111 sshd[3981]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:51.974529 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:53296.service: Deactivated successfully. Sep 5 00:10:51.976712 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:10:51.977466 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:10:51.978418 systemd-logind[1457]: Removed session 10. Sep 5 00:10:54.908397 kubelet[2525]: E0905 00:10:54.908321 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:56.987488 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:53312.service - OpenSSH per-connection server daemon (10.0.0.1:53312). Sep 5 00:10:57.045641 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 53312 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:10:57.047798 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:57.052412 systemd-logind[1457]: New session 11 of user core. Sep 5 00:10:57.063211 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:10:57.187686 sshd[3996]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:57.192582 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:53312.service: Deactivated successfully. Sep 5 00:10:57.194794 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:10:57.195611 systemd-logind[1457]: Session 11 logged out. 
Waiting for processes to exit. Sep 5 00:10:57.196667 systemd-logind[1457]: Removed session 11. Sep 5 00:11:02.201341 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:34242.service - OpenSSH per-connection server daemon (10.0.0.1:34242). Sep 5 00:11:02.245139 sshd[4011]: Accepted publickey for core from 10.0.0.1 port 34242 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:02.246652 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:02.250803 systemd-logind[1457]: New session 12 of user core. Sep 5 00:11:02.260202 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:11:02.380029 sshd[4011]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:02.391420 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:34242.service: Deactivated successfully. Sep 5 00:11:02.393616 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:11:02.395428 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:11:02.404362 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:34244.service - OpenSSH per-connection server daemon (10.0.0.1:34244). Sep 5 00:11:02.405570 systemd-logind[1457]: Removed session 12. Sep 5 00:11:02.439901 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 34244 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:02.441609 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:02.445711 systemd-logind[1457]: New session 13 of user core. Sep 5 00:11:02.456213 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:11:02.612178 sshd[4027]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:02.621458 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:34244.service: Deactivated successfully. Sep 5 00:11:02.626049 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:11:02.627234 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:11:02.641208 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:34250.service - OpenSSH per-connection server daemon (10.0.0.1:34250). Sep 5 00:11:02.642533 systemd-logind[1457]: Removed session 13. Sep 5 00:11:02.675907 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 34250 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:02.677788 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:02.682341 systemd-logind[1457]: New session 14 of user core. Sep 5 00:11:02.696199 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:11:02.815793 sshd[4039]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:02.820494 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:34250.service: Deactivated successfully. Sep 5 00:11:02.822757 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:11:02.823478 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:11:02.824590 systemd-logind[1457]: Removed session 14. Sep 5 00:11:05.910496 kubelet[2525]: E0905 00:11:05.910308 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:07.831608 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:34254.service - OpenSSH per-connection server daemon (10.0.0.1:34254). 
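The remainder of the transcript is dominated by short SSH sessions being opened and torn down. A small sketch that pairs the systemd-logind "New session N" / "Removed session N" lines in exactly this transcript's format and reports how long each session lasted (the two sample lines are copied verbatim from session 8 above; this is log post-processing, not part of the system under observation):

    # Illustrative only: compute SSH session lifetimes from logind lines in this format.
    import re
    from datetime import datetime

    LINE = re.compile(r"^(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) .*?(New|Removed) session (\d+)")

    def session_durations(lines, year=2025):
        opened = {}
        for line in lines:
            m = LINE.match(line)
            if not m:
                continue
            stamp = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            if m.group(2) == "New":
                opened[m.group(3)] = stamp
            elif m.group(3) in opened:
                yield m.group(3), (stamp - opened.pop(m.group(3))).total_seconds()

    sample = [
        "Sep 5 00:10:41.226651 systemd-logind[1457]: New session 8 of user core.",
        "Sep 5 00:10:41.622842 systemd-logind[1457]: Removed session 8.",
    ]
    print(dict(session_durations(sample)))  # {'8': 0.396191}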
Sep 5 00:11:07.872085 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:07.873736 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:07.878058 systemd-logind[1457]: New session 15 of user core. Sep 5 00:11:07.888206 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:11:07.999712 sshd[4055]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:08.003866 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:34254.service: Deactivated successfully. Sep 5 00:11:08.005976 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:11:08.006610 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:11:08.007735 systemd-logind[1457]: Removed session 15. Sep 5 00:11:12.907955 kubelet[2525]: E0905 00:11:12.907910 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:13.016716 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:43062.service - OpenSSH per-connection server daemon (10.0.0.1:43062). Sep 5 00:11:13.056371 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 43062 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:13.058224 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:13.062508 systemd-logind[1457]: New session 16 of user core. Sep 5 00:11:13.070253 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:11:13.191713 sshd[4070]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:13.203989 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:43062.service: Deactivated successfully. Sep 5 00:11:13.206037 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:11:13.207812 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:11:13.219503 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:43076.service - OpenSSH per-connection server daemon (10.0.0.1:43076). Sep 5 00:11:13.220847 systemd-logind[1457]: Removed session 16. Sep 5 00:11:13.253655 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 43076 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:13.255563 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:13.259636 systemd-logind[1457]: New session 17 of user core. Sep 5 00:11:13.272188 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:11:13.741889 sshd[4085]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:13.753986 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:43076.service: Deactivated successfully. Sep 5 00:11:13.755826 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:11:13.757239 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:11:13.767313 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:43086.service - OpenSSH per-connection server daemon (10.0.0.1:43086). Sep 5 00:11:13.768153 systemd-logind[1457]: Removed session 17. Sep 5 00:11:13.805285 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 43086 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:13.806833 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:13.810771 systemd-logind[1457]: New session 18 of user core. 
Sep 5 00:11:13.824180 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:11:14.446916 sshd[4098]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:14.457455 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:43086.service: Deactivated successfully. Sep 5 00:11:14.461192 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:11:14.463320 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:11:14.470563 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:43090.service - OpenSSH per-connection server daemon (10.0.0.1:43090). Sep 5 00:11:14.471820 systemd-logind[1457]: Removed session 18. Sep 5 00:11:14.504563 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 43090 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:14.506180 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:14.510475 systemd-logind[1457]: New session 19 of user core. Sep 5 00:11:14.526335 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:11:14.848494 sshd[4119]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:14.862803 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:43090.service: Deactivated successfully. Sep 5 00:11:14.865239 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:11:14.867055 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:11:14.876449 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:43100.service - OpenSSH per-connection server daemon (10.0.0.1:43100). Sep 5 00:11:14.877546 systemd-logind[1457]: Removed session 19. Sep 5 00:11:14.910474 kubelet[2525]: E0905 00:11:14.910442 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:14.911591 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 43100 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:14.913716 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:14.918004 systemd-logind[1457]: New session 20 of user core. Sep 5 00:11:14.926273 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:11:15.040129 sshd[4132]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:15.044681 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:43100.service: Deactivated successfully. Sep 5 00:11:15.047104 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:11:15.047858 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:11:15.048828 systemd-logind[1457]: Removed session 20. Sep 5 00:11:20.056940 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190). Sep 5 00:11:20.097853 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:20.099673 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:20.103819 systemd-logind[1457]: New session 21 of user core. Sep 5 00:11:20.111219 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:11:20.228733 sshd[4146]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:20.233415 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:52190.service: Deactivated successfully. 
Sep 5 00:11:20.235741 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:11:20.236449 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:11:20.237461 systemd-logind[1457]: Removed session 21. Sep 5 00:11:23.907980 kubelet[2525]: E0905 00:11:23.907936 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:25.240228 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192). Sep 5 00:11:25.278304 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:25.279778 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:25.283693 systemd-logind[1457]: New session 22 of user core. Sep 5 00:11:25.300235 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:11:25.412311 sshd[4165]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:25.416412 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:52192.service: Deactivated successfully. Sep 5 00:11:25.418517 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:11:25.419257 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:11:25.420275 systemd-logind[1457]: Removed session 22. Sep 5 00:11:30.423349 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:42570.service - OpenSSH per-connection server daemon (10.0.0.1:42570). Sep 5 00:11:30.464516 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 42570 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:30.466369 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:30.470398 systemd-logind[1457]: New session 23 of user core. Sep 5 00:11:30.480230 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:11:30.590054 sshd[4180]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:30.606988 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:42570.service: Deactivated successfully. Sep 5 00:11:30.609553 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:11:30.611243 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:11:30.617394 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:42574.service - OpenSSH per-connection server daemon (10.0.0.1:42574). Sep 5 00:11:30.618588 systemd-logind[1457]: Removed session 23. Sep 5 00:11:30.656037 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 42574 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:30.657916 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:30.663338 systemd-logind[1457]: New session 24 of user core. Sep 5 00:11:30.673390 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:11:30.910099 kubelet[2525]: E0905 00:11:30.908400 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:32.013540 containerd[1466]: time="2025-09-05T00:11:32.013478916Z" level=info msg="StopContainer for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" with timeout 30 (s)" Sep 5 00:11:32.015188 containerd[1466]: time="2025-09-05T00:11:32.015138782Z" level=info msg="Stop container \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" with signal terminated" Sep 5 00:11:32.049168 systemd[1]: cri-containerd-8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273.scope: Deactivated successfully. Sep 5 00:11:32.071638 containerd[1466]: time="2025-09-05T00:11:32.071537449Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:11:32.072585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273-rootfs.mount: Deactivated successfully. Sep 5 00:11:32.073313 containerd[1466]: time="2025-09-05T00:11:32.073233213Z" level=info msg="StopContainer for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" with timeout 2 (s)" Sep 5 00:11:32.073609 containerd[1466]: time="2025-09-05T00:11:32.073583176Z" level=info msg="Stop container \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" with signal terminated" Sep 5 00:11:32.083182 systemd-networkd[1409]: lxc_health: Link DOWN Sep 5 00:11:32.083190 systemd-networkd[1409]: lxc_health: Lost carrier Sep 5 00:11:32.087697 containerd[1466]: time="2025-09-05T00:11:32.087626414Z" level=info msg="shim disconnected" id=8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273 namespace=k8s.io Sep 5 00:11:32.087804 containerd[1466]: time="2025-09-05T00:11:32.087696456Z" level=warning msg="cleaning up after shim disconnected" id=8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273 namespace=k8s.io Sep 5 00:11:32.087804 containerd[1466]: time="2025-09-05T00:11:32.087706907Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:32.109573 containerd[1466]: time="2025-09-05T00:11:32.109516083Z" level=info msg="StopContainer for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" returns successfully" Sep 5 00:11:32.110263 containerd[1466]: time="2025-09-05T00:11:32.110216701Z" level=info msg="StopPodSandbox for \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\"" Sep 5 00:11:32.110263 containerd[1466]: time="2025-09-05T00:11:32.110256025Z" level=info msg="Container to stop \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:11:32.112251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1-shm.mount: Deactivated successfully. Sep 5 00:11:32.112934 systemd[1]: cri-containerd-951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76.scope: Deactivated successfully. Sep 5 00:11:32.113745 systemd[1]: cri-containerd-951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76.scope: Consumed 8.433s CPU time. 
Sep 5 00:11:32.123553 systemd[1]: cri-containerd-3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1.scope: Deactivated successfully. Sep 5 00:11:32.135038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76-rootfs.mount: Deactivated successfully. Sep 5 00:11:32.140438 containerd[1466]: time="2025-09-05T00:11:32.140364101Z" level=info msg="shim disconnected" id=951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76 namespace=k8s.io Sep 5 00:11:32.140438 containerd[1466]: time="2025-09-05T00:11:32.140430608Z" level=warning msg="cleaning up after shim disconnected" id=951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76 namespace=k8s.io Sep 5 00:11:32.140438 containerd[1466]: time="2025-09-05T00:11:32.140440006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:32.152470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1-rootfs.mount: Deactivated successfully. Sep 5 00:11:32.156389 containerd[1466]: time="2025-09-05T00:11:32.156194346Z" level=info msg="shim disconnected" id=3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1 namespace=k8s.io Sep 5 00:11:32.156389 containerd[1466]: time="2025-09-05T00:11:32.156245523Z" level=warning msg="cleaning up after shim disconnected" id=3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1 namespace=k8s.io Sep 5 00:11:32.156389 containerd[1466]: time="2025-09-05T00:11:32.156254571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:32.166031 containerd[1466]: time="2025-09-05T00:11:32.165889893Z" level=info msg="StopContainer for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" returns successfully" Sep 5 00:11:32.166577 containerd[1466]: time="2025-09-05T00:11:32.166398076Z" level=info msg="StopPodSandbox for \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\"" Sep 5 00:11:32.166577 containerd[1466]: time="2025-09-05T00:11:32.166440126Z" level=info msg="Container to stop \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:11:32.166577 containerd[1466]: time="2025-09-05T00:11:32.166454283Z" level=info msg="Container to stop \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:11:32.166577 containerd[1466]: time="2025-09-05T00:11:32.166463721Z" level=info msg="Container to stop \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:11:32.166577 containerd[1466]: time="2025-09-05T00:11:32.166473690Z" level=info msg="Container to stop \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:11:32.166577 containerd[1466]: time="2025-09-05T00:11:32.166483017Z" level=info msg="Container to stop \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:11:32.174178 systemd[1]: cri-containerd-cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3.scope: Deactivated successfully. 
Sep 5 00:11:32.181592 containerd[1466]: time="2025-09-05T00:11:32.181545907Z" level=info msg="TearDown network for sandbox \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" successfully" Sep 5 00:11:32.181592 containerd[1466]: time="2025-09-05T00:11:32.181588258Z" level=info msg="StopPodSandbox for \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" returns successfully" Sep 5 00:11:32.201453 containerd[1466]: time="2025-09-05T00:11:32.201324235Z" level=info msg="shim disconnected" id=cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3 namespace=k8s.io Sep 5 00:11:32.201453 containerd[1466]: time="2025-09-05T00:11:32.201381804Z" level=warning msg="cleaning up after shim disconnected" id=cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3 namespace=k8s.io Sep 5 00:11:32.201453 containerd[1466]: time="2025-09-05T00:11:32.201391252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:11:32.216632 containerd[1466]: time="2025-09-05T00:11:32.216575231Z" level=info msg="TearDown network for sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" successfully" Sep 5 00:11:32.216632 containerd[1466]: time="2025-09-05T00:11:32.216614315Z" level=info msg="StopPodSandbox for \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" returns successfully" Sep 5 00:11:32.233323 kubelet[2525]: I0905 00:11:32.233289 2525 scope.go:117] "RemoveContainer" containerID="8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273" Sep 5 00:11:32.236099 containerd[1466]: time="2025-09-05T00:11:32.235351559Z" level=info msg="RemoveContainer for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\"" Sep 5 00:11:32.240871 containerd[1466]: time="2025-09-05T00:11:32.240830364Z" level=info msg="RemoveContainer for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" returns successfully" Sep 5 00:11:32.241124 kubelet[2525]: I0905 00:11:32.241043 2525 scope.go:117] "RemoveContainer" containerID="8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273" Sep 5 00:11:32.244313 containerd[1466]: time="2025-09-05T00:11:32.244260275Z" level=error msg="ContainerStatus for \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\": not found" Sep 5 00:11:32.244486 kubelet[2525]: E0905 00:11:32.244450 2525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\": not found" containerID="8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273" Sep 5 00:11:32.244556 kubelet[2525]: I0905 00:11:32.244498 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273"} err="failed to get container status \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a143dfc19e66e03d8837c6797fdaf4361dc1f17ff911799064f7cc72593c273\": not found" Sep 5 00:11:32.244586 kubelet[2525]: I0905 00:11:32.244560 2525 scope.go:117] "RemoveContainer" containerID="951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76" Sep 5 00:11:32.245524 containerd[1466]: 
time="2025-09-05T00:11:32.245500706Z" level=info msg="RemoveContainer for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\"" Sep 5 00:11:32.249212 containerd[1466]: time="2025-09-05T00:11:32.249176323Z" level=info msg="RemoveContainer for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" returns successfully" Sep 5 00:11:32.249364 kubelet[2525]: I0905 00:11:32.249330 2525 scope.go:117] "RemoveContainer" containerID="f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85" Sep 5 00:11:32.250370 containerd[1466]: time="2025-09-05T00:11:32.250338055Z" level=info msg="RemoveContainer for \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\"" Sep 5 00:11:32.253783 containerd[1466]: time="2025-09-05T00:11:32.253748108Z" level=info msg="RemoveContainer for \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\" returns successfully" Sep 5 00:11:32.253943 kubelet[2525]: I0905 00:11:32.253905 2525 scope.go:117] "RemoveContainer" containerID="b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce" Sep 5 00:11:32.254979 containerd[1466]: time="2025-09-05T00:11:32.254942762Z" level=info msg="RemoveContainer for \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\"" Sep 5 00:11:32.258120 containerd[1466]: time="2025-09-05T00:11:32.258089376Z" level=info msg="RemoveContainer for \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\" returns successfully" Sep 5 00:11:32.258262 kubelet[2525]: I0905 00:11:32.258227 2525 scope.go:117] "RemoveContainer" containerID="a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f" Sep 5 00:11:32.259038 containerd[1466]: time="2025-09-05T00:11:32.259016163Z" level=info msg="RemoveContainer for \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\"" Sep 5 00:11:32.262437 containerd[1466]: time="2025-09-05T00:11:32.262399766Z" level=info msg="RemoveContainer for \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\" returns successfully" Sep 5 00:11:32.262551 kubelet[2525]: I0905 00:11:32.262522 2525 scope.go:117] "RemoveContainer" containerID="0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa" Sep 5 00:11:32.263423 containerd[1466]: time="2025-09-05T00:11:32.263386397Z" level=info msg="RemoveContainer for \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\"" Sep 5 00:11:32.266613 containerd[1466]: time="2025-09-05T00:11:32.266535165Z" level=info msg="RemoveContainer for \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\" returns successfully" Sep 5 00:11:32.266711 kubelet[2525]: I0905 00:11:32.266686 2525 scope.go:117] "RemoveContainer" containerID="951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76" Sep 5 00:11:32.266891 containerd[1466]: time="2025-09-05T00:11:32.266856814Z" level=error msg="ContainerStatus for \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\": not found" Sep 5 00:11:32.267146 kubelet[2525]: E0905 00:11:32.267032 2525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\": not found" containerID="951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76" Sep 5 00:11:32.267195 kubelet[2525]: I0905 
00:11:32.267170 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76"} err="failed to get container status \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\": rpc error: code = NotFound desc = an error occurred when try to find container \"951a991952ccafe3c13ac438476f55f29494ed3595334154be1010e30bb13f76\": not found" Sep 5 00:11:32.267224 kubelet[2525]: I0905 00:11:32.267194 2525 scope.go:117] "RemoveContainer" containerID="f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85" Sep 5 00:11:32.267385 containerd[1466]: time="2025-09-05T00:11:32.267350801Z" level=error msg="ContainerStatus for \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\": not found" Sep 5 00:11:32.267501 kubelet[2525]: E0905 00:11:32.267478 2525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\": not found" containerID="f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85" Sep 5 00:11:32.267545 kubelet[2525]: I0905 00:11:32.267500 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85"} err="failed to get container status \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8225b4cef8438306e2314049b88913eb5252fa81ef31237a35a2c7f30474f85\": not found" Sep 5 00:11:32.267545 kubelet[2525]: I0905 00:11:32.267517 2525 scope.go:117] "RemoveContainer" containerID="b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce" Sep 5 00:11:32.267695 containerd[1466]: time="2025-09-05T00:11:32.267664705Z" level=error msg="ContainerStatus for \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\": not found" Sep 5 00:11:32.267787 kubelet[2525]: E0905 00:11:32.267764 2525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\": not found" containerID="b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce" Sep 5 00:11:32.267854 kubelet[2525]: I0905 00:11:32.267784 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce"} err="failed to get container status \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1f520eaf3b358a0b3c4d9bba5a79af91ad2819393fbb6c2fd92fe6d65af2fce\": not found" Sep 5 00:11:32.267854 kubelet[2525]: I0905 00:11:32.267803 2525 scope.go:117] "RemoveContainer" containerID="a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f" Sep 5 00:11:32.268000 containerd[1466]: time="2025-09-05T00:11:32.267943344Z" level=error msg="ContainerStatus for 
\"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\": not found" Sep 5 00:11:32.268147 kubelet[2525]: E0905 00:11:32.268104 2525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\": not found" containerID="a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f" Sep 5 00:11:32.268205 kubelet[2525]: I0905 00:11:32.268155 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f"} err="failed to get container status \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9d9e439327bc7cb36a6eedbcafde172389375e3f4de9fb8687a1acd8976b39f\": not found" Sep 5 00:11:32.268205 kubelet[2525]: I0905 00:11:32.268179 2525 scope.go:117] "RemoveContainer" containerID="0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa" Sep 5 00:11:32.268468 containerd[1466]: time="2025-09-05T00:11:32.268332361Z" level=error msg="ContainerStatus for \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\": not found" Sep 5 00:11:32.268679 kubelet[2525]: E0905 00:11:32.268644 2525 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\": not found" containerID="0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa" Sep 5 00:11:32.268741 kubelet[2525]: I0905 00:11:32.268680 2525 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa"} err="failed to get container status \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c99e3a43533b46f49999cd971ef479814195be0735e13c02d3922a4ba9abafa\": not found" Sep 5 00:11:32.284092 kubelet[2525]: I0905 00:11:32.284046 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-clustermesh-secrets\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284151 kubelet[2525]: I0905 00:11:32.284096 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hostproc\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284151 kubelet[2525]: I0905 00:11:32.284112 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-cgroup\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 
00:11:32.284151 kubelet[2525]: I0905 00:11:32.284127 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-kernel\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284151 kubelet[2525]: I0905 00:11:32.284144 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-etc-cni-netd\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284265 kubelet[2525]: I0905 00:11:32.284158 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hubble-tls\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284265 kubelet[2525]: I0905 00:11:32.284174 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-run\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284265 kubelet[2525]: I0905 00:11:32.284163 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.284265 kubelet[2525]: I0905 00:11:32.284199 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hslp\" (UniqueName: \"kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-kube-api-access-7hslp\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284265 kubelet[2525]: I0905 00:11:32.284224 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad614057-908d-4d5e-8641-8907f6534e2c-cilium-config-path\") pod \"ad614057-908d-4d5e-8641-8907f6534e2c\" (UID: \"ad614057-908d-4d5e-8641-8907f6534e2c\") " Sep 5 00:11:32.284265 kubelet[2525]: I0905 00:11:32.284237 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.284436 kubelet[2525]: I0905 00:11:32.284243 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-lib-modules\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284436 kubelet[2525]: I0905 00:11:32.284278 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.284436 kubelet[2525]: I0905 00:11:32.284288 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-net\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284436 kubelet[2525]: I0905 00:11:32.284314 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-xtables-lock\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284436 kubelet[2525]: I0905 00:11:32.284333 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cni-path\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284436 kubelet[2525]: I0905 00:11:32.284357 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-bpf-maps\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284593 kubelet[2525]: I0905 00:11:32.284380 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-config-path\") pod \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\" (UID: \"fc3cdb4d-38d2-4705-b8b2-b93403e180e0\") " Sep 5 00:11:32.284593 kubelet[2525]: I0905 00:11:32.284400 2525 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgq8q\" (UniqueName: \"kubernetes.io/projected/ad614057-908d-4d5e-8641-8907f6534e2c-kube-api-access-fgq8q\") pod \"ad614057-908d-4d5e-8641-8907f6534e2c\" (UID: \"ad614057-908d-4d5e-8641-8907f6534e2c\") " Sep 5 00:11:32.284593 kubelet[2525]: I0905 00:11:32.284454 2525 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.284593 kubelet[2525]: I0905 00:11:32.284466 2525 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.284593 kubelet[2525]: I0905 00:11:32.284478 2525 reconciler_common.go:299] "Volume 
detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.285666 kubelet[2525]: I0905 00:11:32.285625 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.285666 kubelet[2525]: I0905 00:11:32.285645 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.285666 kubelet[2525]: I0905 00:11:32.285658 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.285805 kubelet[2525]: I0905 00:11:32.285680 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.285805 kubelet[2525]: I0905 00:11:32.285697 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.285805 kubelet[2525]: I0905 00:11:32.285714 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.285805 kubelet[2525]: I0905 00:11:32.285728 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:11:32.290142 kubelet[2525]: I0905 00:11:32.289943 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad614057-908d-4d5e-8641-8907f6534e2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad614057-908d-4d5e-8641-8907f6534e2c" (UID: "ad614057-908d-4d5e-8641-8907f6534e2c"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:11:32.290870 kubelet[2525]: I0905 00:11:32.290844 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-kube-api-access-7hslp" (OuterVolumeSpecName: "kube-api-access-7hslp") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "kube-api-access-7hslp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:11:32.291240 kubelet[2525]: I0905 00:11:32.291213 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:11:32.291345 kubelet[2525]: I0905 00:11:32.291264 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:11:32.292204 kubelet[2525]: I0905 00:11:32.292171 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc3cdb4d-38d2-4705-b8b2-b93403e180e0" (UID: "fc3cdb4d-38d2-4705-b8b2-b93403e180e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:11:32.292515 kubelet[2525]: I0905 00:11:32.292487 2525 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad614057-908d-4d5e-8641-8907f6534e2c-kube-api-access-fgq8q" (OuterVolumeSpecName: "kube-api-access-fgq8q") pod "ad614057-908d-4d5e-8641-8907f6534e2c" (UID: "ad614057-908d-4d5e-8641-8907f6534e2c"). InnerVolumeSpecName "kube-api-access-fgq8q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:11:32.385250 kubelet[2525]: I0905 00:11:32.385207 2525 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385250 kubelet[2525]: I0905 00:11:32.385241 2525 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385250 kubelet[2525]: I0905 00:11:32.385254 2525 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385250 kubelet[2525]: I0905 00:11:32.385263 2525 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385272 2525 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385281 2525 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fgq8q\" (UniqueName: \"kubernetes.io/projected/ad614057-908d-4d5e-8641-8907f6534e2c-kube-api-access-fgq8q\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385289 2525 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385297 2525 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385306 2525 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385314 2525 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385323 2525 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385495 kubelet[2525]: I0905 00:11:32.385331 2525 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7hslp\" (UniqueName: \"kubernetes.io/projected/fc3cdb4d-38d2-4705-b8b2-b93403e180e0-kube-api-access-7hslp\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.385678 kubelet[2525]: I0905 00:11:32.385340 2525 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad614057-908d-4d5e-8641-8907f6534e2c-cilium-config-path\") on 
node \"localhost\" DevicePath \"\"" Sep 5 00:11:32.541432 systemd[1]: Removed slice kubepods-besteffort-podad614057_908d_4d5e_8641_8907f6534e2c.slice - libcontainer container kubepods-besteffort-podad614057_908d_4d5e_8641_8907f6534e2c.slice. Sep 5 00:11:32.545730 systemd[1]: Removed slice kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice - libcontainer container kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice. Sep 5 00:11:32.545817 systemd[1]: kubepods-burstable-podfc3cdb4d_38d2_4705_b8b2_b93403e180e0.slice: Consumed 8.544s CPU time. Sep 5 00:11:32.909583 kubelet[2525]: I0905 00:11:32.909526 2525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad614057-908d-4d5e-8641-8907f6534e2c" path="/var/lib/kubelet/pods/ad614057-908d-4d5e-8641-8907f6534e2c/volumes" Sep 5 00:11:32.910268 kubelet[2525]: I0905 00:11:32.910235 2525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3cdb4d-38d2-4705-b8b2-b93403e180e0" path="/var/lib/kubelet/pods/fc3cdb4d-38d2-4705-b8b2-b93403e180e0/volumes" Sep 5 00:11:33.046560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3-rootfs.mount: Deactivated successfully. Sep 5 00:11:33.046705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3-shm.mount: Deactivated successfully. Sep 5 00:11:33.046782 systemd[1]: var-lib-kubelet-pods-fc3cdb4d\x2d38d2\x2d4705\x2db8b2\x2db93403e180e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hslp.mount: Deactivated successfully. Sep 5 00:11:33.046864 systemd[1]: var-lib-kubelet-pods-ad614057\x2d908d\x2d4d5e\x2d8641\x2d8907f6534e2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfgq8q.mount: Deactivated successfully. Sep 5 00:11:33.046952 systemd[1]: var-lib-kubelet-pods-fc3cdb4d\x2d38d2\x2d4705\x2db8b2\x2db93403e180e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 5 00:11:33.047040 systemd[1]: var-lib-kubelet-pods-fc3cdb4d\x2d38d2\x2d4705\x2db8b2\x2db93403e180e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 5 00:11:33.983735 sshd[4194]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:33.994883 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:42574.service: Deactivated successfully. Sep 5 00:11:33.997375 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:11:33.999167 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:11:34.008592 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:42578.service - OpenSSH per-connection server daemon (10.0.0.1:42578). Sep 5 00:11:34.009939 systemd-logind[1457]: Removed session 24. Sep 5 00:11:34.049936 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 42578 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:34.051819 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:34.056641 systemd-logind[1457]: New session 25 of user core. Sep 5 00:11:34.072351 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 5 00:11:34.128173 kubelet[2525]: E0905 00:11:34.128126 2525 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 00:11:34.615832 sshd[4359]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:34.630460 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:42578.service: Deactivated successfully. Sep 5 00:11:34.632999 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:11:34.635023 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:11:34.647538 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:42588.service - OpenSSH per-connection server daemon (10.0.0.1:42588). Sep 5 00:11:34.651510 systemd-logind[1457]: Removed session 25. Sep 5 00:11:34.667362 systemd[1]: Created slice kubepods-burstable-podfb14332b_f6a9_446e_b10c_2df8a98e0ed9.slice - libcontainer container kubepods-burstable-podfb14332b_f6a9_446e_b10c_2df8a98e0ed9.slice. Sep 5 00:11:34.698013 kubelet[2525]: I0905 00:11:34.697929 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-cni-path\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698013 kubelet[2525]: I0905 00:11:34.697997 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-lib-modules\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698013 kubelet[2525]: I0905 00:11:34.698024 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-cilium-config-path\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698353 kubelet[2525]: I0905 00:11:34.698048 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-etc-cni-netd\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698353 kubelet[2525]: I0905 00:11:34.698086 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-hostproc\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698353 kubelet[2525]: I0905 00:11:34.698107 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-cilium-ipsec-secrets\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698353 kubelet[2525]: I0905 00:11:34.698126 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-host-proc-sys-kernel\") pod 
\"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698353 kubelet[2525]: I0905 00:11:34.698144 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-bpf-maps\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698353 kubelet[2525]: I0905 00:11:34.698161 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-hubble-tls\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698539 kubelet[2525]: I0905 00:11:34.698182 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b4nm\" (UniqueName: \"kubernetes.io/projected/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-kube-api-access-2b4nm\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698539 kubelet[2525]: I0905 00:11:34.698205 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-cilium-run\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698539 kubelet[2525]: I0905 00:11:34.698228 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-xtables-lock\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698539 kubelet[2525]: I0905 00:11:34.698246 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-clustermesh-secrets\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698539 kubelet[2525]: I0905 00:11:34.698271 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-cilium-cgroup\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.698539 kubelet[2525]: I0905 00:11:34.698291 2525 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb14332b-f6a9-446e-b10c-2df8a98e0ed9-host-proc-sys-net\") pod \"cilium-grth9\" (UID: \"fb14332b-f6a9-446e-b10c-2df8a98e0ed9\") " pod="kube-system/cilium-grth9" Sep 5 00:11:34.702095 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 42588 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:11:34.703641 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:34.712246 systemd-logind[1457]: New session 26 of user core. Sep 5 00:11:34.720301 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 5 00:11:34.773686 sshd[4372]: pam_unix(sshd:session): session closed for user core
Sep 5 00:11:34.789367 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:42588.service: Deactivated successfully.
Sep 5 00:11:34.791574 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:11:34.793264 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:11:34.800504 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:42596.service - OpenSSH per-connection server daemon (10.0.0.1:42596).
Sep 5 00:11:34.803441 systemd-logind[1457]: Removed session 26.
Sep 5 00:11:34.843141 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 42596 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:11:34.844771 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:11:34.848780 systemd-logind[1457]: New session 27 of user core.
Sep 5 00:11:34.861254 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:11:34.980121 kubelet[2525]: E0905 00:11:34.979960 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:34.980696 containerd[1466]: time="2025-09-05T00:11:34.980633376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grth9,Uid:fb14332b-f6a9-446e-b10c-2df8a98e0ed9,Namespace:kube-system,Attempt:0,}"
Sep 5 00:11:35.008101 containerd[1466]: time="2025-09-05T00:11:35.004890503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:11:35.008101 containerd[1466]: time="2025-09-05T00:11:35.007861640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:11:35.008101 containerd[1466]: time="2025-09-05T00:11:35.007887159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:11:35.008732 containerd[1466]: time="2025-09-05T00:11:35.008661245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:11:35.027215 systemd[1]: Started cri-containerd-c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9.scope - libcontainer container c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9.
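
Editor's note: the recurring "Nameserver limits exceeded" errors come from the kubelet truncating the node's resolver configuration. Only the first three nameserver entries (the conventional glibc limit) are propagated into pod resolv.conf files, so any additional servers on the node are dropped; the log shows the applied line ending up as 1.1.1.1 1.0.0.1 8.8.8.8. A small self-contained sketch that checks a resolv.conf for this condition is below; the file path and the three-entry cap are parameters, since the node's actual configuration is not visible in this log.

# Sketch: report which nameservers a kubelet-style 3-entry cap would drop.
# Self-contained; the resolv.conf path is a parameter because the node's
# actual location of it is an assumption.
MAX_NAMESERVERS = 3  # limit honored by glibc resolvers and mirrored by the kubelet

def check_resolv_conf(path: str = "/etc/resolv.conf", limit: int = MAX_NAMESERVERS):
    nameservers = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                nameservers.append(fields[1])
    applied, dropped = nameservers[:limit], nameservers[limit:]
    print("applied:", " ".join(applied))
    if dropped:
        print("omitted:", " ".join(dropped), f"(exceeds the {limit}-nameserver limit)")
    return applied, dropped

if __name__ == "__main__":
    check_resolv_conf()
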
Sep 5 00:11:35.055707 containerd[1466]: time="2025-09-05T00:11:35.055642499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grth9,Uid:fb14332b-f6a9-446e-b10c-2df8a98e0ed9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\""
Sep 5 00:11:35.056569 kubelet[2525]: E0905 00:11:35.056513 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:35.062399 containerd[1466]: time="2025-09-05T00:11:35.062345716Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 5 00:11:35.075139 containerd[1466]: time="2025-09-05T00:11:35.075054520Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7\""
Sep 5 00:11:35.075700 containerd[1466]: time="2025-09-05T00:11:35.075653865Z" level=info msg="StartContainer for \"f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7\""
Sep 5 00:11:35.108267 systemd[1]: Started cri-containerd-f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7.scope - libcontainer container f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7.
Sep 5 00:11:35.136955 containerd[1466]: time="2025-09-05T00:11:35.136898056Z" level=info msg="StartContainer for \"f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7\" returns successfully"
Sep 5 00:11:35.147841 systemd[1]: cri-containerd-f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7.scope: Deactivated successfully.
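
Editor's note: mount-cgroup is the first of the Cilium init containers started inside the new sandbox, and the pattern seen here (CreateContainer, StartContainer, the transient systemd scope deactivating almost immediately, then "shim disconnected" as containerd cleans up) is simply what a short-lived init container looks like in this journal; it repeats below for apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state. A hedged sketch that prints the pod's init containers in order with their termination state is shown here, under the same Kubernetes Python client and cluster-access assumptions as the earlier sketch.

# Sketch: show the init-container chain of the cilium pod in order,
# with exit codes, as reported by the API server. Same client and
# kubeconfig assumptions as the previous sketch.
from kubernetes import client, config

def show_init_containers(name: str = "cilium-grth9", namespace: str = "kube-system"):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name, namespace)
    for status in pod.status.init_container_statuses or []:
        term = status.state.terminated
        if term is not None:
            print(f"{status.name}: exited {term.exit_code} at {term.finished_at}")
        else:
            print(f"{status.name}: still waiting or running")

if __name__ == "__main__":
    show_init_containers()
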
Sep 5 00:11:35.181106 containerd[1466]: time="2025-09-05T00:11:35.181018992Z" level=info msg="shim disconnected" id=f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7 namespace=k8s.io
Sep 5 00:11:35.181106 containerd[1466]: time="2025-09-05T00:11:35.181100566Z" level=warning msg="cleaning up after shim disconnected" id=f295db7cf639e279c1d6e56eca5f5232a191beb6ca4f5135876176b04b82bfa7 namespace=k8s.io
Sep 5 00:11:35.181106 containerd[1466]: time="2025-09-05T00:11:35.181110154Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:11:35.226625 kubelet[2525]: I0905 00:11:35.226569 2525 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T00:11:35Z","lastTransitionTime":"2025-09-05T00:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 5 00:11:35.247423 kubelet[2525]: E0905 00:11:35.247315 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:35.254344 containerd[1466]: time="2025-09-05T00:11:35.254297418Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 5 00:11:35.266814 containerd[1466]: time="2025-09-05T00:11:35.266758081Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2\""
Sep 5 00:11:35.267378 containerd[1466]: time="2025-09-05T00:11:35.267325466Z" level=info msg="StartContainer for \"bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2\""
Sep 5 00:11:35.298305 systemd[1]: Started cri-containerd-bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2.scope - libcontainer container bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2.
Sep 5 00:11:35.327680 containerd[1466]: time="2025-09-05T00:11:35.327611382Z" level=info msg="StartContainer for \"bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2\" returns successfully"
Sep 5 00:11:35.335422 systemd[1]: cri-containerd-bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2.scope: Deactivated successfully.
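
Editor's note: the setters.go entry above records the kubelet flipping the node's Ready condition to False because the container runtime network is not ready while the CNI plugin is still uninitialized; once the cilium-agent container comes up and writes its CNI configuration, the condition is expected to flip back to True. A minimal sketch for watching that condition from the API is below; the node name "localhost" is simply what appears in the log, and the polling interval is arbitrary.

# Sketch: poll a node's Ready condition, which is what the
# "Node became not ready" entry above is setting. Node name comes from
# the log; interval is arbitrary. Same client assumptions as before.
import time
from kubernetes import client, config

def watch_ready(node_name: str = "localhost", interval: float = 5.0):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    while True:
        node = v1.read_node(node_name)
        for cond in node.status.conditions or []:
            if cond.type == "Ready":
                print(f"{cond.last_heartbeat_time} Ready={cond.status} reason={cond.reason}")
        time.sleep(interval)

if __name__ == "__main__":
    watch_ready()
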
Sep 5 00:11:35.359485 containerd[1466]: time="2025-09-05T00:11:35.359417330Z" level=info msg="shim disconnected" id=bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2 namespace=k8s.io
Sep 5 00:11:35.359485 containerd[1466]: time="2025-09-05T00:11:35.359476272Z" level=warning msg="cleaning up after shim disconnected" id=bfd8480c767a225737f54140724eba9cae440d1a8d1c4b79ceae2753ee5b5aa2 namespace=k8s.io
Sep 5 00:11:35.359485 containerd[1466]: time="2025-09-05T00:11:35.359487583Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:11:36.251574 kubelet[2525]: E0905 00:11:36.251522 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:36.256917 containerd[1466]: time="2025-09-05T00:11:36.256837894Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 5 00:11:36.289483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383814534.mount: Deactivated successfully.
Sep 5 00:11:36.291803 containerd[1466]: time="2025-09-05T00:11:36.291758028Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13\""
Sep 5 00:11:36.292709 containerd[1466]: time="2025-09-05T00:11:36.292668452Z" level=info msg="StartContainer for \"871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13\""
Sep 5 00:11:36.329233 systemd[1]: Started cri-containerd-871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13.scope - libcontainer container 871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13.
Sep 5 00:11:36.363343 systemd[1]: cri-containerd-871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13.scope: Deactivated successfully.
Sep 5 00:11:36.365769 containerd[1466]: time="2025-09-05T00:11:36.365730793Z" level=info msg="StartContainer for \"871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13\" returns successfully"
Sep 5 00:11:36.390580 containerd[1466]: time="2025-09-05T00:11:36.390522205Z" level=info msg="shim disconnected" id=871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13 namespace=k8s.io
Sep 5 00:11:36.390580 containerd[1466]: time="2025-09-05T00:11:36.390573692Z" level=warning msg="cleaning up after shim disconnected" id=871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13 namespace=k8s.io
Sep 5 00:11:36.390580 containerd[1466]: time="2025-09-05T00:11:36.390582859Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:11:36.806542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-871e964c714c0d28f805a38cb96637665c69555e9532e5817be33a2899f2cd13-rootfs.mount: Deactivated successfully.
Sep 5 00:11:37.255795 kubelet[2525]: E0905 00:11:37.255629 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:37.259928 containerd[1466]: time="2025-09-05T00:11:37.259874001Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 5 00:11:37.275178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103087829.mount: Deactivated successfully.
Sep 5 00:11:37.276752 containerd[1466]: time="2025-09-05T00:11:37.276712685Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5\""
Sep 5 00:11:37.277304 containerd[1466]: time="2025-09-05T00:11:37.277221859Z" level=info msg="StartContainer for \"3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5\""
Sep 5 00:11:37.309195 systemd[1]: Started cri-containerd-3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5.scope - libcontainer container 3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5.
Sep 5 00:11:37.335963 systemd[1]: cri-containerd-3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5.scope: Deactivated successfully.
Sep 5 00:11:37.337478 containerd[1466]: time="2025-09-05T00:11:37.337443039Z" level=info msg="StartContainer for \"3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5\" returns successfully"
Sep 5 00:11:37.364215 containerd[1466]: time="2025-09-05T00:11:37.364147756Z" level=info msg="shim disconnected" id=3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5 namespace=k8s.io
Sep 5 00:11:37.364215 containerd[1466]: time="2025-09-05T00:11:37.364198461Z" level=warning msg="cleaning up after shim disconnected" id=3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5 namespace=k8s.io
Sep 5 00:11:37.364215 containerd[1466]: time="2025-09-05T00:11:37.364206838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:11:37.806609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3043c4cbd79a72d12a197a30e531440f515680bae87774def34e3bcdc58e4cf5-rootfs.mount: Deactivated successfully.
Sep 5 00:11:38.259633 kubelet[2525]: E0905 00:11:38.259497 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:38.267016 containerd[1466]: time="2025-09-05T00:11:38.266961429Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 5 00:11:38.281516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886895736.mount: Deactivated successfully.
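
Editor's note: the 64-character hexadecimal IDs in these entries (the sandbox c829386d… and the per-step containers such as 3043c4cb…) are containerd/CRI identifiers, not Kubernetes object names. To map them back to the pod, one can query the CRI endpoint directly; the sketch below shells out to crictl, which is assumed to be installed and configured on the node, and the exact flags used (`pods --name … -q`, `ps -a -p …`) should be verified against the local crictl version.

# Sketch: correlate the containerd IDs seen in this journal with the
# Kubernetes pod by querying the CRI endpoint through crictl.
# Assumes crictl is present and configured on the node; flag spelling
# is an assumption to check against `crictl --help`.
import subprocess

def crictl(*args: str) -> str:
    return subprocess.run(["crictl", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def containers_for_pod(pod_name: str = "cilium-grth9"):
    pod_ids = crictl("pods", "--name", pod_name, "-q").splitlines()
    for pod_id in pod_ids:
        print(f"sandbox {pod_id}")
        print(crictl("ps", "-a", "-p", pod_id))

if __name__ == "__main__":
    containers_for_pod()
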
Sep 5 00:11:38.285940 containerd[1466]: time="2025-09-05T00:11:38.285864857Z" level=info msg="CreateContainer within sandbox \"c829386d78d12a092d9286357d86b8193f3923d0e595842b3ef1ebd4c77418f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5d0ae1bc3eda3fcc54ab98447c44f4908e356112512c9e51a2494661504feb4b\""
Sep 5 00:11:38.286474 containerd[1466]: time="2025-09-05T00:11:38.286450125Z" level=info msg="StartContainer for \"5d0ae1bc3eda3fcc54ab98447c44f4908e356112512c9e51a2494661504feb4b\""
Sep 5 00:11:38.324210 systemd[1]: Started cri-containerd-5d0ae1bc3eda3fcc54ab98447c44f4908e356112512c9e51a2494661504feb4b.scope - libcontainer container 5d0ae1bc3eda3fcc54ab98447c44f4908e356112512c9e51a2494661504feb4b.
Sep 5 00:11:38.358470 containerd[1466]: time="2025-09-05T00:11:38.358420809Z" level=info msg="StartContainer for \"5d0ae1bc3eda3fcc54ab98447c44f4908e356112512c9e51a2494661504feb4b\" returns successfully"
Sep 5 00:11:38.805109 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 5 00:11:38.907467 kubelet[2525]: E0905 00:11:38.907395 2525 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-k5bmg" podUID="e77cbe95-df97-44dd-acd2-c32b5c8a772d"
Sep 5 00:11:39.264415 kubelet[2525]: E0905 00:11:39.264219 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:39.277481 kubelet[2525]: I0905 00:11:39.277396 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-grth9" podStartSLOduration=5.277359266 podStartE2EDuration="5.277359266s" podCreationTimestamp="2025-09-05 00:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:39.276741766 +0000 UTC m=+116.474728402" watchObservedRunningTime="2025-09-05 00:11:39.277359266 +0000 UTC m=+116.475345901"
Sep 5 00:11:40.908112 kubelet[2525]: E0905 00:11:40.908034 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:40.981415 kubelet[2525]: E0905 00:11:40.981327 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:41.950987 systemd-networkd[1409]: lxc_health: Link UP
Sep 5 00:11:41.966744 systemd-networkd[1409]: lxc_health: Gained carrier
Sep 5 00:11:42.894573 containerd[1466]: time="2025-09-05T00:11:42.894534560Z" level=info msg="StopPodSandbox for \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\""
Sep 5 00:11:42.895131 containerd[1466]: time="2025-09-05T00:11:42.895108776Z" level=info msg="TearDown network for sandbox \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" successfully"
Sep 5 00:11:42.895131 containerd[1466]: time="2025-09-05T00:11:42.895126940Z" level=info msg="StopPodSandbox for \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" returns successfully"
Sep 5 00:11:42.895481 containerd[1466]: time="2025-09-05T00:11:42.895459250Z" level=info msg="RemovePodSandbox for \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\""
Sep 5 00:11:42.895527 containerd[1466]: time="2025-09-05T00:11:42.895484968Z" level=info msg="Forcibly stopping sandbox \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\""
Sep 5 00:11:42.895553 containerd[1466]: time="2025-09-05T00:11:42.895535113Z" level=info msg="TearDown network for sandbox \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" successfully"
Sep 5 00:11:42.899012 containerd[1466]: time="2025-09-05T00:11:42.898987735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:11:42.899148 containerd[1466]: time="2025-09-05T00:11:42.899029535Z" level=info msg="RemovePodSandbox \"3ae4f820c3ee097fb7f3eba188594e266a6df7de8b258d97b6a5fa8dfa0dfef1\" returns successfully"
Sep 5 00:11:42.899396 containerd[1466]: time="2025-09-05T00:11:42.899372503Z" level=info msg="StopPodSandbox for \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\""
Sep 5 00:11:42.899455 containerd[1466]: time="2025-09-05T00:11:42.899440492Z" level=info msg="TearDown network for sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" successfully"
Sep 5 00:11:42.899488 containerd[1466]: time="2025-09-05T00:11:42.899452315Z" level=info msg="StopPodSandbox for \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" returns successfully"
Sep 5 00:11:42.899713 containerd[1466]: time="2025-09-05T00:11:42.899691457Z" level=info msg="RemovePodSandbox for \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\""
Sep 5 00:11:42.899781 containerd[1466]: time="2025-09-05T00:11:42.899717696Z" level=info msg="Forcibly stopping sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\""
Sep 5 00:11:42.899830 containerd[1466]: time="2025-09-05T00:11:42.899810953Z" level=info msg="TearDown network for sandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" successfully"
Sep 5 00:11:42.903688 containerd[1466]: time="2025-09-05T00:11:42.903587868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:11:42.903688 containerd[1466]: time="2025-09-05T00:11:42.903631602Z" level=info msg="RemovePodSandbox \"cf60c0f94a7649a522192cc5af419a99da695f6e075346f7393c49bd521ed1f3\" returns successfully"
Sep 5 00:11:42.983093 kubelet[2525]: E0905 00:11:42.982035 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:43.271744 kubelet[2525]: E0905 00:11:43.271592 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:43.432018 systemd-networkd[1409]: lxc_health: Gained IPv6LL
Sep 5 00:11:44.273477 kubelet[2525]: E0905 00:11:44.273422 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:11:49.840156 sshd[4380]: pam_unix(sshd:session): session closed for user core
Sep 5 00:11:49.844558 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:42596.service: Deactivated successfully.
Sep 5 00:11:49.846704 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:11:49.847411 systemd-logind[1457]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:11:49.848344 systemd-logind[1457]: Removed session 27.
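
Editor's note: the pod_startup_latency_tracker entry earlier reports podStartSLOduration of roughly 5.28 seconds for cilium-grth9 (created 00:11:34, observed running 00:11:39, with no image pulling recorded). A rough way to approximate that figure from the API objects is to compare the pod's creationTimestamp with the latest container start time; the sketch below does that, with the caveat that the kubelet's SLO metric is defined to exclude image-pull time, which this simplification ignores, so it is an upper bound rather than the exact value.

# Sketch: approximate the podStartSLOduration reported above by comparing
# the pod's creation time with the latest running-container start time.
# Ignores image-pull time (which the kubelet's metric excludes), so treat
# the result as an upper bound. Same client assumptions as earlier sketches.
from kubernetes import client, config

def approx_startup_duration(name: str = "cilium-grth9", namespace: str = "kube-system"):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name, namespace)
    created = pod.metadata.creation_timestamp
    started = [
        s.state.running.started_at
        for s in (pod.status.container_statuses or [])
        if s.state.running is not None
    ]
    if not started:
        print("pod has no running containers yet")
        return None
    duration = max(started) - created
    print(f"{name}: started {duration.total_seconds():.1f}s after creation")
    return duration

if __name__ == "__main__":
    approx_startup_duration()
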