Sep 12 17:07:06.056715 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:35:29 -00 2025 Sep 12 17:07:06.056743 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 17:07:06.056755 kernel: BIOS-provided physical RAM map: Sep 12 17:07:06.056762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:07:06.056769 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 12 17:07:06.056775 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 17:07:06.056783 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 17:07:06.056790 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 17:07:06.056796 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 12 17:07:06.056803 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 12 17:07:06.056810 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 12 17:07:06.056819 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 12 17:07:06.056828 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 12 17:07:06.056835 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 12 17:07:06.056846 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 12 17:07:06.056853 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 17:07:06.056863 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 12 17:07:06.056870 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 12 17:07:06.056877 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 12 17:07:06.056885 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 12 17:07:06.056892 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 12 17:07:06.056899 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 17:07:06.056906 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 12 17:07:06.056913 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 17:07:06.056920 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 12 17:07:06.056927 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 17:07:06.056935 kernel: NX (Execute Disable) protection: active Sep 12 17:07:06.056944 kernel: APIC: Static calls initialized Sep 12 17:07:06.056952 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 12 17:07:06.056959 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 12 17:07:06.056966 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 12 17:07:06.056973 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 12 17:07:06.056980 kernel: extended physical RAM map: Sep 12 17:07:06.056987 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:07:06.056995 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Sep 12 17:07:06.057002 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 17:07:06.057009 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 17:07:06.057016 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 17:07:06.057024 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 12 17:07:06.057034 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 12 17:07:06.057045 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Sep 12 17:07:06.057056 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Sep 12 17:07:06.057067 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Sep 12 17:07:06.057079 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Sep 12 17:07:06.057092 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Sep 12 17:07:06.057116 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 12 17:07:06.057129 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 12 17:07:06.057141 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 12 17:07:06.057154 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 12 17:07:06.057167 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 17:07:06.057185 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 12 17:07:06.057202 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 12 17:07:06.057219 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 12 17:07:06.057228 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 12 17:07:06.057240 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 12 17:07:06.057249 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 17:07:06.057257 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 12 17:07:06.057266 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 17:07:06.057278 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 12 17:07:06.057286 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 17:07:06.057302 kernel: efi: EFI v2.7 by EDK II Sep 12 17:07:06.057311 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Sep 12 17:07:06.057320 kernel: random: crng init done Sep 12 17:07:06.057329 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 12 17:07:06.057338 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 12 17:07:06.057351 kernel: secureboot: Secure boot disabled Sep 12 17:07:06.057369 kernel: SMBIOS 2.8 present. 
Sep 12 17:07:06.057379 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 12 17:07:06.057388 kernel: Hypervisor detected: KVM Sep 12 17:07:06.057416 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:07:06.057425 kernel: kvm-clock: using sched offset of 4221950442 cycles Sep 12 17:07:06.057434 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:07:06.057444 kernel: tsc: Detected 2794.748 MHz processor Sep 12 17:07:06.057454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:07:06.057464 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:07:06.057474 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 12 17:07:06.057486 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 17:07:06.057493 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:07:06.057501 kernel: Using GB pages for direct mapping Sep 12 17:07:06.057519 kernel: ACPI: Early table checksum verification disabled Sep 12 17:07:06.057528 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 12 17:07:06.057553 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 12 17:07:06.057561 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:07:06.057569 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:07:06.057576 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 12 17:07:06.057589 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:07:06.057596 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:07:06.057604 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:07:06.057612 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 17:07:06.057619 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 12 17:07:06.057627 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 12 17:07:06.057634 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 12 17:07:06.057642 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 12 17:07:06.057650 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 12 17:07:06.057660 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 12 17:07:06.057668 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 12 17:07:06.057675 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 12 17:07:06.057683 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 12 17:07:06.057690 kernel: No NUMA configuration found Sep 12 17:07:06.057698 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 12 17:07:06.057705 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Sep 12 17:07:06.057713 kernel: Zone ranges: Sep 12 17:07:06.057727 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 17:07:06.057743 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 12 17:07:06.057751 kernel: Normal empty Sep 12 17:07:06.057762 kernel: Movable zone start for each node Sep 12 17:07:06.057770 kernel: Early memory node ranges Sep 12 17:07:06.057777 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Sep 12 17:07:06.057791 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 12 17:07:06.057798 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 12 17:07:06.057809 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 12 17:07:06.057816 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 12 17:07:06.057824 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 12 17:07:06.057835 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Sep 12 17:07:06.057843 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Sep 12 17:07:06.057850 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 12 17:07:06.057858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:07:06.057866 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 17:07:06.057881 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 12 17:07:06.057892 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:07:06.057903 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 12 17:07:06.057917 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 12 17:07:06.057925 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 12 17:07:06.057936 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 12 17:07:06.057947 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 12 17:07:06.057955 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 17:07:06.057963 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:07:06.057971 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 17:07:06.057979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 17:07:06.057994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:07:06.058003 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:07:06.058011 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:07:06.058019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:07:06.058027 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:07:06.058044 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:07:06.058053 kernel: TSC deadline timer available Sep 12 17:07:06.058066 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 12 17:07:06.058075 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:07:06.058086 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 12 17:07:06.058094 kernel: kvm-guest: setup PV sched yield Sep 12 17:07:06.058102 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 12 17:07:06.058116 kernel: Booting paravirtualized kernel on KVM Sep 12 17:07:06.058124 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:07:06.058132 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 12 17:07:06.058140 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 12 17:07:06.058148 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 12 17:07:06.058156 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 12 17:07:06.058170 kernel: kvm-guest: PV spinlocks enabled Sep 12 17:07:06.058178 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:07:06.058187 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 17:07:06.058195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:07:06.058203 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:07:06.058214 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:07:06.058222 kernel: Fallback order for Node 0: 0 Sep 12 17:07:06.058230 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Sep 12 17:07:06.058238 kernel: Policy zone: DMA32 Sep 12 17:07:06.058248 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:07:06.058256 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22872K rodata, 43520K init, 1556K bss, 177824K reserved, 0K cma-reserved) Sep 12 17:07:06.058265 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 17:07:06.058273 kernel: ftrace: allocating 37948 entries in 149 pages Sep 12 17:07:06.058281 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:07:06.058289 kernel: Dynamic Preempt: voluntary Sep 12 17:07:06.058297 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:07:06.058305 kernel: rcu: RCU event tracing is enabled. Sep 12 17:07:06.058316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 17:07:06.058324 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:07:06.058332 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:07:06.058340 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:07:06.058348 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:07:06.058356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 17:07:06.058364 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 12 17:07:06.058372 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:07:06.058380 kernel: Console: colour dummy device 80x25 Sep 12 17:07:06.058389 kernel: printk: console [ttyS0] enabled Sep 12 17:07:06.058433 kernel: ACPI: Core revision 20230628 Sep 12 17:07:06.058441 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 17:07:06.058449 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:07:06.058457 kernel: x2apic enabled Sep 12 17:07:06.058465 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:07:06.058475 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 12 17:07:06.058484 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 12 17:07:06.058492 kernel: kvm-guest: setup PV IPIs Sep 12 17:07:06.058500 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 17:07:06.058511 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 12 17:07:06.058519 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 12 17:07:06.058527 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 17:07:06.058535 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 12 17:07:06.058550 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 12 17:07:06.058559 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:07:06.058567 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:07:06.058575 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:07:06.058583 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 12 17:07:06.058594 kernel: active return thunk: retbleed_return_thunk Sep 12 17:07:06.058602 kernel: RETBleed: Mitigation: untrained return thunk Sep 12 17:07:06.058610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 17:07:06.058618 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 17:07:06.058626 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 12 17:07:06.058635 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 12 17:07:06.058645 kernel: active return thunk: srso_return_thunk Sep 12 17:07:06.058653 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 12 17:07:06.058664 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:07:06.058672 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:07:06.058680 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:07:06.058687 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:07:06.058696 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 17:07:06.058703 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:07:06.058711 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:07:06.058719 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:07:06.058727 kernel: landlock: Up and running. Sep 12 17:07:06.058738 kernel: SELinux: Initializing. Sep 12 17:07:06.058746 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:07:06.058754 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:07:06.058762 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 12 17:07:06.058770 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:07:06.058778 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:07:06.058786 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 17:07:06.058794 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 12 17:07:06.058802 kernel: ... version: 0 Sep 12 17:07:06.058812 kernel: ... bit width: 48 Sep 12 17:07:06.058820 kernel: ... generic registers: 6 Sep 12 17:07:06.058828 kernel: ... value mask: 0000ffffffffffff Sep 12 17:07:06.058836 kernel: ... max period: 00007fffffffffff Sep 12 17:07:06.058844 kernel: ... fixed-purpose events: 0 Sep 12 17:07:06.058851 kernel: ... 
event mask: 000000000000003f Sep 12 17:07:06.058859 kernel: signal: max sigframe size: 1776 Sep 12 17:07:06.058867 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:07:06.058875 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:07:06.058885 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:07:06.058893 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:07:06.058901 kernel: .... node #0, CPUs: #1 #2 #3 Sep 12 17:07:06.058908 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 17:07:06.058916 kernel: smpboot: Max logical packages: 1 Sep 12 17:07:06.058924 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 12 17:07:06.058932 kernel: devtmpfs: initialized Sep 12 17:07:06.058940 kernel: x86/mm: Memory block size: 128MB Sep 12 17:07:06.058948 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 12 17:07:06.058958 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 12 17:07:06.058966 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 12 17:07:06.058975 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 12 17:07:06.058983 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Sep 12 17:07:06.058991 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 12 17:07:06.058999 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:07:06.059007 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 17:07:06.059015 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:07:06.059023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:07:06.059033 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:07:06.059041 kernel: audit: type=2000 audit(1757696824.661:1): state=initialized audit_enabled=0 res=1 Sep 12 17:07:06.059049 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:07:06.059057 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:07:06.059065 kernel: cpuidle: using governor menu Sep 12 17:07:06.059073 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:07:06.059081 kernel: dca service started, version 1.12.1 Sep 12 17:07:06.059089 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 12 17:07:06.059097 kernel: PCI: Using configuration type 1 for base access Sep 12 17:07:06.059108 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 17:07:06.059116 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:07:06.059124 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:07:06.059132 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:07:06.059140 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:07:06.059148 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:07:06.059156 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:07:06.059163 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:07:06.059171 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:07:06.059182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:07:06.059190 kernel: ACPI: Interpreter enabled Sep 12 17:07:06.059198 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 17:07:06.059205 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:07:06.059213 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:07:06.059221 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:07:06.059229 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 12 17:07:06.059237 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:07:06.059497 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:07:06.059675 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 12 17:07:06.059872 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 12 17:07:06.059885 kernel: PCI host bridge to bus 0000:00 Sep 12 17:07:06.060037 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:07:06.060181 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:07:06.060307 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:07:06.060461 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 12 17:07:06.060597 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 12 17:07:06.060724 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 12 17:07:06.060926 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:07:06.061149 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 12 17:07:06.061340 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 12 17:07:06.061551 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 12 17:07:06.061722 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 12 17:07:06.061888 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 12 17:07:06.062059 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 12 17:07:06.062212 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:07:06.062366 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 17:07:06.062519 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 12 17:07:06.062669 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 12 17:07:06.062810 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Sep 12 17:07:06.062962 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 12 17:07:06.063097 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 12 17:07:06.063229 kernel: pci 0000:00:03.0: reg 0x14: [mem 
0xc1042000-0xc1042fff] Sep 12 17:07:06.063360 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Sep 12 17:07:06.063563 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 17:07:06.063703 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 12 17:07:06.063834 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 12 17:07:06.063964 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 12 17:07:06.064096 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 12 17:07:06.064247 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 12 17:07:06.064382 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 12 17:07:06.064554 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 12 17:07:06.064694 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 12 17:07:06.064828 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 12 17:07:06.064979 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 12 17:07:06.065112 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 12 17:07:06.065123 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:07:06.065131 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:07:06.065139 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:07:06.065151 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:07:06.065160 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 12 17:07:06.065168 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 12 17:07:06.065176 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 12 17:07:06.065184 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 12 17:07:06.065192 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 12 17:07:06.065200 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 12 17:07:06.065208 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 12 17:07:06.065216 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 12 17:07:06.065227 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 12 17:07:06.065234 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 12 17:07:06.065242 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 12 17:07:06.065250 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 12 17:07:06.065258 kernel: iommu: Default domain type: Translated Sep 12 17:07:06.065266 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:07:06.065274 kernel: efivars: Registered efivars operations Sep 12 17:07:06.065282 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:07:06.065290 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:07:06.065301 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 12 17:07:06.065309 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 12 17:07:06.065316 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Sep 12 17:07:06.065324 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Sep 12 17:07:06.065332 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 12 17:07:06.065340 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 12 17:07:06.065348 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Sep 12 17:07:06.065356 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 
12 17:07:06.065508 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 12 17:07:06.065655 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 12 17:07:06.065786 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:07:06.065797 kernel: vgaarb: loaded Sep 12 17:07:06.065805 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 17:07:06.065813 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 17:07:06.065821 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:07:06.065830 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:07:06.065838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:07:06.065850 kernel: pnp: PnP ACPI init Sep 12 17:07:06.066035 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 12 17:07:06.066050 kernel: pnp: PnP ACPI: found 6 devices Sep 12 17:07:06.066060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:07:06.066069 kernel: NET: Registered PF_INET protocol family Sep 12 17:07:06.066101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:07:06.066114 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:07:06.066124 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:07:06.066136 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:07:06.066146 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:07:06.066155 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:07:06.066165 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:07:06.066175 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:07:06.066185 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:07:06.066194 kernel: NET: Registered PF_XDP protocol family Sep 12 17:07:06.066339 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 12 17:07:06.066511 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 12 17:07:06.066664 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:07:06.066794 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:07:06.066933 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:07:06.067070 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 12 17:07:06.067198 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 12 17:07:06.067325 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 12 17:07:06.067337 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:07:06.067347 kernel: Initialise system trusted keyrings Sep 12 17:07:06.067362 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:07:06.067372 kernel: Key type asymmetric registered Sep 12 17:07:06.067382 kernel: Asymmetric key parser 'x509' registered Sep 12 17:07:06.067391 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 17:07:06.067428 kernel: io scheduler mq-deadline registered Sep 12 17:07:06.067438 kernel: io scheduler kyber registered Sep 12 17:07:06.067447 kernel: io scheduler bfq registered Sep 12 17:07:06.067457 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Sep 12 17:07:06.067467 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 12 17:07:06.067481 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 17:07:06.067493 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 17:07:06.067503 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:07:06.067513 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:07:06.067523 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:07:06.067533 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:07:06.067555 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:07:06.067721 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 12 17:07:06.067735 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:07:06.067981 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 17:07:06.068115 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:07:05 UTC (1757696825) Sep 12 17:07:06.068247 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 12 17:07:06.068260 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 17:07:06.068273 kernel: efifb: probing for efifb Sep 12 17:07:06.068283 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 12 17:07:06.068293 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 12 17:07:06.068303 kernel: efifb: scrolling: redraw Sep 12 17:07:06.068312 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:07:06.068322 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 17:07:06.068332 kernel: fb0: EFI VGA frame buffer device Sep 12 17:07:06.068341 kernel: pstore: Using crash dump compression: deflate Sep 12 17:07:06.068351 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 17:07:06.068364 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:07:06.068373 kernel: Segment Routing with IPv6 Sep 12 17:07:06.068383 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:07:06.068392 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:07:06.068508 kernel: Key type dns_resolver registered Sep 12 17:07:06.068518 kernel: IPI shorthand broadcast: enabled Sep 12 17:07:06.068528 kernel: sched_clock: Marking stable (1235002609, 148753534)->(1414377237, -30621094) Sep 12 17:07:06.068545 kernel: registered taskstats version 1 Sep 12 17:07:06.068555 kernel: Loading compiled-in X.509 certificates Sep 12 17:07:06.068565 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d1d9e065fdbec39026aa56a07626d6d91ab4fce4' Sep 12 17:07:06.068579 kernel: Key type .fscrypt registered Sep 12 17:07:06.068588 kernel: Key type fscrypt-provisioning registered Sep 12 17:07:06.068598 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:07:06.068608 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:07:06.068617 kernel: ima: No architecture policies found Sep 12 17:07:06.068627 kernel: clk: Disabling unused clocks Sep 12 17:07:06.068637 kernel: Freeing unused kernel image (initmem) memory: 43520K Sep 12 17:07:06.068647 kernel: Write protecting the kernel read-only data: 38912k Sep 12 17:07:06.068660 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Sep 12 17:07:06.068669 kernel: Run /init as init process Sep 12 17:07:06.068679 kernel: with arguments: Sep 12 17:07:06.068688 kernel: /init Sep 12 17:07:06.068698 kernel: with environment: Sep 12 17:07:06.068707 kernel: HOME=/ Sep 12 17:07:06.068717 kernel: TERM=linux Sep 12 17:07:06.068726 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:07:06.068737 systemd[1]: Successfully made /usr/ read-only. Sep 12 17:07:06.068754 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:07:06.068765 systemd[1]: Detected virtualization kvm. Sep 12 17:07:06.068775 systemd[1]: Detected architecture x86-64. Sep 12 17:07:06.068784 systemd[1]: Running in initrd. Sep 12 17:07:06.068794 systemd[1]: No hostname configured, using default hostname. Sep 12 17:07:06.068805 systemd[1]: Hostname set to . Sep 12 17:07:06.068815 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:07:06.068828 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:07:06.068838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:07:06.068848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:07:06.068859 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:07:06.068870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:07:06.068880 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:07:06.068892 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:07:06.068907 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:07:06.068917 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:07:06.068927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:07:06.068938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:07:06.068948 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:07:06.068958 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:07:06.068968 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:07:06.068978 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:07:06.068989 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:07:06.069002 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:07:06.069012 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 12 17:07:06.069023 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:07:06.069033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:07:06.069043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:07:06.069053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:07:06.069064 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:07:06.069074 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:07:06.069087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:07:06.069097 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:07:06.069108 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:07:06.069118 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:07:06.069128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:07:06.069139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:06.069149 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:07:06.069159 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:07:06.069173 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:07:06.069215 systemd-journald[193]: Collecting audit messages is disabled. Sep 12 17:07:06.069243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:07:06.069253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:06.069264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:07:06.069274 systemd-journald[193]: Journal started Sep 12 17:07:06.069298 systemd-journald[193]: Runtime Journal (/run/log/journal/ac170c8a5fb345169b66014408fc140c) is 6M, max 48.2M, 42.2M free. Sep 12 17:07:06.079958 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:07:06.058460 systemd-modules-load[194]: Inserted module 'overlay' Sep 12 17:07:06.082465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:07:06.084433 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:07:06.090611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:07:06.091973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:07:06.102056 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:07:06.104104 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 12 17:07:06.106544 kernel: Bridge firewalling registered Sep 12 17:07:06.104430 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:07:06.106885 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:07:06.109384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:07:06.123555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:07:06.125172 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 12 17:07:06.135113 dracut-cmdline[224]: dracut-dracut-053 Sep 12 17:07:06.137156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:07:06.139138 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 17:07:06.152551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:07:06.189221 systemd-resolved[248]: Positive Trust Anchors: Sep 12 17:07:06.189243 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:07:06.189273 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:07:06.195217 systemd-resolved[248]: Defaulting to hostname 'linux'. Sep 12 17:07:06.196677 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:07:06.198518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:07:06.224422 kernel: SCSI subsystem initialized Sep 12 17:07:06.235419 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:07:06.246434 kernel: iscsi: registered transport (tcp) Sep 12 17:07:06.268447 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:07:06.268515 kernel: QLogic iSCSI HBA Driver Sep 12 17:07:06.321068 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:07:06.329624 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:07:06.355424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:07:06.355455 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:07:06.357432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:07:06.399421 kernel: raid6: avx2x4 gen() 30391 MB/s Sep 12 17:07:06.416422 kernel: raid6: avx2x2 gen() 31423 MB/s Sep 12 17:07:06.433466 kernel: raid6: avx2x1 gen() 24868 MB/s Sep 12 17:07:06.433486 kernel: raid6: using algorithm avx2x2 gen() 31423 MB/s Sep 12 17:07:06.451460 kernel: raid6: .... xor() 19761 MB/s, rmw enabled Sep 12 17:07:06.451480 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:07:06.472427 kernel: xor: automatically using best checksumming function avx Sep 12 17:07:06.628435 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:07:06.642168 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:07:06.653589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:07:06.669519 systemd-udevd[415]: Using default interface naming scheme 'v255'. Sep 12 17:07:06.676029 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 12 17:07:06.681568 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:07:06.697211 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Sep 12 17:07:06.731091 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:07:06.738638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:07:06.809914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:07:06.817579 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:07:06.829147 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:07:06.832190 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:07:06.835035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:07:06.837413 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:07:06.842422 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 17:07:06.844577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:07:06.849860 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:07:06.856762 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:07:06.856804 kernel: GPT:9289727 != 19775487 Sep 12 17:07:06.856817 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:07:06.857433 kernel: GPT:9289727 != 19775487 Sep 12 17:07:06.858903 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:07:06.858926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:07:06.863468 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:07:06.875446 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:07:06.875474 kernel: libata version 3.00 loaded. Sep 12 17:07:06.885649 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 17:07:06.885861 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 17:07:06.889612 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 17:07:06.889918 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 17:07:06.894414 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:07:06.895447 kernel: AES CTR mode by8 optimization enabled Sep 12 17:07:06.901272 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:07:06.907846 kernel: BTRFS: device fsid 8328a8c6-e42c-42bb-93d2-f755d7523d53 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470) Sep 12 17:07:06.907890 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (472) Sep 12 17:07:06.902354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:07:06.905475 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:07:06.906630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:07:06.906821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:06.909869 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:06.918424 kernel: scsi host0: ahci Sep 12 17:07:06.919721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:07:06.928578 kernel: scsi host1: ahci Sep 12 17:07:06.928829 kernel: scsi host2: ahci Sep 12 17:07:06.929002 kernel: scsi host3: ahci Sep 12 17:07:06.929162 kernel: scsi host4: ahci Sep 12 17:07:06.929317 kernel: scsi host5: ahci Sep 12 17:07:06.929544 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 12 17:07:06.929561 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 12 17:07:06.929574 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 12 17:07:06.929594 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 12 17:07:06.929608 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 12 17:07:06.929621 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 12 17:07:06.946143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:07:06.972665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:07:06.984356 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:07:06.993945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:07:06.996425 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:07:07.011535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:07:07.013772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:07:07.013832 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:07.017262 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:07.020010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:07.034549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:07.040574 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:07:07.043236 disk-uuid[555]: Primary Header is updated. Sep 12 17:07:07.043236 disk-uuid[555]: Secondary Entries is updated. Sep 12 17:07:07.043236 disk-uuid[555]: Secondary Header is updated. Sep 12 17:07:07.047424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:07:07.052427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:07:07.071178 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:07:07.237429 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 17:07:07.239424 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 17:07:07.245433 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:07:07.245495 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:07:07.246425 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 17:07:07.246441 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:07:07.247433 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 17:07:07.248627 kernel: ata3.00: applying bridge limits Sep 12 17:07:07.248645 kernel: ata3.00: configured for UDMA/100 Sep 12 17:07:07.249430 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:07:07.293899 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 17:07:07.294173 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:07:07.312454 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:07:08.053466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:07:08.053593 disk-uuid[561]: The operation has completed successfully. Sep 12 17:07:08.090733 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:07:08.090866 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:07:08.139722 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:07:08.145309 sh[597]: Success Sep 12 17:07:08.158507 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 17:07:08.198646 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:07:08.218203 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:07:08.222976 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:07:08.233031 kernel: BTRFS info (device dm-0): first mount of filesystem 8328a8c6-e42c-42bb-93d2-f755d7523d53 Sep 12 17:07:08.233080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:07:08.233093 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:07:08.233995 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:07:08.234695 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:07:08.240134 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:07:08.242608 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:07:08.248566 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:07:08.250711 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:07:08.270801 kernel: BTRFS info (device vda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:07:08.270837 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:07:08.270853 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:07:08.274427 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:07:08.280452 kernel: BTRFS info (device vda6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:07:08.286435 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:07:08.291577 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 17:07:08.410836 ignition[681]: Ignition 2.20.0 Sep 12 17:07:08.410850 ignition[681]: Stage: fetch-offline Sep 12 17:07:08.410890 ignition[681]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:08.410900 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:07:08.411021 ignition[681]: parsed url from cmdline: "" Sep 12 17:07:08.411025 ignition[681]: no config URL provided Sep 12 17:07:08.411031 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:07:08.417718 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:07:08.411040 ignition[681]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:07:08.411067 ignition[681]: op(1): [started] loading QEMU firmware config module Sep 12 17:07:08.411073 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:07:08.419509 ignition[681]: op(1): [finished] loading QEMU firmware config module Sep 12 17:07:08.419540 ignition[681]: QEMU firmware config was not found. Ignoring... Sep 12 17:07:08.424682 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:07:08.462892 ignition[681]: parsing config with SHA512: fd4dc45052ce5e102571ef9dcac0cebd7b6a8b281d96ca843aacb417c2a6ff4ec45a249bc9fe5b5e838926734429a9c586e2d3c1ae25b75cbd8c2ea8ad9ce0ec Sep 12 17:07:08.469092 unknown[681]: fetched base config from "system" Sep 12 17:07:08.469109 unknown[681]: fetched user config from "qemu" Sep 12 17:07:08.474447 ignition[681]: fetch-offline: fetch-offline passed Sep 12 17:07:08.474700 ignition[681]: Ignition finished successfully Sep 12 17:07:08.478310 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:07:08.480176 systemd-networkd[782]: lo: Link UP Sep 12 17:07:08.480181 systemd-networkd[782]: lo: Gained carrier Sep 12 17:07:08.482101 systemd-networkd[782]: Enumeration completed Sep 12 17:07:08.482261 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:07:08.482537 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:08.482544 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:07:08.483522 systemd-networkd[782]: eth0: Link UP Sep 12 17:07:08.483527 systemd-networkd[782]: eth0: Gained carrier Sep 12 17:07:08.483534 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:08.484537 systemd[1]: Reached target network.target - Network. Sep 12 17:07:08.486279 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:07:08.499658 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:07:08.505501 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:07:08.516470 ignition[787]: Ignition 2.20.0 Sep 12 17:07:08.516491 ignition[787]: Stage: kargs Sep 12 17:07:08.516655 ignition[787]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:08.516667 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:07:08.517499 ignition[787]: kargs: kargs passed Sep 12 17:07:08.517550 ignition[787]: Ignition finished successfully Sep 12 17:07:08.524438 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
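
The fetch-offline stage above probes QEMU's firmware config device (op(1): modprobe qemu_fw_cfg) for a user-supplied config and reports that none was found. Below is a rough illustrative Python sketch of that probe; the fw_cfg entry name and sysfs path are assumptions based on how the qemu_fw_cfg module commonly exposes entries, not values taken from this log.

    #!/usr/bin/env python3
    """Illustrative sketch: look for a guest config exposed via QEMU fw_cfg."""
    import json
    import os

    # Assumed entry name and sysfs layout, for illustration only.
    FW_CFG_RAW = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

    def read_qemu_config(path=FW_CFG_RAW):
        """Return the config passed via fw_cfg, or None if QEMU did not provide one."""
        if not os.path.exists(path):
            # Corresponds to the "QEMU firmware config was not found. Ignoring..." case above.
            return None
        with open(path, "rb") as f:
            return json.loads(f.read())

    if __name__ == "__main__":
        cfg = read_qemu_config()
        if cfg is None:
            print("no user config from fw_cfg")
        else:
            print("config version:", cfg.get("ignition", {}).get("version"))
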
Sep 12 17:07:08.541595 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:07:08.614156 ignition[796]: Ignition 2.20.0 Sep 12 17:07:08.614168 ignition[796]: Stage: disks Sep 12 17:07:08.614408 ignition[796]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:08.614422 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:07:08.615348 ignition[796]: disks: disks passed Sep 12 17:07:08.615416 ignition[796]: Ignition finished successfully Sep 12 17:07:08.619362 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:07:08.620654 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:07:08.622432 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:07:08.623560 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:07:08.623873 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:07:08.624190 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:07:08.639551 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:07:08.654788 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:07:08.661221 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:07:08.669638 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:07:08.759434 kernel: EXT4-fs (vda9): mounted filesystem 5378802a-8117-4ea8-949a-cd38005ba44a r/w with ordered data mode. Quota mode: none. Sep 12 17:07:08.760689 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:07:08.762164 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:07:08.779554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:07:08.781749 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:07:08.782078 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:07:08.782123 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:07:08.782151 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:07:08.793474 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:07:08.796735 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:07:08.799440 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (814) Sep 12 17:07:08.801425 kernel: BTRFS info (device vda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:07:08.801502 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:07:08.801514 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:07:08.804418 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:07:08.806110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:07:08.838953 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:07:08.843688 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:07:08.848593 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:07:08.852765 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:07:08.950230 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:07:08.956587 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:07:08.959313 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:07:08.967490 kernel: BTRFS info (device vda6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:07:08.989125 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:07:08.999127 ignition[927]: INFO : Ignition 2.20.0 Sep 12 17:07:08.999127 ignition[927]: INFO : Stage: mount Sep 12 17:07:09.001027 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:09.001027 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:07:09.003621 ignition[927]: INFO : mount: mount passed Sep 12 17:07:09.004374 ignition[927]: INFO : Ignition finished successfully Sep 12 17:07:09.007371 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:07:09.018586 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:07:09.232601 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:07:09.244774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:07:09.253331 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (941) Sep 12 17:07:09.253363 kernel: BTRFS info (device vda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:07:09.253387 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:07:09.254212 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:07:09.257435 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:07:09.259357 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:07:09.286146 ignition[958]: INFO : Ignition 2.20.0 Sep 12 17:07:09.286146 ignition[958]: INFO : Stage: files Sep 12 17:07:09.288185 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:09.288185 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:07:09.288185 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:07:09.291568 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:07:09.291568 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:07:09.291568 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:07:09.291568 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:07:09.291568 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:07:09.291209 unknown[958]: wrote ssh authorized keys file for user: core Sep 12 17:07:09.298911 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:07:09.298911 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 17:07:09.334447 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:07:09.570355 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:07:09.570355 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:07:09.574223 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 17:07:09.688340 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:07:09.853631 systemd-networkd[782]: eth0: Gained IPv6LL Sep 12 17:07:09.997115 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:07:09.997115 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:07:10.000805 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 17:07:10.510936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:07:11.735650 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:07:11.735650 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 17:07:11.739513 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:07:11.773490 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:07:11.781721 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:07:11.783514 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:07:11.783514 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:07:11.783514 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:07:11.783514 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:07:11.783514 ignition[958]: INFO : 
files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:07:11.783514 ignition[958]: INFO : files: files passed Sep 12 17:07:11.783514 ignition[958]: INFO : Ignition finished successfully Sep 12 17:07:11.794475 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:07:11.807547 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:07:11.810208 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:07:11.812101 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:07:11.812240 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:07:11.840232 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:07:11.843357 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:07:11.843357 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:07:11.847952 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:07:11.846356 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:07:11.848811 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:07:11.859585 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:07:11.898048 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:07:11.898188 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:07:11.901478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:07:11.902850 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:07:11.905072 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:07:11.906324 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:07:11.929719 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:07:11.941556 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:07:11.964112 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:07:11.964295 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:07:11.965001 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:07:11.965467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:07:11.965636 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:07:11.966361 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:07:12.000916 ignition[1012]: INFO : Ignition 2.20.0 Sep 12 17:07:12.000916 ignition[1012]: INFO : Stage: umount Sep 12 17:07:12.000916 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:07:12.000916 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:07:11.966674 systemd[1]: Stopped target basic.target - Basic System. 
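
The files stage that just finished wrote out the archives, unit files and presets declared in the parsed config (the helm tarball, the cilium CLI, the kubernetes sysext image, prepare-helm.service enabled, coreos-metadata.service disabled). As an illustration only, a hand-written approximation of the kind of Ignition v3-style config fragment that could declare one such file and those unit presets is sketched below in Python; the field names follow the commonly documented v3 schema and the unit contents are made up, so treat the exact shape as an assumption rather than the config this machine actually received.

    #!/usr/bin/env python3
    """Illustrative sketch: build an Ignition v3-style config fragment as a dict."""
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    # File and source URL as seen in the files-stage log above.
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
                }
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    # Unit body is invented here purely for illustration.
                    "contents": "[Unit]\nDescription=Unpack helm\n\n[Install]\nWantedBy=multi-user.target\n",
                },
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))
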
Sep 12 17:07:12.007789 ignition[1012]: INFO : umount: umount passed Sep 12 17:07:12.007789 ignition[1012]: INFO : Ignition finished successfully Sep 12 17:07:11.966974 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:07:11.967285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:07:11.967764 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:07:11.968070 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:07:11.968370 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:07:11.968707 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:07:11.969000 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:07:11.969303 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:07:11.969748 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:07:11.969868 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:07:11.970556 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:07:11.970867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:07:11.971133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:07:11.971232 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:07:11.971621 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:07:11.971734 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:07:11.972376 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:07:11.972519 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:07:11.972931 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:07:11.973159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:07:11.973291 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:07:11.973665 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:07:11.973956 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:07:11.974266 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:07:11.974365 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:07:11.974765 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:07:11.974851 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:07:11.975231 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:07:11.975346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:07:11.975688 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:07:11.975796 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:07:11.977104 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:07:11.978113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:07:11.978634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:07:11.978744 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:07:11.979160 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Sep 12 17:07:11.979259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:07:11.984615 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:07:11.984729 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:07:12.002833 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:07:12.002966 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:07:12.006755 systemd[1]: Stopped target network.target - Network. Sep 12 17:07:12.007794 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:07:12.007862 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:07:12.009503 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:07:12.009557 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:07:12.011591 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:07:12.011644 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:07:12.013612 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:07:12.013664 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:07:12.016155 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:07:12.018225 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:07:12.021523 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:07:12.027056 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:07:12.027206 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:07:12.032442 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:07:12.032768 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:07:12.032921 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:07:12.036937 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:07:12.038014 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:07:12.038107 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:07:12.047512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:07:12.049257 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:07:12.049327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:07:12.051548 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:07:12.051601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:07:12.054215 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:07:12.054267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:07:12.056447 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:07:12.056500 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:07:12.058940 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:07:12.062235 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:07:12.062308 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 17:07:12.071795 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:07:12.071940 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:07:12.076238 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:07:12.076443 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:07:12.078266 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:07:12.078316 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:07:12.080241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:07:12.080283 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:07:12.082450 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:07:12.082503 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:07:12.084756 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:07:12.084806 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:07:12.086912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:07:12.086966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:07:12.097592 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:07:12.098874 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:07:12.098942 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:07:12.101445 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:07:12.101504 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:07:12.103900 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:07:12.103952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:07:12.105413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:07:12.105466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:12.108915 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:07:12.108984 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:07:12.109361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:07:12.109504 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:07:12.587034 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:07:12.587190 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:07:12.589755 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:07:12.591764 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:07:12.591828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:07:12.602725 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:07:12.611549 systemd[1]: Switching root. Sep 12 17:07:12.649100 systemd-journald[193]: Journal stopped Sep 12 17:07:14.254543 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Sep 12 17:07:14.254646 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:07:14.254697 kernel: SELinux: policy capability open_perms=1 Sep 12 17:07:14.254733 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:07:14.254753 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:07:14.254782 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:07:14.254799 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:07:14.254816 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:07:14.254849 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:07:14.254882 kernel: audit: type=1403 audit(1757696833.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:07:14.254903 systemd[1]: Successfully loaded SELinux policy in 48.251ms. Sep 12 17:07:14.254940 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.405ms. Sep 12 17:07:14.254960 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:07:14.254978 systemd[1]: Detected virtualization kvm. Sep 12 17:07:14.255000 systemd[1]: Detected architecture x86-64. Sep 12 17:07:14.255017 systemd[1]: Detected first boot. Sep 12 17:07:14.255033 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:07:14.255049 zram_generator::config[1061]: No configuration found. Sep 12 17:07:14.255066 kernel: Guest personality initialized and is inactive Sep 12 17:07:14.255083 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 17:07:14.255098 kernel: Initialized host personality Sep 12 17:07:14.255114 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:07:14.255129 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:07:14.255151 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:07:14.255168 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:07:14.255188 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:07:14.255205 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:07:14.255223 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:07:14.255240 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:07:14.255256 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:07:14.255280 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:07:14.255306 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:07:14.255332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:07:14.255349 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:07:14.255365 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:07:14.255381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:07:14.255427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 17:07:14.255446 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:07:14.255463 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:07:14.255479 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:07:14.255501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:07:14.255519 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:07:14.255537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:07:14.255554 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:07:14.255571 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:07:14.255590 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:07:14.255611 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:07:14.255634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:07:14.255652 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:07:14.255670 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:07:14.255687 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:07:14.255705 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:07:14.255720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:07:14.255734 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:07:14.255749 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:07:14.255763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:07:14.255777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:07:14.255797 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:07:14.255814 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:07:14.255830 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:07:14.255846 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:07:14.255860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:14.255875 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:07:14.255889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:07:14.255905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:07:14.255923 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:07:14.255977 systemd[1]: Reached target machines.target - Containers. Sep 12 17:07:14.255998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:07:14.256015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:07:14.256032 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 12 17:07:14.256050 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:07:14.256067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:07:14.256085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:07:14.256102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:07:14.256130 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:07:14.256147 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:07:14.256164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:07:14.256180 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:07:14.256197 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:07:14.256213 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:07:14.256229 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:07:14.256246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:07:14.256268 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:07:14.256285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:07:14.256308 kernel: fuse: init (API version 7.39) Sep 12 17:07:14.256334 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:07:14.256352 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:07:14.256370 kernel: loop: module loaded Sep 12 17:07:14.256395 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:07:14.256429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:07:14.256447 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:07:14.256465 systemd[1]: Stopped verity-setup.service. Sep 12 17:07:14.256481 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:14.256498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:07:14.256515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:07:14.256532 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:07:14.256554 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:07:14.256584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:07:14.256620 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:07:14.256651 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:07:14.256700 systemd-journald[1139]: Collecting audit messages is disabled. Sep 12 17:07:14.256742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:07:14.256760 systemd-journald[1139]: Journal started Sep 12 17:07:14.256789 systemd-journald[1139]: Runtime Journal (/run/log/journal/ac170c8a5fb345169b66014408fc140c) is 6M, max 48.2M, 42.2M free. 
Sep 12 17:07:13.995156 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:07:14.007696 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:07:14.008220 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:07:14.259426 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:07:14.259458 kernel: ACPI: bus type drm_connector registered Sep 12 17:07:14.262668 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:07:14.262993 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:07:14.264709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:07:14.264991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:07:14.267101 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:07:14.267539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:07:14.269244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:07:14.269661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:07:14.271352 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:07:14.271834 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:07:14.273544 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:07:14.273840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:07:14.275313 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:07:14.276889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:07:14.278733 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:07:14.280631 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:07:14.297854 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:07:14.305494 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:07:14.307975 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:07:14.309099 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:07:14.309134 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:07:14.311215 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:07:14.313695 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:07:14.318571 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:07:14.320861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:07:14.323228 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:07:14.330593 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:07:14.332172 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 12 17:07:14.339562 systemd-journald[1139]: Time spent on flushing to /var/log/journal/ac170c8a5fb345169b66014408fc140c is 35.168ms for 1058 entries. Sep 12 17:07:14.339562 systemd-journald[1139]: System Journal (/var/log/journal/ac170c8a5fb345169b66014408fc140c) is 8M, max 195.6M, 187.6M free. Sep 12 17:07:14.432843 systemd-journald[1139]: Received client request to flush runtime journal. Sep 12 17:07:14.432895 kernel: loop0: detected capacity change from 0 to 229808 Sep 12 17:07:14.337371 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:07:14.338826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:07:14.380809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:07:14.388654 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:07:14.393626 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:07:14.397558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:07:14.400003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:07:14.401588 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:07:14.404144 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:07:14.413163 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:07:14.419112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:07:14.437617 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:07:14.440409 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:07:14.446265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:07:14.445024 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:07:14.453282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:07:14.462709 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:07:14.470505 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 12 17:07:14.470524 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Sep 12 17:07:14.476522 kernel: loop1: detected capacity change from 0 to 147912 Sep 12 17:07:14.474345 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:07:14.478550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:07:14.492766 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:07:14.521465 kernel: loop2: detected capacity change from 0 to 138176 Sep 12 17:07:14.528881 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:07:14.567862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:07:14.589448 kernel: loop3: detected capacity change from 0 to 229808 Sep 12 17:07:14.603509 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Sep 12 17:07:14.603545 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. 
Sep 12 17:07:14.607513 kernel: loop4: detected capacity change from 0 to 147912 Sep 12 17:07:14.613336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:07:14.625427 kernel: loop5: detected capacity change from 0 to 138176 Sep 12 17:07:14.648826 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:07:14.649584 (sd-merge)[1208]: Merged extensions into '/usr'. Sep 12 17:07:14.672796 systemd[1]: Reload requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:07:14.672823 systemd[1]: Reloading... Sep 12 17:07:14.785667 zram_generator::config[1238]: No configuration found. Sep 12 17:07:15.023872 ldconfig[1176]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:07:15.051972 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:15.124867 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:07:15.125285 systemd[1]: Reloading finished in 451 ms. Sep 12 17:07:15.150016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:07:15.151785 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:07:15.197629 systemd[1]: Starting ensure-sysext.service... Sep 12 17:07:15.200379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:07:15.214540 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:07:15.214559 systemd[1]: Reloading... Sep 12 17:07:15.264247 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:07:15.264966 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:07:15.266040 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:07:15.266432 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Sep 12 17:07:15.269621 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Sep 12 17:07:15.313484 zram_generator::config[1304]: No configuration found. Sep 12 17:07:15.317086 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:07:15.317101 systemd-tmpfiles[1276]: Skipping /boot Sep 12 17:07:15.347108 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:07:15.347128 systemd-tmpfiles[1276]: Skipping /boot Sep 12 17:07:15.464663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:15.535550 systemd[1]: Reloading finished in 320 ms. Sep 12 17:07:15.551595 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:07:15.571678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:07:15.593784 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:07:15.597767 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
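
The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr. A small illustrative Python sketch of how such extension images could be enumerated follows; the search directories listed are the commonly documented sysext locations and are an assumption here, not taken from this log.

    #!/usr/bin/env python3
    """Illustrative sketch: list candidate system extension images."""
    import os

    # Commonly documented sysext search locations (assumed for illustration).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        found = []
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                # Raw disk images (e.g. kubernetes.raw) and plain directories both count.
                if name.endswith(".raw") or os.path.isdir(os.path.join(d, name)):
                    found.append(os.path.join(d, name))
        return found

    if __name__ == "__main__":
        for path in list_extension_images():
            print(path)
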
Sep 12 17:07:15.600983 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:07:15.606599 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:07:15.611772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:07:15.616637 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:07:15.620889 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:15.621080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:07:15.622593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:07:15.630128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:07:15.632921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:07:15.634252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:07:15.634380 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:07:15.637655 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:07:15.638777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:15.640087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:07:15.640387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:07:15.647019 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:07:15.647516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:07:15.655887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:07:15.656177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:07:15.658878 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:07:15.669271 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:07:15.670177 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Sep 12 17:07:15.674026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:15.674237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:07:15.685038 augenrules[1378]: No rules Sep 12 17:07:15.688834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:07:15.692103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:07:15.696753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:07:15.699596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 17:07:15.699775 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:07:15.701815 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:07:15.702972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:15.704261 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:07:15.707174 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:07:15.707581 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:07:15.708987 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:07:15.711198 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:07:15.713061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:07:15.713377 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:07:15.716335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:07:15.717266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:07:15.733888 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:07:15.734185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:07:15.744345 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:07:15.750552 systemd[1]: Finished ensure-sysext.service. Sep 12 17:07:15.760062 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:15.769630 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:07:15.771813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:07:15.775616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:07:15.779616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:07:15.792609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:07:15.796199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:07:15.796251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:07:15.799577 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:07:15.807593 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:07:15.808870 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:07:15.808909 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:07:15.809737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 17:07:15.809989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:07:15.811803 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:07:15.812039 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:07:15.813816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:07:15.814040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:07:15.824120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:07:15.824198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:07:15.832919 augenrules[1418]: /sbin/augenrules: No change Sep 12 17:07:15.855426 augenrules[1446]: No rules Sep 12 17:07:15.865436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1393) Sep 12 17:07:15.866217 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:07:15.866541 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:07:15.884966 systemd-resolved[1348]: Positive Trust Anchors: Sep 12 17:07:15.885483 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:07:15.885582 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:07:15.891064 systemd-resolved[1348]: Defaulting to hostname 'linux'. Sep 12 17:07:15.895004 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:07:15.896576 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:07:15.909791 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:07:15.912953 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:07:15.958437 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:07:15.986599 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 17:07:15.986999 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:07:15.988469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:07:15.990947 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 17:07:15.991326 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:07:15.997580 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:07:16.006692 systemd-networkd[1428]: lo: Link UP Sep 12 17:07:16.006706 systemd-networkd[1428]: lo: Gained carrier Sep 12 17:07:16.007864 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:07:16.008031 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 12 17:07:16.012525 systemd-networkd[1428]: Enumeration completed Sep 12 17:07:16.013525 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:07:16.013708 systemd[1]: Reached target network.target - Network. Sep 12 17:07:16.016890 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:16.016906 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:07:16.017657 systemd-networkd[1428]: eth0: Link UP Sep 12 17:07:16.017668 systemd-networkd[1428]: eth0: Gained carrier Sep 12 17:07:16.017682 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:07:16.019438 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:07:16.022583 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:07:16.026853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:07:16.028645 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:07:16.032473 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:07:16.033616 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection. Sep 12 17:07:16.034639 systemd-timesyncd[1429]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:07:16.034684 systemd-timesyncd[1429]: Initial clock synchronization to Fri 2025-09-12 17:07:16.277693 UTC. Sep 12 17:07:16.110787 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:07:16.134767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:16.141903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:07:16.142315 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:16.154458 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:07:16.158818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:07:16.168438 kernel: kvm_amd: TSC scaling supported Sep 12 17:07:16.168517 kernel: kvm_amd: Nested Virtualization enabled Sep 12 17:07:16.168563 kernel: kvm_amd: Nested Paging enabled Sep 12 17:07:16.168577 kernel: kvm_amd: LBR virtualization supported Sep 12 17:07:16.168594 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 17:07:16.169559 kernel: kvm_amd: Virtual GIF supported Sep 12 17:07:16.192495 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:07:16.227717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:07:16.239103 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:07:16.251785 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:07:16.262324 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:07:16.300280 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:07:16.302001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
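The systemd-networkd entries above show eth0 coming up, matching zz-default.network, and acquiring 10.0.0.50/16 with gateway 10.0.0.1 over DHCPv4, after which systemd-timesyncd reaches 10.0.0.1:123. A minimal Python sketch for pulling those fields back out of the journal message, assuming only the message format shown above (the regex and group names are illustrative, not any systemd API):

import re

# Journal message as emitted by systemd-networkd above (copied from the log).
line = ("systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.50/16, "
        "gateway 10.0.0.1 acquired from 10.0.0.1")

# Illustrative pattern for the message format shown above.
m = re.search(r"(?P<iface>\S+): DHCPv4 address (?P<addr>\S+), "
              r"gateway (?P<gw>\S+) acquired from (?P<server>\S+)", line)
if m:
    print(m["iface"], m["addr"], m["gw"], m["server"])
    # -> eth0 10.0.0.50/16 10.0.0.1 10.0.0.1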
Sep 12 17:07:16.303135 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:07:16.304344 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:07:16.305627 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:07:16.307116 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:07:16.308315 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:07:16.309543 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:07:16.310755 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:07:16.310795 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:07:16.311688 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:07:16.313806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:07:16.317592 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:07:16.322862 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:07:16.324532 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:07:16.325866 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:07:16.335442 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:07:16.337176 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:07:16.340177 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:07:16.342174 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:07:16.343547 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:07:16.344657 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:07:16.344795 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:07:16.344833 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:07:16.346729 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:07:16.349429 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:07:16.351582 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:07:16.353116 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:07:16.357674 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:07:16.360116 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:07:16.364175 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:07:16.367475 jq[1485]: false Sep 12 17:07:16.372635 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:07:16.381671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:07:16.385388 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 12 17:07:16.391670 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:07:16.393351 extend-filesystems[1486]: Found loop3 Sep 12 17:07:16.393351 extend-filesystems[1486]: Found loop4 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found loop5 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found sr0 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda1 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda2 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda3 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found usr Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda4 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda6 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda7 Sep 12 17:07:16.410558 extend-filesystems[1486]: Found vda9 Sep 12 17:07:16.410558 extend-filesystems[1486]: Checking size of /dev/vda9 Sep 12 17:07:16.483149 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:07:16.483193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1400) Sep 12 17:07:16.483210 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:07:16.397220 dbus-daemon[1484]: [system] SELinux support is enabled Sep 12 17:07:16.395000 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:07:16.484969 extend-filesystems[1486]: Resized partition /dev/vda9 Sep 12 17:07:16.397485 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:07:16.487627 extend-filesystems[1508]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:07:16.487627 extend-filesystems[1508]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:07:16.487627 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:07:16.487627 extend-filesystems[1508]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:07:16.399282 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:07:16.495324 extend-filesystems[1486]: Resized filesystem in /dev/vda9 Sep 12 17:07:16.403598 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:07:16.497964 update_engine[1499]: I20250912 17:07:16.424623 1499 main.cc:92] Flatcar Update Engine starting Sep 12 17:07:16.497964 update_engine[1499]: I20250912 17:07:16.435286 1499 update_check_scheduler.cc:74] Next update check in 9m40s Sep 12 17:07:16.407696 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:07:16.498432 jq[1501]: true Sep 12 17:07:16.418654 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:07:16.422975 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:07:16.499112 tar[1509]: linux-amd64/LICENSE Sep 12 17:07:16.499112 tar[1509]: linux-amd64/helm Sep 12 17:07:16.423604 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:07:16.424132 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:07:16.500108 jq[1510]: true Sep 12 17:07:16.425272 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:07:16.435128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
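The extend-filesystems/resize2fs output above grows /dev/vda9 online from 553472 to 1864699 blocks at a 4 KiB block size, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. A quick Python check of that arithmetic:

# Figures taken from the EXT4 resize messages above (4 KiB blocks).
block_size = 4096
old_blocks, new_blocks = 553472, 1864699

def gib(blocks):
    return blocks * block_size / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB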
Sep 12 17:07:16.435532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:07:16.471736 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:07:16.479951 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:07:16.485620 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:07:16.486536 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:07:16.494974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:07:16.495001 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:07:16.499503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:07:16.499523 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:07:16.502524 systemd-logind[1497]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:07:16.502558 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:07:16.504013 systemd-logind[1497]: New seat seat0. Sep 12 17:07:16.509618 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:07:16.515323 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:07:16.566162 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:07:16.577526 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:07:16.580569 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:07:16.583715 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:07:16.592367 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:07:16.608957 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:07:16.616909 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:07:16.626527 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:07:16.626972 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:07:16.650724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:07:16.669393 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:07:16.679738 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:07:16.687465 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:07:16.693858 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:07:16.849819 containerd[1511]: time="2025-09-12T17:07:16.849716865Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 17:07:16.875629 containerd[1511]: time="2025-09-12T17:07:16.875462326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.877868 containerd[1511]: time="2025-09-12T17:07:16.877829967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:16.877868 containerd[1511]: time="2025-09-12T17:07:16.877862869Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:07:16.877926 containerd[1511]: time="2025-09-12T17:07:16.877882245Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:07:16.878181 containerd[1511]: time="2025-09-12T17:07:16.878156209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:07:16.878212 containerd[1511]: time="2025-09-12T17:07:16.878179192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878304 containerd[1511]: time="2025-09-12T17:07:16.878280121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878326 containerd[1511]: time="2025-09-12T17:07:16.878299498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878685 containerd[1511]: time="2025-09-12T17:07:16.878651337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878685 containerd[1511]: time="2025-09-12T17:07:16.878675603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878737 containerd[1511]: time="2025-09-12T17:07:16.878693186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878737 containerd[1511]: time="2025-09-12T17:07:16.878704908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.878859 containerd[1511]: time="2025-09-12T17:07:16.878838428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.879185 containerd[1511]: time="2025-09-12T17:07:16.879155944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:07:16.879434 containerd[1511]: time="2025-09-12T17:07:16.879386416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:07:16.879434 containerd[1511]: time="2025-09-12T17:07:16.879422334Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:07:16.879584 containerd[1511]: time="2025-09-12T17:07:16.879562607Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 17:07:16.879669 containerd[1511]: time="2025-09-12T17:07:16.879649500Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:07:16.886180 containerd[1511]: time="2025-09-12T17:07:16.886140315Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:07:16.886238 containerd[1511]: time="2025-09-12T17:07:16.886190328Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:07:16.886238 containerd[1511]: time="2025-09-12T17:07:16.886224493Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:07:16.886285 containerd[1511]: time="2025-09-12T17:07:16.886244250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:07:16.886285 containerd[1511]: time="2025-09-12T17:07:16.886271170Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:07:16.886477 containerd[1511]: time="2025-09-12T17:07:16.886448893Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:07:16.886766 containerd[1511]: time="2025-09-12T17:07:16.886716295Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:07:16.886939 containerd[1511]: time="2025-09-12T17:07:16.886911161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:07:16.886972 containerd[1511]: time="2025-09-12T17:07:16.886935196Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:07:16.886972 containerd[1511]: time="2025-09-12T17:07:16.886954171Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:07:16.887011 containerd[1511]: time="2025-09-12T17:07:16.886972185Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887011 containerd[1511]: time="2025-09-12T17:07:16.886989658Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887011 containerd[1511]: time="2025-09-12T17:07:16.887004626Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887083 containerd[1511]: time="2025-09-12T17:07:16.887020706Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887083 containerd[1511]: time="2025-09-12T17:07:16.887037157Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887083 containerd[1511]: time="2025-09-12T17:07:16.887052285Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887148 containerd[1511]: time="2025-09-12T17:07:16.887082722Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:07:16.887148 containerd[1511]: time="2025-09-12T17:07:16.887098041Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 17:07:16.887148 containerd[1511]: time="2025-09-12T17:07:16.887122527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887148 containerd[1511]: time="2025-09-12T17:07:16.887139499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887223 containerd[1511]: time="2025-09-12T17:07:16.887155379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887223 containerd[1511]: time="2025-09-12T17:07:16.887171499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887223 containerd[1511]: time="2025-09-12T17:07:16.887185816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887223 containerd[1511]: time="2025-09-12T17:07:16.887212796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887315 containerd[1511]: time="2025-09-12T17:07:16.887228526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887315 containerd[1511]: time="2025-09-12T17:07:16.887244295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887315 containerd[1511]: time="2025-09-12T17:07:16.887271887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887315 containerd[1511]: time="2025-09-12T17:07:16.887290783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887315 containerd[1511]: time="2025-09-12T17:07:16.887304959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887415 containerd[1511]: time="2025-09-12T17:07:16.887319446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887415 containerd[1511]: time="2025-09-12T17:07:16.887333723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887415 containerd[1511]: time="2025-09-12T17:07:16.887350895Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:07:16.887415 containerd[1511]: time="2025-09-12T17:07:16.887374149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887415 containerd[1511]: time="2025-09-12T17:07:16.887389207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887516 containerd[1511]: time="2025-09-12T17:07:16.887417961Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:07:16.887516 containerd[1511]: time="2025-09-12T17:07:16.887464318Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:07:16.887516 containerd[1511]: time="2025-09-12T17:07:16.887481340Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:07:16.887516 containerd[1511]: time="2025-09-12T17:07:16.887495016Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:07:16.887516 containerd[1511]: time="2025-09-12T17:07:16.887509252Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:07:16.887609 containerd[1511]: time="2025-09-12T17:07:16.887521255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.887609 containerd[1511]: time="2025-09-12T17:07:16.887536634Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:07:16.887609 containerd[1511]: time="2025-09-12T17:07:16.887549438Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:07:16.887609 containerd[1511]: time="2025-09-12T17:07:16.887561821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:07:16.888037 containerd[1511]: time="2025-09-12T17:07:16.887954838Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:07:16.888037 containerd[1511]: time="2025-09-12T17:07:16.888031201Z" level=info msg="Connect containerd service" Sep 12 17:07:16.888285 containerd[1511]: time="2025-09-12T17:07:16.888083570Z" level=info msg="using legacy CRI server" Sep 12 17:07:16.888285 containerd[1511]: time="2025-09-12T17:07:16.888093779Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:07:16.888336 containerd[1511]: time="2025-09-12T17:07:16.888288384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:07:16.889314 containerd[1511]: time="2025-09-12T17:07:16.889277569Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:07:16.889494 containerd[1511]: time="2025-09-12T17:07:16.889450163Z" level=info msg="Start subscribing containerd event" Sep 12 17:07:16.893138 containerd[1511]: time="2025-09-12T17:07:16.893106331Z" level=info msg="Start recovering state" Sep 12 17:07:16.893780 containerd[1511]: time="2025-09-12T17:07:16.893206990Z" level=info msg="Start event monitor" Sep 12 17:07:16.893780 containerd[1511]: time="2025-09-12T17:07:16.893225284Z" level=info msg="Start snapshots syncer" Sep 12 17:07:16.893780 containerd[1511]: time="2025-09-12T17:07:16.893235573Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:07:16.893780 containerd[1511]: time="2025-09-12T17:07:16.893244500Z" level=info msg="Start streaming server" Sep 12 17:07:16.893780 containerd[1511]: time="2025-09-12T17:07:16.893378832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:07:16.893780 containerd[1511]: time="2025-09-12T17:07:16.893481915Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:07:16.893677 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:07:16.894048 containerd[1511]: time="2025-09-12T17:07:16.894031386Z" level=info msg="containerd successfully booted in 0.045782s" Sep 12 17:07:17.061207 tar[1509]: linux-amd64/README.md Sep 12 17:07:17.088971 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:07:17.923479 systemd-networkd[1428]: eth0: Gained IPv6LL Sep 12 17:07:17.927094 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:07:17.929160 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:07:17.947923 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:07:17.951055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:17.953592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:07:17.976159 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:07:17.976503 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:07:17.978233 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:07:17.983269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
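containerd comes up above with the overlayfs snapshotter, the runc runtime (SystemdCgroup:true) and pause:3.8 as the sandbox image, but warns that no network config was found in /etc/cni/net.d; that is expected this early in first boot, before any CNI plugin has installed a config. A small sketch of the same check, assuming the usual *.conf/*.conflist file names (an assumption, not something this log states):

import glob
import os

# Path taken from the containerd warning above; the file extensions checked
# here are an assumption about how CNI configs are usually named.
conf_dir = "/etc/cni/net.d"
configs = sorted(glob.glob(os.path.join(conf_dir, "*.conf"))
                 + glob.glob(os.path.join(conf_dir, "*.conflist")))
if configs:
    print("CNI configured:", configs)
else:
    print(f"no network config found in {conf_dir}")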
Sep 12 17:07:19.094009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:19.095970 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:07:19.097310 systemd[1]: Startup finished in 1.370s (kernel) + 7.618s (initrd) + 5.776s (userspace) = 14.765s. Sep 12 17:07:19.100666 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:07:19.188126 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:07:19.189505 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:57972.service - OpenSSH per-connection server daemon (10.0.0.1:57972). Sep 12 17:07:19.249392 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 57972 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:19.250323 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:19.258371 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:07:19.266652 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:07:19.274085 systemd-logind[1497]: New session 1 of user core. Sep 12 17:07:19.288192 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:07:19.297791 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:07:19.300976 (systemd)[1615]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:07:19.303724 systemd-logind[1497]: New session c1 of user core. Sep 12 17:07:19.518904 systemd[1615]: Queued start job for default target default.target. Sep 12 17:07:19.528860 systemd[1615]: Created slice app.slice - User Application Slice. Sep 12 17:07:19.528888 systemd[1615]: Reached target paths.target - Paths. Sep 12 17:07:19.528933 systemd[1615]: Reached target timers.target - Timers. Sep 12 17:07:19.530859 systemd[1615]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:07:19.547803 systemd[1615]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:07:19.548030 systemd[1615]: Reached target sockets.target - Sockets. Sep 12 17:07:19.548079 systemd[1615]: Reached target basic.target - Basic System. Sep 12 17:07:19.548125 systemd[1615]: Reached target default.target - Main User Target. Sep 12 17:07:19.548161 systemd[1615]: Startup finished in 235ms. Sep 12 17:07:19.549196 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:07:19.558863 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:07:19.669512 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:57982.service - OpenSSH per-connection server daemon (10.0.0.1:57982). Sep 12 17:07:19.763580 kubelet[1599]: E0912 17:07:19.763494 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:07:19.768873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:07:19.769145 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
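The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet, which is normal on a node that has not been joined to a cluster (kubeadm or an equivalent writes that file later). A minimal sketch of the failing precondition, using the path verbatim from the error; the standalone check is only illustrative, not how kubelet itself gates startup:

import os

# Path copied verbatim from the kubelet error above.
config = "/var/lib/kubelet/config.yaml"
if not os.path.exists(config):
    print(f"kubelet will exit: open {config}: no such file or directory")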
Sep 12 17:07:19.769452 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 57982 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:19.769698 systemd[1]: kubelet.service: Consumed 1.617s CPU time, 269.8M memory peak. Sep 12 17:07:19.771527 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:19.776742 systemd-logind[1497]: New session 2 of user core. Sep 12 17:07:19.788633 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:07:19.845057 sshd[1630]: Connection closed by 10.0.0.1 port 57982 Sep 12 17:07:19.845535 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:19.865577 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:57982.service: Deactivated successfully. Sep 12 17:07:19.867721 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:07:19.869545 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:07:19.871141 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:47198.service - OpenSSH per-connection server daemon (10.0.0.1:47198). Sep 12 17:07:19.872277 systemd-logind[1497]: Removed session 2. Sep 12 17:07:19.915019 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 47198 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:19.917131 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:19.922277 systemd-logind[1497]: New session 3 of user core. Sep 12 17:07:19.931546 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:07:19.983613 sshd[1638]: Connection closed by 10.0.0.1 port 47198 Sep 12 17:07:19.984033 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:19.992441 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:47198.service: Deactivated successfully. Sep 12 17:07:19.994525 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:07:19.996222 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:07:20.007820 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:47206.service - OpenSSH per-connection server daemon (10.0.0.1:47206). Sep 12 17:07:20.008925 systemd-logind[1497]: Removed session 3. Sep 12 17:07:20.044590 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 47206 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:20.046069 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:20.050834 systemd-logind[1497]: New session 4 of user core. Sep 12 17:07:20.060609 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:07:20.117058 sshd[1646]: Connection closed by 10.0.0.1 port 47206 Sep 12 17:07:20.117472 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:20.133244 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:47206.service: Deactivated successfully. Sep 12 17:07:20.135256 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:07:20.136884 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:07:20.151681 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:47210.service - OpenSSH per-connection server daemon (10.0.0.1:47210). Sep 12 17:07:20.152740 systemd-logind[1497]: Removed session 4. 
Sep 12 17:07:20.189158 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 47210 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:20.190835 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:20.195352 systemd-logind[1497]: New session 5 of user core. Sep 12 17:07:20.205546 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:07:20.266035 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:07:20.266402 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:07:20.287853 sudo[1655]: pam_unix(sudo:session): session closed for user root Sep 12 17:07:20.289801 sshd[1654]: Connection closed by 10.0.0.1 port 47210 Sep 12 17:07:20.290243 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:20.304725 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:47210.service: Deactivated successfully. Sep 12 17:07:20.307092 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:07:20.308748 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:07:20.313682 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:47220.service - OpenSSH per-connection server daemon (10.0.0.1:47220). Sep 12 17:07:20.314607 systemd-logind[1497]: Removed session 5. Sep 12 17:07:20.354673 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 47220 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:20.356618 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:20.361158 systemd-logind[1497]: New session 6 of user core. Sep 12 17:07:20.370607 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:07:20.425768 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:07:20.426136 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:07:20.429992 sudo[1665]: pam_unix(sudo:session): session closed for user root Sep 12 17:07:20.436897 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:07:20.437242 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:07:20.458752 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:07:20.489827 augenrules[1687]: No rules Sep 12 17:07:20.491752 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:07:20.492086 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:07:20.493228 sudo[1664]: pam_unix(sudo:session): session closed for user root Sep 12 17:07:20.494690 sshd[1663]: Connection closed by 10.0.0.1 port 47220 Sep 12 17:07:20.495044 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:20.511576 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:47220.service: Deactivated successfully. Sep 12 17:07:20.513598 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:07:20.514973 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:07:20.520718 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:47222.service - OpenSSH per-connection server daemon (10.0.0.1:47222). Sep 12 17:07:20.523328 systemd-logind[1497]: Removed session 6. 
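Sessions 5 and 6 above run a series of privileged commands via sudo (setenforce 1, removing the SELinux/default audit rule files, restarting audit-rules). A hedged sketch of extracting those commands from journal lines of the format shown above; the pattern and field names are illustrative only:

import re

# Two of the sudo records above, in the format journald shows them.
entries = [
    "sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1",
    "sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules",
]

# Illustrative pattern for the "user : PWD=... ; USER=... ; COMMAND=..." shape.
pat = re.compile(r"sudo\[\d+\]: (\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)$")
for entry in entries:
    user, pwd, runas, cmd = pat.search(entry).groups()
    print(f"{user} ran as {runas}: {cmd}")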
Sep 12 17:07:20.558731 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 47222 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:20.560280 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:20.564860 systemd-logind[1497]: New session 7 of user core. Sep 12 17:07:20.574645 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:07:20.629402 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:07:20.629770 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:07:20.938903 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:07:20.938919 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:07:21.230731 dockerd[1718]: time="2025-09-12T17:07:21.230564047Z" level=info msg="Starting up" Sep 12 17:07:22.000964 dockerd[1718]: time="2025-09-12T17:07:22.000911683Z" level=info msg="Loading containers: start." Sep 12 17:07:23.219468 kernel: Initializing XFRM netlink socket Sep 12 17:07:23.322132 systemd-networkd[1428]: docker0: Link UP Sep 12 17:07:23.370075 dockerd[1718]: time="2025-09-12T17:07:23.369996512Z" level=info msg="Loading containers: done." Sep 12 17:07:23.394082 dockerd[1718]: time="2025-09-12T17:07:23.394002321Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:07:23.394321 dockerd[1718]: time="2025-09-12T17:07:23.394133905Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 17:07:23.394402 dockerd[1718]: time="2025-09-12T17:07:23.394332160Z" level=info msg="Daemon has completed initialization" Sep 12 17:07:23.442337 dockerd[1718]: time="2025-09-12T17:07:23.442259854Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:07:23.442694 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:07:24.396007 containerd[1511]: time="2025-09-12T17:07:24.395930857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:07:25.186341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918747406.mount: Deactivated successfully. 
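dockerd reports above that its API is listening on /run/docker.sock. A minimal stdlib sketch of talking to that socket, assuming the standard Docker Engine HTTP API is served there (GET /version); the helper class is illustrative:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix socket; the path comes from the dockerd line above."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")              # standard Engine API endpoint
reply = json.loads(conn.getresponse().read())
print(reply.get("Version"))                  # e.g. 27.3.1, matching the log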
Sep 12 17:07:27.087676 containerd[1511]: time="2025-09-12T17:07:27.087475254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:27.088284 containerd[1511]: time="2025-09-12T17:07:27.088221541Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 17:07:27.089764 containerd[1511]: time="2025-09-12T17:07:27.089731468Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:27.093549 containerd[1511]: time="2025-09-12T17:07:27.093484117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:27.094687 containerd[1511]: time="2025-09-12T17:07:27.094637734Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.698622888s" Sep 12 17:07:27.094687 containerd[1511]: time="2025-09-12T17:07:27.094682623Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 17:07:27.095542 containerd[1511]: time="2025-09-12T17:07:27.095486213Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:07:28.825436 containerd[1511]: time="2025-09-12T17:07:28.825361220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:28.826202 containerd[1511]: time="2025-09-12T17:07:28.826151452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 17:07:28.827445 containerd[1511]: time="2025-09-12T17:07:28.827386916Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:28.830448 containerd[1511]: time="2025-09-12T17:07:28.830374336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:28.831470 containerd[1511]: time="2025-09-12T17:07:28.831421663Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.735873574s" Sep 12 17:07:28.831470 containerd[1511]: time="2025-09-12T17:07:28.831458578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 17:07:28.832022 
containerd[1511]: time="2025-09-12T17:07:28.831997759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:07:30.019840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:07:30.029853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:30.285657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:30.302152 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:07:30.370691 kubelet[1984]: E0912 17:07:30.370614 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:07:30.377944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:07:30.378166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:07:30.378607 systemd[1]: kubelet.service: Consumed 335ms CPU time, 109.2M memory peak. Sep 12 17:07:31.454328 containerd[1511]: time="2025-09-12T17:07:31.454267619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:31.455388 containerd[1511]: time="2025-09-12T17:07:31.455336651Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 17:07:31.457113 containerd[1511]: time="2025-09-12T17:07:31.457053233Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:31.461424 containerd[1511]: time="2025-09-12T17:07:31.460536315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:31.464148 containerd[1511]: time="2025-09-12T17:07:31.464095523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.632063834s" Sep 12 17:07:31.464148 containerd[1511]: time="2025-09-12T17:07:31.464136591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 17:07:31.464770 containerd[1511]: time="2025-09-12T17:07:31.464722770Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:07:35.162656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546822827.mount: Deactivated successfully. 
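From the figures above, the kube-apiserver pull read 30114893 bytes in 2.698622888 s, a little over 10 MiB/s from the registry. The arithmetic in Python:

# Figures copied from the kube-apiserver pull above.
bytes_read = 30_114_893        # "active requests=0, bytes read=30114893"
elapsed_s = 2.698622888        # "... in 2.698622888s"

rate = bytes_read / elapsed_s
print(f"{rate / 2**20:.1f} MiB/s")   # ~10.6 MiB/s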
Sep 12 17:07:35.733213 containerd[1511]: time="2025-09-12T17:07:35.733159314Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 17:07:35.733774 containerd[1511]: time="2025-09-12T17:07:35.733315675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:35.734650 containerd[1511]: time="2025-09-12T17:07:35.734559138Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:35.736831 containerd[1511]: time="2025-09-12T17:07:35.736790957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:35.737604 containerd[1511]: time="2025-09-12T17:07:35.737566493Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 4.272814506s" Sep 12 17:07:35.737604 containerd[1511]: time="2025-09-12T17:07:35.737600188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 17:07:35.738217 containerd[1511]: time="2025-09-12T17:07:35.738169633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:07:36.970568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165488889.mount: Deactivated successfully. 
Sep 12 17:07:39.798519 containerd[1511]: time="2025-09-12T17:07:39.798418259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:39.799290 containerd[1511]: time="2025-09-12T17:07:39.799233075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 17:07:39.801006 containerd[1511]: time="2025-09-12T17:07:39.800960779Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:39.805693 containerd[1511]: time="2025-09-12T17:07:39.805649839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:39.808107 containerd[1511]: time="2025-09-12T17:07:39.807995614Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.069768972s" Sep 12 17:07:39.808207 containerd[1511]: time="2025-09-12T17:07:39.808121081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 17:07:39.809094 containerd[1511]: time="2025-09-12T17:07:39.809062356Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:07:40.469438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:07:40.483569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:40.683316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:40.687822 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:07:40.956341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471907607.mount: Deactivated successfully. 
Sep 12 17:07:40.998275 containerd[1511]: time="2025-09-12T17:07:40.998210475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:40.999033 containerd[1511]: time="2025-09-12T17:07:40.998969625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:07:41.000166 containerd[1511]: time="2025-09-12T17:07:41.000102400Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:41.002538 containerd[1511]: time="2025-09-12T17:07:41.002497569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:41.003161 containerd[1511]: time="2025-09-12T17:07:41.003135531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.193974334s" Sep 12 17:07:41.003228 containerd[1511]: time="2025-09-12T17:07:41.003163909Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:07:41.003642 containerd[1511]: time="2025-09-12T17:07:41.003616784Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:07:41.017420 kubelet[2064]: E0912 17:07:41.017337 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:07:41.022068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:07:41.022266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:07:41.022656 systemd[1]: kubelet.service: Consumed 322ms CPU time, 112.1M memory peak. Sep 12 17:07:41.622616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380901558.mount: Deactivated successfully. 
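[Note] The run.go error above is the usual kubeadm chicken-and-egg: the kubelet is pointed at /var/lib/kubelet/config.yaml, but that file is normally only written by kubeadm init / kubeadm join, so the service crash-loops until then. A tiny preflight sketch that mirrors the same existence check (path taken from the error; adjust if the kubelet's --config points elsewhere):

    import sys
    from pathlib import Path

    # Same existence check the kubelet effectively fails above.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def preflight() -> int:
        if not KUBELET_CONFIG.is_file():
            print(f"kubelet would exit: {KUBELET_CONFIG} not found "
                  "(expected to be written by kubeadm init/join)", file=sys.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(preflight())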
Sep 12 17:07:43.707553 containerd[1511]: time="2025-09-12T17:07:43.707471107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:43.708258 containerd[1511]: time="2025-09-12T17:07:43.708175200Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 17:07:43.709559 containerd[1511]: time="2025-09-12T17:07:43.709511284Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:43.713853 containerd[1511]: time="2025-09-12T17:07:43.713816161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:07:43.715436 containerd[1511]: time="2025-09-12T17:07:43.715376135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.711727694s" Sep 12 17:07:43.715436 containerd[1511]: time="2025-09-12T17:07:43.715429225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 17:07:46.351414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:46.351590 systemd[1]: kubelet.service: Consumed 322ms CPU time, 112.1M memory peak. Sep 12 17:07:46.371619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:46.399707 systemd[1]: Reload requested from client PID 2163 ('systemctl') (unit session-7.scope)... Sep 12 17:07:46.399751 systemd[1]: Reloading... Sep 12 17:07:46.499440 zram_generator::config[2210]: No configuration found. Sep 12 17:07:46.747607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:46.853632 systemd[1]: Reloading finished in 453 ms. Sep 12 17:07:46.902503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:46.905758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:46.907445 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:07:46.907853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:46.907916 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.3M memory peak. Sep 12 17:07:46.910199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:47.120024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:47.125056 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:07:47.182468 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
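[Note] The four pulls above report both the bytes fetched and the wall-clock time (kube-proxy ~31.9 MB in ~4.3 s, coredns ~20.9 MB in ~4.1 s, pause ~0.3 MB in ~1.2 s, etcd ~58.9 MB in ~2.7 s). A minimal sketch for turning such "Pulled image" lines into a rough throughput figure; the sample line is a trimmed copy of the etcd entry above, and the regex assumes the journalctl formatting shown here:

    import re

    # Trimmed sample taken from the journal excerpt above; in practice feed
    # `journalctl -u containerd` output line by line.
    LINE = ('time="2025-09-12T17:07:43.715376135Z" level=info '
            'msg="Pulled image \\"registry.k8s.io/etcd:3.5.21-0\\" with image id ... '
            'size \\"58938593\\" in 2.711727694s"')

    PATTERN = re.compile(
        r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".*size \\?"(?P<size>\d+)\\?" in (?P<secs>[0-9.]+)s'
    )

    def pull_rate(line: str):
        """Return (image ref, MiB/s) for a containerd 'Pulled image' line, or None."""
        m = PATTERN.search(line)
        if not m:
            return None
        mib = int(m.group("size")) / (1024 * 1024)
        return m.group("ref"), mib / float(m.group("secs"))

    print(pull_rate(LINE))   # ('registry.k8s.io/etcd:3.5.21-0', ~20.7 MiB/s)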
Sep 12 17:07:47.182468 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:07:47.182468 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:07:47.183230 kubelet[2257]: I0912 17:07:47.182490 2257 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:07:47.590182 kubelet[2257]: I0912 17:07:47.590112 2257 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:07:47.590182 kubelet[2257]: I0912 17:07:47.590154 2257 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:07:47.590431 kubelet[2257]: I0912 17:07:47.590383 2257 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:07:47.780864 kubelet[2257]: I0912 17:07:47.780804 2257 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:07:47.814089 kubelet[2257]: E0912 17:07:47.814052 2257 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:07:47.895666 kubelet[2257]: E0912 17:07:47.895493 2257 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:07:47.895666 kubelet[2257]: I0912 17:07:47.895537 2257 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:07:47.902025 kubelet[2257]: I0912 17:07:47.901963 2257 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:07:47.902281 kubelet[2257]: I0912 17:07:47.902246 2257 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:07:47.902482 kubelet[2257]: I0912 17:07:47.902271 2257 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:07:47.902482 kubelet[2257]: I0912 17:07:47.902480 2257 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:07:47.902742 kubelet[2257]: I0912 17:07:47.902490 2257 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:07:47.916494 kubelet[2257]: I0912 17:07:47.916432 2257 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:07:47.970391 kubelet[2257]: I0912 17:07:47.970321 2257 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:07:47.970391 kubelet[2257]: I0912 17:07:47.970345 2257 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:07:47.970391 kubelet[2257]: I0912 17:07:47.970370 2257 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:07:47.970391 kubelet[2257]: I0912 17:07:47.970415 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:07:48.005504 kubelet[2257]: I0912 17:07:48.005454 2257 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 17:07:48.005934 kubelet[2257]: I0912 17:07:48.005907 2257 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:07:48.006931 kubelet[2257]: W0912 17:07:48.006716 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 17:07:48.006931 kubelet[2257]: E0912 17:07:48.006811 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:07:48.006931 kubelet[2257]: E0912 17:07:48.006857 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:07:48.011604 kubelet[2257]: I0912 17:07:48.011576 2257 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:07:48.011727 kubelet[2257]: I0912 17:07:48.011697 2257 server.go:1289] "Started kubelet" Sep 12 17:07:48.014097 kubelet[2257]: I0912 17:07:48.014071 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:07:48.025189 kubelet[2257]: I0912 17:07:48.024985 2257 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:07:48.030239 kubelet[2257]: I0912 17:07:48.029455 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:07:48.030239 kubelet[2257]: I0912 17:07:48.030014 2257 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:07:48.031090 kubelet[2257]: I0912 17:07:48.031058 2257 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:07:48.031978 kubelet[2257]: E0912 17:07:48.031948 2257 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:07:48.032060 kubelet[2257]: I0912 17:07:48.032049 2257 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:07:48.032257 kubelet[2257]: I0912 17:07:48.032246 2257 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:07:48.032380 kubelet[2257]: I0912 17:07:48.032370 2257 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:07:48.033091 kubelet[2257]: E0912 17:07:48.033055 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:07:48.033167 kubelet[2257]: E0912 17:07:48.033144 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Sep 12 17:07:48.034484 kubelet[2257]: E0912 17:07:48.033336 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186497fd20a6c6f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:07:48.011656952 +0000 UTC m=+0.878615310,LastTimestamp:2025-09-12 17:07:48.011656952 +0000 UTC m=+0.878615310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:07:48.035100 kubelet[2257]: I0912 17:07:48.034856 2257 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:07:48.035100 kubelet[2257]: I0912 17:07:48.034970 2257 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:07:48.035750 kubelet[2257]: I0912 17:07:48.035690 2257 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:07:48.039114 kubelet[2257]: E0912 17:07:48.039087 2257 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:07:48.041278 kubelet[2257]: I0912 17:07:48.041255 2257 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:07:48.050823 kubelet[2257]: I0912 17:07:48.050755 2257 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:07:48.052369 kubelet[2257]: I0912 17:07:48.052342 2257 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:07:48.052422 kubelet[2257]: I0912 17:07:48.052391 2257 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:07:48.052469 kubelet[2257]: I0912 17:07:48.052449 2257 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:07:48.052469 kubelet[2257]: I0912 17:07:48.052464 2257 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:07:48.052543 kubelet[2257]: E0912 17:07:48.052508 2257 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:07:48.057558 kubelet[2257]: E0912 17:07:48.057519 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:07:48.058754 kubelet[2257]: I0912 17:07:48.058727 2257 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:07:48.058754 kubelet[2257]: I0912 17:07:48.058741 2257 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:07:48.058754 kubelet[2257]: I0912 17:07:48.058762 2257 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:07:48.062113 kubelet[2257]: I0912 17:07:48.062092 2257 policy_none.go:49] "None policy: Start" Sep 12 17:07:48.062178 kubelet[2257]: I0912 17:07:48.062125 2257 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:07:48.062178 kubelet[2257]: I0912 17:07:48.062144 2257 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:07:48.069290 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
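[Note] Every attempt to reach the API server so far (the CSR post, the reflectors, the lease controller, the event write above) fails with "dial tcp 10.0.0.50:6443: connect: connection refused", which is expected while the static control-plane pods are still being created. A minimal reachability probe for that endpoint, with host and port taken from the errors:

    import socket

    HOST, PORT = "10.0.0.50", 6443

    def api_server_reachable(timeout: float = 2.0) -> bool:
        """TCP-level check mirroring the 'connection refused' failures above."""
        try:
            with socket.create_connection((HOST, PORT), timeout=timeout):
                return True
        except OSError as exc:   # e.g. ConnectionRefusedError, as in the log
            print(f"{HOST}:{PORT} not reachable: {exc}")
            return False

    print(api_server_reachable())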
Sep 12 17:07:48.083891 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:07:48.087461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:07:48.096484 kubelet[2257]: E0912 17:07:48.096439 2257 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:07:48.096791 kubelet[2257]: I0912 17:07:48.096754 2257 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:07:48.097447 kubelet[2257]: I0912 17:07:48.097369 2257 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:07:48.098093 kubelet[2257]: I0912 17:07:48.097762 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:07:48.099113 kubelet[2257]: E0912 17:07:48.099094 2257 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:07:48.099229 kubelet[2257]: E0912 17:07:48.099216 2257 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:07:48.164436 systemd[1]: Created slice kubepods-burstable-podd61bbb17fe84e4869a53f92662f8499d.slice - libcontainer container kubepods-burstable-podd61bbb17fe84e4869a53f92662f8499d.slice. Sep 12 17:07:48.181776 kubelet[2257]: E0912 17:07:48.181682 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:48.184588 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 17:07:48.198448 kubelet[2257]: E0912 17:07:48.198386 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:48.199250 kubelet[2257]: I0912 17:07:48.199208 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:07:48.199713 kubelet[2257]: E0912 17:07:48.199681 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:07:48.202101 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
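[Note] The three kubepods-burstable-pod*.slice cgroups correspond to the static pod manifests the kubelet just picked up from its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" line above). A quick way to list what is in that directory on such a node; exact file names depend on how the control plane was laid out:

    from pathlib import Path

    # List the static pod manifests the kubelet watches on this node.
    MANIFEST_DIR = Path("/etc/kubernetes/manifests")
    for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
        print(manifest.name)
    # On a kubeadm-style control plane this typically shows kube-apiserver.yaml,
    # kube-controller-manager.yaml and kube-scheduler.yaml (plus etcd.yaml when
    # etcd runs as a static pod).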
Sep 12 17:07:48.203886 kubelet[2257]: E0912 17:07:48.203866 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:48.234057 kubelet[2257]: E0912 17:07:48.234002 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Sep 12 17:07:48.333574 kubelet[2257]: I0912 17:07:48.333482 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d61bbb17fe84e4869a53f92662f8499d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d61bbb17fe84e4869a53f92662f8499d\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:48.333574 kubelet[2257]: I0912 17:07:48.333553 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:48.333834 kubelet[2257]: I0912 17:07:48.333681 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:48.333834 kubelet[2257]: I0912 17:07:48.333735 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:48.333834 kubelet[2257]: I0912 17:07:48.333762 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d61bbb17fe84e4869a53f92662f8499d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d61bbb17fe84e4869a53f92662f8499d\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:48.333834 kubelet[2257]: I0912 17:07:48.333805 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:48.333834 kubelet[2257]: I0912 17:07:48.333832 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:48.334003 kubelet[2257]: I0912 17:07:48.333859 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:48.334003 kubelet[2257]: I0912 17:07:48.333890 2257 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d61bbb17fe84e4869a53f92662f8499d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d61bbb17fe84e4869a53f92662f8499d\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:48.402222 kubelet[2257]: I0912 17:07:48.402162 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:07:48.402840 kubelet[2257]: E0912 17:07:48.402765 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:07:48.482503 kubelet[2257]: E0912 17:07:48.482459 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:48.483298 containerd[1511]: time="2025-09-12T17:07:48.483253789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d61bbb17fe84e4869a53f92662f8499d,Namespace:kube-system,Attempt:0,}" Sep 12 17:07:48.499598 kubelet[2257]: E0912 17:07:48.499561 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:48.500508 containerd[1511]: time="2025-09-12T17:07:48.500444336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 17:07:48.504754 kubelet[2257]: E0912 17:07:48.504724 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:48.505221 containerd[1511]: time="2025-09-12T17:07:48.505164609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 17:07:48.635853 kubelet[2257]: E0912 17:07:48.635788 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Sep 12 17:07:48.805210 kubelet[2257]: I0912 17:07:48.805046 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:07:48.805481 kubelet[2257]: E0912 17:07:48.805435 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:07:48.847495 kubelet[2257]: E0912 17:07:48.847456 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:07:49.139132 
kubelet[2257]: E0912 17:07:49.138941 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:07:49.436429 kubelet[2257]: E0912 17:07:49.436266 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" Sep 12 17:07:49.566081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958326042.mount: Deactivated successfully. Sep 12 17:07:49.572641 containerd[1511]: time="2025-09-12T17:07:49.572591292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:07:49.575468 containerd[1511]: time="2025-09-12T17:07:49.575419243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:07:49.576479 containerd[1511]: time="2025-09-12T17:07:49.576440739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:07:49.578294 containerd[1511]: time="2025-09-12T17:07:49.578245821Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:07:49.579207 containerd[1511]: time="2025-09-12T17:07:49.579166327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:07:49.579998 kubelet[2257]: E0912 17:07:49.579959 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:07:49.580315 containerd[1511]: time="2025-09-12T17:07:49.580280061Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:07:49.581010 containerd[1511]: time="2025-09-12T17:07:49.580972580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:07:49.582670 containerd[1511]: time="2025-09-12T17:07:49.582632181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:07:49.585699 containerd[1511]: time="2025-09-12T17:07:49.585660750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.085103479s" Sep 12 17:07:49.587031 containerd[1511]: time="2025-09-12T17:07:49.586997599Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.103662473s" Sep 12 17:07:49.590151 containerd[1511]: time="2025-09-12T17:07:49.590115189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.08484742s" Sep 12 17:07:49.601756 kubelet[2257]: E0912 17:07:49.601388 2257 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:07:49.607575 kubelet[2257]: I0912 17:07:49.607536 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:07:49.607845 kubelet[2257]: E0912 17:07:49.607818 2257 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:07:49.894829 containerd[1511]: time="2025-09-12T17:07:49.894732431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:07:49.895037 containerd[1511]: time="2025-09-12T17:07:49.892715504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:07:49.895144 containerd[1511]: time="2025-09-12T17:07:49.895033689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:07:49.895144 containerd[1511]: time="2025-09-12T17:07:49.895056837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:07:49.895227 containerd[1511]: time="2025-09-12T17:07:49.895166740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:07:49.895710 containerd[1511]: time="2025-09-12T17:07:49.895632716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:07:49.895809 containerd[1511]: time="2025-09-12T17:07:49.895709617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:07:49.895914 containerd[1511]: time="2025-09-12T17:07:49.895861785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:07:49.897790 containerd[1511]: time="2025-09-12T17:07:49.897566637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:07:49.897790 containerd[1511]: time="2025-09-12T17:07:49.897625644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:07:49.897790 containerd[1511]: time="2025-09-12T17:07:49.897645755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:07:49.897790 containerd[1511]: time="2025-09-12T17:07:49.897738725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:07:49.963565 systemd[1]: Started cri-containerd-c4ac558bd3bb012db974c2ff16e28820da2d2b1746b99571439d6ba4b5239f9a.scope - libcontainer container c4ac558bd3bb012db974c2ff16e28820da2d2b1746b99571439d6ba4b5239f9a. Sep 12 17:07:49.967239 systemd[1]: Started cri-containerd-72a708f5722d527c4e0366fcf0a1a33abdaccc18a041e3fad4e402f0a3a49925.scope - libcontainer container 72a708f5722d527c4e0366fcf0a1a33abdaccc18a041e3fad4e402f0a3a49925. Sep 12 17:07:49.976623 kubelet[2257]: E0912 17:07:49.976561 2257 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:07:49.981585 systemd[1]: Started cri-containerd-ec83b91fa2af614323d09e5795a2ee2a0cf9048464c519352e37b3c898a2eba2.scope - libcontainer container ec83b91fa2af614323d09e5795a2ee2a0cf9048464c519352e37b3c898a2eba2. 
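[Note] While the sandboxes start, the "Failed to ensure lease exists, will retry" errors above back off with a doubling interval: 200ms, 400ms, 800ms, then 1.6s. A minimal sketch of that doubling schedule; the cap is an assumption for illustration, since the controller's real limit is not visible in this log:

    # Doubling retry schedule matching the intervals logged above.
    def backoff(initial: float = 0.2, factor: float = 2.0, cap: float = 7.0):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    gen = backoff()
    print([round(next(gen), 1) for _ in range(5)])   # [0.2, 0.4, 0.8, 1.6, 3.2]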
Sep 12 17:07:50.027906 containerd[1511]: time="2025-09-12T17:07:50.027864682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4ac558bd3bb012db974c2ff16e28820da2d2b1746b99571439d6ba4b5239f9a\"" Sep 12 17:07:50.029976 containerd[1511]: time="2025-09-12T17:07:50.029946250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a708f5722d527c4e0366fcf0a1a33abdaccc18a041e3fad4e402f0a3a49925\"" Sep 12 17:07:50.032677 kubelet[2257]: E0912 17:07:50.031992 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:50.033484 kubelet[2257]: E0912 17:07:50.033459 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:50.039941 containerd[1511]: time="2025-09-12T17:07:50.039875317Z" level=info msg="CreateContainer within sandbox \"c4ac558bd3bb012db974c2ff16e28820da2d2b1746b99571439d6ba4b5239f9a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:07:50.041666 containerd[1511]: time="2025-09-12T17:07:50.041638986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d61bbb17fe84e4869a53f92662f8499d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec83b91fa2af614323d09e5795a2ee2a0cf9048464c519352e37b3c898a2eba2\"" Sep 12 17:07:50.042281 kubelet[2257]: E0912 17:07:50.042250 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:50.042455 containerd[1511]: time="2025-09-12T17:07:50.042389334Z" level=info msg="CreateContainer within sandbox \"72a708f5722d527c4e0366fcf0a1a33abdaccc18a041e3fad4e402f0a3a49925\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:07:50.047122 containerd[1511]: time="2025-09-12T17:07:50.047073214Z" level=info msg="CreateContainer within sandbox \"ec83b91fa2af614323d09e5795a2ee2a0cf9048464c519352e37b3c898a2eba2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:07:50.058927 containerd[1511]: time="2025-09-12T17:07:50.058892841Z" level=info msg="CreateContainer within sandbox \"c4ac558bd3bb012db974c2ff16e28820da2d2b1746b99571439d6ba4b5239f9a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dbf785939d5feb872b1c1fbc293d14107824382ad810af059f4884b0f5b018c3\"" Sep 12 17:07:50.059392 containerd[1511]: time="2025-09-12T17:07:50.059370891Z" level=info msg="StartContainer for \"dbf785939d5feb872b1c1fbc293d14107824382ad810af059f4884b0f5b018c3\"" Sep 12 17:07:50.068625 containerd[1511]: time="2025-09-12T17:07:50.068550351Z" level=info msg="CreateContainer within sandbox \"72a708f5722d527c4e0366fcf0a1a33abdaccc18a041e3fad4e402f0a3a49925\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"017807a257b2e7dccc3d85e352ebbeaa5b442ea0edb09e235f078498d5618ed1\"" Sep 12 17:07:50.069443 containerd[1511]: time="2025-09-12T17:07:50.068946452Z" level=info msg="StartContainer for \"017807a257b2e7dccc3d85e352ebbeaa5b442ea0edb09e235f078498d5618ed1\"" Sep 12 
17:07:50.071613 containerd[1511]: time="2025-09-12T17:07:50.071572623Z" level=info msg="CreateContainer within sandbox \"ec83b91fa2af614323d09e5795a2ee2a0cf9048464c519352e37b3c898a2eba2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfd949e93adfdcd6479b0c62a23b97b386a232536d6cbdd7c1390b5aedf85dc1\"" Sep 12 17:07:50.071931 containerd[1511]: time="2025-09-12T17:07:50.071878592Z" level=info msg="StartContainer for \"bfd949e93adfdcd6479b0c62a23b97b386a232536d6cbdd7c1390b5aedf85dc1\"" Sep 12 17:07:50.134571 systemd[1]: Started cri-containerd-017807a257b2e7dccc3d85e352ebbeaa5b442ea0edb09e235f078498d5618ed1.scope - libcontainer container 017807a257b2e7dccc3d85e352ebbeaa5b442ea0edb09e235f078498d5618ed1. Sep 12 17:07:50.135870 systemd[1]: Started cri-containerd-dbf785939d5feb872b1c1fbc293d14107824382ad810af059f4884b0f5b018c3.scope - libcontainer container dbf785939d5feb872b1c1fbc293d14107824382ad810af059f4884b0f5b018c3. Sep 12 17:07:50.143578 systemd[1]: Started cri-containerd-bfd949e93adfdcd6479b0c62a23b97b386a232536d6cbdd7c1390b5aedf85dc1.scope - libcontainer container bfd949e93adfdcd6479b0c62a23b97b386a232536d6cbdd7c1390b5aedf85dc1. Sep 12 17:07:50.199580 containerd[1511]: time="2025-09-12T17:07:50.196452884Z" level=info msg="StartContainer for \"dbf785939d5feb872b1c1fbc293d14107824382ad810af059f4884b0f5b018c3\" returns successfully" Sep 12 17:07:50.213108 containerd[1511]: time="2025-09-12T17:07:50.212883873Z" level=info msg="StartContainer for \"bfd949e93adfdcd6479b0c62a23b97b386a232536d6cbdd7c1390b5aedf85dc1\" returns successfully" Sep 12 17:07:50.252811 containerd[1511]: time="2025-09-12T17:07:50.252755749Z" level=info msg="StartContainer for \"017807a257b2e7dccc3d85e352ebbeaa5b442ea0edb09e235f078498d5618ed1\" returns successfully" Sep 12 17:07:51.068747 kubelet[2257]: E0912 17:07:51.068703 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:51.069497 kubelet[2257]: E0912 17:07:51.068881 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:51.071147 kubelet[2257]: E0912 17:07:51.070767 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:51.071147 kubelet[2257]: E0912 17:07:51.070948 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:51.072511 kubelet[2257]: E0912 17:07:51.072491 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:51.072608 kubelet[2257]: E0912 17:07:51.072590 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:51.211496 kubelet[2257]: I0912 17:07:51.209947 2257 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:07:52.122444 kubelet[2257]: E0912 17:07:52.076069 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:52.122444 kubelet[2257]: E0912 
17:07:52.076303 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:52.122444 kubelet[2257]: E0912 17:07:52.076749 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:52.122444 kubelet[2257]: E0912 17:07:52.076950 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:52.122444 kubelet[2257]: E0912 17:07:52.078646 2257 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:07:52.122444 kubelet[2257]: E0912 17:07:52.078787 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:52.293473 kubelet[2257]: E0912 17:07:52.293419 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:07:52.407941 kubelet[2257]: I0912 17:07:52.407726 2257 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:07:52.407941 kubelet[2257]: E0912 17:07:52.407823 2257 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:07:52.434006 kubelet[2257]: I0912 17:07:52.433943 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:52.439421 kubelet[2257]: E0912 17:07:52.439363 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:52.439421 kubelet[2257]: I0912 17:07:52.439389 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:52.441296 kubelet[2257]: E0912 17:07:52.441239 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:52.441296 kubelet[2257]: I0912 17:07:52.441285 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:52.442677 kubelet[2257]: E0912 17:07:52.442647 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:53.004949 kubelet[2257]: I0912 17:07:53.004881 2257 apiserver.go:52] "Watching apiserver" Sep 12 17:07:53.032957 kubelet[2257]: I0912 17:07:53.032913 2257 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:07:53.076874 kubelet[2257]: I0912 17:07:53.075764 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:53.076874 kubelet[2257]: I0912 17:07:53.075937 2257 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:53.076874 kubelet[2257]: I0912 17:07:53.076138 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:53.078906 kubelet[2257]: E0912 17:07:53.078876 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:53.079108 kubelet[2257]: E0912 17:07:53.079083 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:53.079269 kubelet[2257]: E0912 17:07:53.079243 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:53.079391 kubelet[2257]: E0912 17:07:53.079373 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:53.080584 kubelet[2257]: E0912 17:07:53.080557 2257 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:53.080744 kubelet[2257]: E0912 17:07:53.080713 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:54.575386 systemd[1]: Reload requested from client PID 2545 ('systemctl') (unit session-7.scope)... Sep 12 17:07:54.575445 systemd[1]: Reloading... Sep 12 17:07:54.664490 zram_generator::config[2589]: No configuration found. Sep 12 17:07:54.937562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:07:55.033110 kubelet[2257]: I0912 17:07:55.033081 2257 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:55.055958 systemd[1]: Reloading finished in 480 ms. Sep 12 17:07:55.081211 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:55.106841 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:07:55.107197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:55.107260 systemd[1]: kubelet.service: Consumed 1.254s CPU time, 133.6M memory peak. Sep 12 17:07:55.122608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:07:55.303872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:07:55.308770 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:07:55.343159 kubelet[2633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:07:55.343159 kubelet[2633]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:07:55.343159 kubelet[2633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:07:55.343617 kubelet[2633]: I0912 17:07:55.343185 2633 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:07:55.351493 kubelet[2633]: I0912 17:07:55.351452 2633 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:07:55.351493 kubelet[2633]: I0912 17:07:55.351479 2633 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:07:55.351651 kubelet[2633]: I0912 17:07:55.351637 2633 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:07:55.352717 kubelet[2633]: I0912 17:07:55.352695 2633 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:07:55.354641 kubelet[2633]: I0912 17:07:55.354604 2633 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:07:55.357871 kubelet[2633]: E0912 17:07:55.357836 2633 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:07:55.357871 kubelet[2633]: I0912 17:07:55.357867 2633 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:07:55.365802 kubelet[2633]: I0912 17:07:55.364872 2633 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:07:55.365802 kubelet[2633]: I0912 17:07:55.365156 2633 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:07:55.365802 kubelet[2633]: I0912 17:07:55.365189 2633 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:07:55.365802 kubelet[2633]: I0912 17:07:55.365489 2633 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:07:55.366051 kubelet[2633]: I0912 17:07:55.365499 2633 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:07:55.366051 kubelet[2633]: I0912 17:07:55.365566 2633 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:07:55.366051 kubelet[2633]: I0912 17:07:55.365751 2633 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:07:55.366051 kubelet[2633]: I0912 17:07:55.365777 2633 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:07:55.366051 kubelet[2633]: I0912 17:07:55.365842 2633 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:07:55.366051 kubelet[2633]: I0912 17:07:55.365882 2633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:07:55.367245 kubelet[2633]: I0912 17:07:55.367213 2633 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 17:07:55.367675 kubelet[2633]: I0912 17:07:55.367631 2633 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:07:55.370524 kubelet[2633]: I0912 17:07:55.370492 2633 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:07:55.370644 kubelet[2633]: I0912 17:07:55.370533 2633 server.go:1289] "Started kubelet" Sep 12 17:07:55.371985 kubelet[2633]: I0912 17:07:55.371955 2633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:07:55.378179 kubelet[2633]: E0912 17:07:55.378078 
2633 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:07:55.378557 kubelet[2633]: I0912 17:07:55.378497 2633 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:07:55.379887 kubelet[2633]: I0912 17:07:55.379160 2633 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:07:55.379887 kubelet[2633]: I0912 17:07:55.379698 2633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:07:55.379983 kubelet[2633]: I0912 17:07:55.379965 2633 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:07:55.380101 kubelet[2633]: I0912 17:07:55.380085 2633 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:07:55.380153 kubelet[2633]: I0912 17:07:55.380147 2633 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:07:55.380231 kubelet[2633]: I0912 17:07:55.380219 2633 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:07:55.380352 kubelet[2633]: I0912 17:07:55.380331 2633 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:07:55.384158 kubelet[2633]: I0912 17:07:55.383841 2633 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:07:55.384158 kubelet[2633]: I0912 17:07:55.383932 2633 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:07:55.385914 kubelet[2633]: I0912 17:07:55.385890 2633 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:07:55.387945 kubelet[2633]: I0912 17:07:55.387914 2633 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:07:55.389213 kubelet[2633]: I0912 17:07:55.389192 2633 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:07:55.389213 kubelet[2633]: I0912 17:07:55.389214 2633 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:07:55.389273 kubelet[2633]: I0912 17:07:55.389234 2633 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
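The container_manager_linux nodeConfig entry above lists the kubelet's hard eviction thresholds among many other fields. As a reading aid, a minimal Python sketch that restates just that array (the JSON is copied from the entry, every other nodeConfig field is trimmed) as human-readable rules:

import json

# Trimmed copy of the HardEvictionThresholds array from the nodeConfig entry
# logged by container_manager_linux.go above; all other fields are omitted.
node_config = json.loads("""
{"HardEvictionThresholds":[
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}
]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"evict when {t['Signal']} {t['Operator']} {limit}")

Run against these values it prints, for example, "evict when memory.available LessThan 100Mi" and "evict when nodefs.available LessThan 10%".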
Sep 12 17:07:55.389273 kubelet[2633]: I0912 17:07:55.389241 2633 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:07:55.389331 kubelet[2633]: E0912 17:07:55.389276 2633 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:07:55.424420 kubelet[2633]: I0912 17:07:55.424374 2633 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:07:55.424420 kubelet[2633]: I0912 17:07:55.424416 2633 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:07:55.424582 kubelet[2633]: I0912 17:07:55.424439 2633 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:07:55.424582 kubelet[2633]: I0912 17:07:55.424577 2633 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:07:55.424625 kubelet[2633]: I0912 17:07:55.424588 2633 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:07:55.424625 kubelet[2633]: I0912 17:07:55.424603 2633 policy_none.go:49] "None policy: Start" Sep 12 17:07:55.424625 kubelet[2633]: I0912 17:07:55.424613 2633 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:07:55.424625 kubelet[2633]: I0912 17:07:55.424623 2633 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:07:55.424719 kubelet[2633]: I0912 17:07:55.424706 2633 state_mem.go:75] "Updated machine memory state" Sep 12 17:07:55.428547 kubelet[2633]: E0912 17:07:55.428497 2633 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:07:55.428731 kubelet[2633]: I0912 17:07:55.428716 2633 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:07:55.428787 kubelet[2633]: I0912 17:07:55.428733 2633 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:07:55.429214 kubelet[2633]: I0912 17:07:55.428914 2633 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:07:55.429613 kubelet[2633]: E0912 17:07:55.429598 2633 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:07:55.490747 kubelet[2633]: I0912 17:07:55.490695 2633 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:55.490897 kubelet[2633]: I0912 17:07:55.490713 2633 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:55.491003 kubelet[2633]: I0912 17:07:55.490974 2633 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:55.506991 kubelet[2633]: E0912 17:07:55.506948 2633 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:55.536016 kubelet[2633]: I0912 17:07:55.535963 2633 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:07:55.558954 kubelet[2633]: I0912 17:07:55.558083 2633 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:07:55.558954 kubelet[2633]: I0912 17:07:55.558204 2633 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:07:55.609485 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:07:55.609941 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:07:55.681035 kubelet[2633]: I0912 17:07:55.680968 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:07:55.681207 kubelet[2633]: I0912 17:07:55.681041 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d61bbb17fe84e4869a53f92662f8499d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d61bbb17fe84e4869a53f92662f8499d\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:55.681207 kubelet[2633]: I0912 17:07:55.681081 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d61bbb17fe84e4869a53f92662f8499d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d61bbb17fe84e4869a53f92662f8499d\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:55.681207 kubelet[2633]: I0912 17:07:55.681124 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d61bbb17fe84e4869a53f92662f8499d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d61bbb17fe84e4869a53f92662f8499d\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:55.681207 kubelet[2633]: I0912 17:07:55.681148 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:55.681207 kubelet[2633]: I0912 17:07:55.681164 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:55.681326 kubelet[2633]: I0912 17:07:55.681179 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:55.681326 kubelet[2633]: I0912 17:07:55.681200 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:55.681326 kubelet[2633]: I0912 17:07:55.681215 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:55.806793 kubelet[2633]: E0912 17:07:55.806560 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:55.806793 kubelet[2633]: E0912 17:07:55.806635 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:55.807950 kubelet[2633]: E0912 17:07:55.807882 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:56.090289 sudo[2673]: pam_unix(sudo:session): session closed for user root Sep 12 17:07:56.368436 kubelet[2633]: I0912 17:07:56.368278 2633 apiserver.go:52] "Watching apiserver" Sep 12 17:07:56.380452 kubelet[2633]: I0912 17:07:56.380371 2633 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:07:56.406245 kubelet[2633]: I0912 17:07:56.406212 2633 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:56.406685 kubelet[2633]: I0912 17:07:56.406650 2633 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:56.407780 kubelet[2633]: E0912 17:07:56.407124 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:56.411685 kubelet[2633]: E0912 17:07:56.411412 2633 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:07:56.411685 kubelet[2633]: E0912 17:07:56.411599 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 12 17:07:56.413900 kubelet[2633]: E0912 17:07:56.413860 2633 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:07:56.414029 kubelet[2633]: E0912 17:07:56.413982 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:56.426543 kubelet[2633]: I0912 17:07:56.426487 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4264682419999999 podStartE2EDuration="1.426468242s" podCreationTimestamp="2025-09-12 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:56.426281655 +0000 UTC m=+1.113328584" watchObservedRunningTime="2025-09-12 17:07:56.426468242 +0000 UTC m=+1.113515171" Sep 12 17:07:56.433631 kubelet[2633]: I0912 17:07:56.433576 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.433562354 podStartE2EDuration="1.433562354s" podCreationTimestamp="2025-09-12 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:56.433359409 +0000 UTC m=+1.120406338" watchObservedRunningTime="2025-09-12 17:07:56.433562354 +0000 UTC m=+1.120609283" Sep 12 17:07:57.407452 kubelet[2633]: E0912 17:07:57.407388 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:57.407991 kubelet[2633]: E0912 17:07:57.407503 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:57.407991 kubelet[2633]: E0912 17:07:57.407502 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:58.052007 sudo[1699]: pam_unix(sudo:session): session closed for user root Sep 12 17:07:58.053467 sshd[1698]: Connection closed by 10.0.0.1 port 47222 Sep 12 17:07:58.054597 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:58.059454 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:47222.service: Deactivated successfully. Sep 12 17:07:58.061696 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:07:58.061925 systemd[1]: session-7.scope: Consumed 4.728s CPU time, 251.3M memory peak. Sep 12 17:07:58.063150 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:07:58.064247 systemd-logind[1497]: Removed session 7. 
Sep 12 17:07:58.409878 kubelet[2633]: E0912 17:07:58.409744 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:59.411710 kubelet[2633]: E0912 17:07:59.411655 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:07:59.579676 kubelet[2633]: E0912 17:07:59.579602 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:00.069972 kubelet[2633]: I0912 17:08:00.069929 2633 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:08:00.070323 containerd[1511]: time="2025-09-12T17:08:00.070278206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:08:00.070732 kubelet[2633]: I0912 17:08:00.070588 2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:08:01.096864 kubelet[2633]: I0912 17:08:01.096746 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.096728295 podStartE2EDuration="6.096728295s" podCreationTimestamp="2025-09-12 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:56.440940465 +0000 UTC m=+1.127987394" watchObservedRunningTime="2025-09-12 17:08:01.096728295 +0000 UTC m=+5.783775224" Sep 12 17:08:01.116596 kubelet[2633]: I0912 17:08:01.115751 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1-kube-proxy\") pod \"kube-proxy-rz7qg\" (UID: \"9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1\") " pod="kube-system/kube-proxy-rz7qg" Sep 12 17:08:01.116596 kubelet[2633]: I0912 17:08:01.115788 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1-xtables-lock\") pod \"kube-proxy-rz7qg\" (UID: \"9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1\") " pod="kube-system/kube-proxy-rz7qg" Sep 12 17:08:01.116596 kubelet[2633]: I0912 17:08:01.115806 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1-lib-modules\") pod \"kube-proxy-rz7qg\" (UID: \"9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1\") " pod="kube-system/kube-proxy-rz7qg" Sep 12 17:08:01.116596 kubelet[2633]: I0912 17:08:01.115822 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94pw7\" (UniqueName: \"kubernetes.io/projected/9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1-kube-api-access-94pw7\") pod \"kube-proxy-rz7qg\" (UID: \"9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1\") " pod="kube-system/kube-proxy-rz7qg" Sep 12 17:08:01.115887 systemd[1]: Created slice kubepods-besteffort-pod9fb6a9f7_9a5e_446b_a24f_0adfa04cd6c1.slice - libcontainer container kubepods-besteffort-pod9fb6a9f7_9a5e_446b_a24f_0adfa04cd6c1.slice. 
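The kubepods-besteffort-pod... slice created just above follows the systemd cgroup driver's naming scheme (the nodeConfig entry earlier shows CgroupDriver "systemd"): a per-QoS-class slice plus "pod" and the pod UID with dashes escaped to underscores. A small sketch of that mapping; the guaranteed-pod branch reflects general kubelet behaviour rather than anything shown in this log:

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # Dashes in a systemd slice name encode nesting, so the kubelet escapes
    # the UID's dashes to underscores.
    escaped_uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":  # guaranteed pods sit directly under kubepods.slice
        return f"kubepods-pod{escaped_uid}.slice"
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

print(pod_slice_name("besteffort", "9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1"))
# kubepods-besteffort-pod9fb6a9f7_9a5e_446b_a24f_0adfa04cd6c1.slice  (kube-proxy-rz7qg, above)
print(pod_slice_name("burstable", "b0554f14-d913-431f-8808-00c2d67c6fd5"))
# kubepods-burstable-podb0554f14_d913_431f_8808_00c2d67c6fd5.slice  (cilium-gppwc, created in the next entries)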
Sep 12 17:08:01.135039 systemd[1]: Created slice kubepods-burstable-podb0554f14_d913_431f_8808_00c2d67c6fd5.slice - libcontainer container kubepods-burstable-podb0554f14_d913_431f_8808_00c2d67c6fd5.slice. Sep 12 17:08:01.216919 kubelet[2633]: I0912 17:08:01.216857 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0554f14-d913-431f-8808-00c2d67c6fd5-clustermesh-secrets\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.216919 kubelet[2633]: I0912 17:08:01.216909 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-config-path\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.216919 kubelet[2633]: I0912 17:08:01.216926 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-net\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217305 kubelet[2633]: I0912 17:08:01.216946 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-kernel\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217305 kubelet[2633]: I0912 17:08:01.216977 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-bpf-maps\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217305 kubelet[2633]: I0912 17:08:01.216995 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-etc-cni-netd\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217305 kubelet[2633]: I0912 17:08:01.217017 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-hubble-tls\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217305 kubelet[2633]: I0912 17:08:01.217050 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfprz\" (UniqueName: \"kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-kube-api-access-pfprz\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217305 kubelet[2633]: I0912 17:08:01.217083 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-hostproc\") pod \"cilium-gppwc\" (UID: 
\"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217567 kubelet[2633]: I0912 17:08:01.217105 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-run\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217567 kubelet[2633]: I0912 17:08:01.217120 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-cgroup\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217567 kubelet[2633]: I0912 17:08:01.217139 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cni-path\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217567 kubelet[2633]: I0912 17:08:01.217156 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-lib-modules\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.217567 kubelet[2633]: I0912 17:08:01.217170 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-xtables-lock\") pod \"cilium-gppwc\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " pod="kube-system/cilium-gppwc" Sep 12 17:08:01.243439 systemd[1]: Created slice kubepods-besteffort-pod6297b608_f375_4e4f_abda_ac622d8926c9.slice - libcontainer container kubepods-besteffort-pod6297b608_f375_4e4f_abda_ac622d8926c9.slice. Sep 12 17:08:01.317639 kubelet[2633]: I0912 17:08:01.317569 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6297b608-f375-4e4f-abda-ac622d8926c9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jsls4\" (UID: \"6297b608-f375-4e4f-abda-ac622d8926c9\") " pod="kube-system/cilium-operator-6c4d7847fc-jsls4" Sep 12 17:08:01.317639 kubelet[2633]: I0912 17:08:01.317629 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgk4m\" (UniqueName: \"kubernetes.io/projected/6297b608-f375-4e4f-abda-ac622d8926c9-kube-api-access-sgk4m\") pod \"cilium-operator-6c4d7847fc-jsls4\" (UID: \"6297b608-f375-4e4f-abda-ac622d8926c9\") " pod="kube-system/cilium-operator-6c4d7847fc-jsls4" Sep 12 17:08:01.334556 update_engine[1499]: I20250912 17:08:01.334481 1499 update_attempter.cc:509] Updating boot flags... 
Sep 12 17:08:01.367455 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2725) Sep 12 17:08:01.399008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2723) Sep 12 17:08:01.433277 kubelet[2633]: E0912 17:08:01.431967 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:01.436470 containerd[1511]: time="2025-09-12T17:08:01.435139455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rz7qg,Uid:9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:01.446296 kubelet[2633]: E0912 17:08:01.445906 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:01.449832 containerd[1511]: time="2025-09-12T17:08:01.449381603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gppwc,Uid:b0554f14-d913-431f-8808-00c2d67c6fd5,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:01.476448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2723) Sep 12 17:08:01.485155 containerd[1511]: time="2025-09-12T17:08:01.485057853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:01.486325 containerd[1511]: time="2025-09-12T17:08:01.486276926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:01.486377 containerd[1511]: time="2025-09-12T17:08:01.486330323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:01.486545 containerd[1511]: time="2025-09-12T17:08:01.486506379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:01.519066 containerd[1511]: time="2025-09-12T17:08:01.517514948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:01.519066 containerd[1511]: time="2025-09-12T17:08:01.517575010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:01.519066 containerd[1511]: time="2025-09-12T17:08:01.517587247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:01.519066 containerd[1511]: time="2025-09-12T17:08:01.517659586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:01.533336 systemd[1]: Started cri-containerd-394a2b366d0c61fb805c59a67dca339ee0c7c88f92c45500dc3132958dab6761.scope - libcontainer container 394a2b366d0c61fb805c59a67dca339ee0c7c88f92c45500dc3132958dab6761. Sep 12 17:08:01.536354 systemd[1]: Started cri-containerd-80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88.scope - libcontainer container 80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88. 
Sep 12 17:08:01.547355 kubelet[2633]: E0912 17:08:01.547323 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:01.548051 containerd[1511]: time="2025-09-12T17:08:01.548016957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jsls4,Uid:6297b608-f375-4e4f-abda-ac622d8926c9,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:01.561134 containerd[1511]: time="2025-09-12T17:08:01.561092067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rz7qg,Uid:9fb6a9f7-9a5e-446b-a24f-0adfa04cd6c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"394a2b366d0c61fb805c59a67dca339ee0c7c88f92c45500dc3132958dab6761\"" Sep 12 17:08:01.561789 kubelet[2633]: E0912 17:08:01.561727 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:01.567502 containerd[1511]: time="2025-09-12T17:08:01.567445553Z" level=info msg="CreateContainer within sandbox \"394a2b366d0c61fb805c59a67dca339ee0c7c88f92c45500dc3132958dab6761\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:08:01.569890 containerd[1511]: time="2025-09-12T17:08:01.569786104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gppwc,Uid:b0554f14-d913-431f-8808-00c2d67c6fd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\"" Sep 12 17:08:01.571217 kubelet[2633]: E0912 17:08:01.571176 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:01.573133 containerd[1511]: time="2025-09-12T17:08:01.573078020Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:08:01.580600 containerd[1511]: time="2025-09-12T17:08:01.580221568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:01.580600 containerd[1511]: time="2025-09-12T17:08:01.580287102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:01.580600 containerd[1511]: time="2025-09-12T17:08:01.580300561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:01.581224 containerd[1511]: time="2025-09-12T17:08:01.581108271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:01.593448 containerd[1511]: time="2025-09-12T17:08:01.593333458Z" level=info msg="CreateContainer within sandbox \"394a2b366d0c61fb805c59a67dca339ee0c7c88f92c45500dc3132958dab6761\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea7472a976e172fc3a3fe529cad4c68af9cf98f1e7bc4a4809df488d2fd3e45e\"" Sep 12 17:08:01.595217 containerd[1511]: time="2025-09-12T17:08:01.595165144Z" level=info msg="StartContainer for \"ea7472a976e172fc3a3fe529cad4c68af9cf98f1e7bc4a4809df488d2fd3e45e\"" Sep 12 17:08:01.599588 systemd[1]: Started cri-containerd-ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232.scope - libcontainer container ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232. Sep 12 17:08:01.633554 systemd[1]: Started cri-containerd-ea7472a976e172fc3a3fe529cad4c68af9cf98f1e7bc4a4809df488d2fd3e45e.scope - libcontainer container ea7472a976e172fc3a3fe529cad4c68af9cf98f1e7bc4a4809df488d2fd3e45e. Sep 12 17:08:01.645564 containerd[1511]: time="2025-09-12T17:08:01.645468265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jsls4,Uid:6297b608-f375-4e4f-abda-ac622d8926c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232\"" Sep 12 17:08:01.646587 kubelet[2633]: E0912 17:08:01.646550 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:01.672559 containerd[1511]: time="2025-09-12T17:08:01.671853441Z" level=info msg="StartContainer for \"ea7472a976e172fc3a3fe529cad4c68af9cf98f1e7bc4a4809df488d2fd3e45e\" returns successfully" Sep 12 17:08:02.370155 kubelet[2633]: E0912 17:08:02.370105 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:02.423384 kubelet[2633]: E0912 17:08:02.423339 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:02.423606 kubelet[2633]: E0912 17:08:02.423394 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:02.444821 kubelet[2633]: I0912 17:08:02.444721 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rz7qg" podStartSLOduration=1.444682403 podStartE2EDuration="1.444682403s" podCreationTimestamp="2025-09-12 17:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:02.44412987 +0000 UTC m=+7.131176799" watchObservedRunningTime="2025-09-12 17:08:02.444682403 +0000 UTC m=+7.131729333" Sep 12 17:08:06.985594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697696323.mount: Deactivated successfully. 
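The cilium image in the PullImage request a few entries back is referenced by both a tag and a digest; when containerd later reports the pull result it shows repo tag "" and only the repo digest, since the digest is what pins the content. A naive Python split of such a reference (it assumes the registry host carries no port, which holds for quay.io here):

def split_reference(ref: str) -> dict:
    name, _, digest = ref.partition("@")
    repo, _, tag = name.partition(":")  # naive: breaks on registry hosts with a :port
    return {"repository": repo, "tag": tag or None, "digest": digest or None}

print(split_reference(
    "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
))
# {'repository': 'quay.io/cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce2b0a...'}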
Sep 12 17:08:09.181099 kubelet[2633]: E0912 17:08:09.181043 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:09.434995 kubelet[2633]: E0912 17:08:09.434849 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:09.585075 kubelet[2633]: E0912 17:08:09.585028 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:10.436643 kubelet[2633]: E0912 17:08:10.436590 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:13.642386 containerd[1511]: time="2025-09-12T17:08:13.642314295Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:13.643204 containerd[1511]: time="2025-09-12T17:08:13.643152380Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:08:13.644309 containerd[1511]: time="2025-09-12T17:08:13.644266081Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:13.646144 containerd[1511]: time="2025-09-12T17:08:13.646089303Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.072948648s" Sep 12 17:08:13.646215 containerd[1511]: time="2025-09-12T17:08:13.646151190Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:08:13.650248 containerd[1511]: time="2025-09-12T17:08:13.650219583Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:08:13.657587 containerd[1511]: time="2025-09-12T17:08:13.657533695Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:08:13.732990 containerd[1511]: time="2025-09-12T17:08:13.732923869Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\"" Sep 12 17:08:13.733639 containerd[1511]: time="2025-09-12T17:08:13.733598457Z" level=info msg="StartContainer for \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\"" Sep 12 17:08:13.766714 systemd[1]: Started 
cri-containerd-b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560.scope - libcontainer container b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560. Sep 12 17:08:13.800908 containerd[1511]: time="2025-09-12T17:08:13.800851926Z" level=info msg="StartContainer for \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\" returns successfully" Sep 12 17:08:13.813159 systemd[1]: cri-containerd-b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560.scope: Deactivated successfully. Sep 12 17:08:14.373169 containerd[1511]: time="2025-09-12T17:08:14.373055690Z" level=info msg="shim disconnected" id=b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560 namespace=k8s.io Sep 12 17:08:14.373169 containerd[1511]: time="2025-09-12T17:08:14.373142588Z" level=warning msg="cleaning up after shim disconnected" id=b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560 namespace=k8s.io Sep 12 17:08:14.373169 containerd[1511]: time="2025-09-12T17:08:14.373168421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:08:14.544598 kubelet[2633]: E0912 17:08:14.544543 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:14.549932 containerd[1511]: time="2025-09-12T17:08:14.549889915Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:08:14.568757 containerd[1511]: time="2025-09-12T17:08:14.568700360Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\"" Sep 12 17:08:14.570886 containerd[1511]: time="2025-09-12T17:08:14.570002310Z" level=info msg="StartContainer for \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\"" Sep 12 17:08:14.601533 systemd[1]: Started cri-containerd-95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab.scope - libcontainer container 95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab. Sep 12 17:08:14.629772 containerd[1511]: time="2025-09-12T17:08:14.629630640Z" level=info msg="StartContainer for \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\" returns successfully" Sep 12 17:08:14.642806 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:08:14.643151 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:08:14.643868 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:08:14.649936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:08:14.650181 systemd[1]: cri-containerd-95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab.scope: Deactivated successfully. 
Sep 12 17:08:14.668036 containerd[1511]: time="2025-09-12T17:08:14.667969953Z" level=info msg="shim disconnected" id=95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab namespace=k8s.io Sep 12 17:08:14.668036 containerd[1511]: time="2025-09-12T17:08:14.668017059Z" level=warning msg="cleaning up after shim disconnected" id=95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab namespace=k8s.io Sep 12 17:08:14.668036 containerd[1511]: time="2025-09-12T17:08:14.668025387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:08:14.675788 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:08:14.706654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560-rootfs.mount: Deactivated successfully. Sep 12 17:08:15.379333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822160161.mount: Deactivated successfully. Sep 12 17:08:15.552144 kubelet[2633]: E0912 17:08:15.551835 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:15.557759 containerd[1511]: time="2025-09-12T17:08:15.557707807Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:08:15.573614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100866795.mount: Deactivated successfully. Sep 12 17:08:15.587030 containerd[1511]: time="2025-09-12T17:08:15.586979508Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\"" Sep 12 17:08:15.587602 containerd[1511]: time="2025-09-12T17:08:15.587567149Z" level=info msg="StartContainer for \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\"" Sep 12 17:08:15.624574 systemd[1]: Started cri-containerd-186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932.scope - libcontainer container 186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932. Sep 12 17:08:15.665482 systemd[1]: cri-containerd-186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932.scope: Deactivated successfully. Sep 12 17:08:15.699905 containerd[1511]: time="2025-09-12T17:08:15.699851106Z" level=info msg="StartContainer for \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\" returns successfully" Sep 12 17:08:15.727201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932-rootfs.mount: Deactivated successfully. 
Sep 12 17:08:15.796291 containerd[1511]: time="2025-09-12T17:08:15.796213319Z" level=info msg="shim disconnected" id=186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932 namespace=k8s.io Sep 12 17:08:15.796291 containerd[1511]: time="2025-09-12T17:08:15.796271798Z" level=warning msg="cleaning up after shim disconnected" id=186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932 namespace=k8s.io Sep 12 17:08:15.796291 containerd[1511]: time="2025-09-12T17:08:15.796280537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:08:15.908657 containerd[1511]: time="2025-09-12T17:08:15.908590937Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:15.909337 containerd[1511]: time="2025-09-12T17:08:15.909282541Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 17:08:15.910463 containerd[1511]: time="2025-09-12T17:08:15.910432262Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:08:15.911916 containerd[1511]: time="2025-09-12T17:08:15.911872626Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.261619133s" Sep 12 17:08:15.911916 containerd[1511]: time="2025-09-12T17:08:15.911905893Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:08:15.916636 containerd[1511]: time="2025-09-12T17:08:15.916537030Z" level=info msg="CreateContainer within sandbox \"ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:08:15.929416 containerd[1511]: time="2025-09-12T17:08:15.929351432Z" level=info msg="CreateContainer within sandbox \"ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\"" Sep 12 17:08:15.929932 containerd[1511]: time="2025-09-12T17:08:15.929900204Z" level=info msg="StartContainer for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\"" Sep 12 17:08:15.956534 systemd[1]: Started cri-containerd-65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef.scope - libcontainer container 65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef. 
Sep 12 17:08:15.986557 containerd[1511]: time="2025-09-12T17:08:15.986496205Z" level=info msg="StartContainer for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" returns successfully" Sep 12 17:08:16.552553 kubelet[2633]: E0912 17:08:16.552498 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:16.554962 kubelet[2633]: E0912 17:08:16.554930 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:16.563276 containerd[1511]: time="2025-09-12T17:08:16.563219632Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:08:16.585063 kubelet[2633]: I0912 17:08:16.584992 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jsls4" podStartSLOduration=1.319566245 podStartE2EDuration="15.584945292s" podCreationTimestamp="2025-09-12 17:08:01 +0000 UTC" firstStartedPulling="2025-09-12 17:08:01.647279936 +0000 UTC m=+6.334326865" lastFinishedPulling="2025-09-12 17:08:15.912658983 +0000 UTC m=+20.599705912" observedRunningTime="2025-09-12 17:08:16.584312203 +0000 UTC m=+21.271359132" watchObservedRunningTime="2025-09-12 17:08:16.584945292 +0000 UTC m=+21.271992221" Sep 12 17:08:16.712172 containerd[1511]: time="2025-09-12T17:08:16.712118071Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\"" Sep 12 17:08:16.722985 containerd[1511]: time="2025-09-12T17:08:16.720602610Z" level=info msg="StartContainer for \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\"" Sep 12 17:08:16.780050 systemd[1]: run-containerd-runc-k8s.io-fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0-runc.0RVFGA.mount: Deactivated successfully. Sep 12 17:08:16.791576 systemd[1]: Started cri-containerd-fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0.scope - libcontainer container fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0. Sep 12 17:08:16.833666 systemd[1]: cri-containerd-fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0.scope: Deactivated successfully. Sep 12 17:08:16.833908 containerd[1511]: time="2025-09-12T17:08:16.833697502Z" level=info msg="StartContainer for \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\" returns successfully" Sep 12 17:08:16.856620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0-rootfs.mount: Deactivated successfully. 
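The pod_startup_latency_tracker entry above for cilium-operator-6c4d7847fc-jsls4 logs an end-to-end duration, an image-pull window, and an SLO duration, and the three are consistent: the SLO figure is the E2E duration minus the pull window. Checking that directly from the logged values (both pull timestamps fall within minute 17:08, so their seconds fields can be subtracted as plain numbers):

from decimal import Decimal

e2e           = Decimal("15.584945292")   # podStartE2EDuration
pull_started  = Decimal("1.647279936")    # firstStartedPulling, seconds past 17:08
pull_finished = Decimal("15.912658983")   # lastFinishedPulling, seconds past 17:08

slo = e2e - (pull_finished - pull_started)
print(slo)  # 1.319566245, matching the logged podStartSLOduration

The cilium-gppwc entry further down (podStartSLOduration=5.498166031, pull window 17:08:01.571782389 to 17:08:13.650009992, E2E 17.576393634s) checks out the same way.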
Sep 12 17:08:16.861753 containerd[1511]: time="2025-09-12T17:08:16.861690633Z" level=info msg="shim disconnected" id=fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0 namespace=k8s.io Sep 12 17:08:16.861753 containerd[1511]: time="2025-09-12T17:08:16.861750325Z" level=warning msg="cleaning up after shim disconnected" id=fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0 namespace=k8s.io Sep 12 17:08:16.861884 containerd[1511]: time="2025-09-12T17:08:16.861759674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:08:17.558786 kubelet[2633]: E0912 17:08:17.558723 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:17.559530 kubelet[2633]: E0912 17:08:17.559493 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:17.578010 containerd[1511]: time="2025-09-12T17:08:17.577954311Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:08:17.604193 containerd[1511]: time="2025-09-12T17:08:17.604140205Z" level=info msg="CreateContainer within sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\"" Sep 12 17:08:17.604679 containerd[1511]: time="2025-09-12T17:08:17.604652245Z" level=info msg="StartContainer for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\"" Sep 12 17:08:17.644665 systemd[1]: Started cri-containerd-fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420.scope - libcontainer container fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420. Sep 12 17:08:17.695075 containerd[1511]: time="2025-09-12T17:08:17.694938428Z" level=info msg="StartContainer for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" returns successfully" Sep 12 17:08:17.835758 kubelet[2633]: I0912 17:08:17.835463 2633 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:08:17.883340 systemd[1]: Created slice kubepods-burstable-pod076ee75b_0107_4cfc_a1c1_6b5444985060.slice - libcontainer container kubepods-burstable-pod076ee75b_0107_4cfc_a1c1_6b5444985060.slice. Sep 12 17:08:17.891467 systemd[1]: Created slice kubepods-burstable-poda6266e47_5c39_452c_825d_ae9d0bd5f083.slice - libcontainer container kubepods-burstable-poda6266e47_5c39_452c_825d_ae9d0bd5f083.slice. 
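From the cilium-gppwc sandbox creation up to the cilium-agent start above, the same CreateContainer / StartContainer / scope-deactivated pattern repeats once per init container. A toy extraction of that ordering from abbreviated copies of the CreateContainer entries in this log:

import re

# Abbreviated copies of the "CreateContainer within sandbox ..." entries above;
# only the sandbox id prefix and the container name are kept.
entries = [
    'CreateContainer within sandbox "80e4be52..." for &ContainerMetadata{Name:mount-cgroup,Attempt:0,}',
    'CreateContainer within sandbox "80e4be52..." for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}',
    'CreateContainer within sandbox "80e4be52..." for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}',
    'CreateContainer within sandbox "80e4be52..." for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}',
    'CreateContainer within sandbox "80e4be52..." for &ContainerMetadata{Name:cilium-agent,Attempt:0,}',
]
for entry in entries:
    match = re.search(r"ContainerMetadata\{Name:([^,]+)", entry)
    if match:
        print(match.group(1))
# prints: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent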
Sep 12 17:08:17.926305 kubelet[2633]: I0912 17:08:17.926247 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/076ee75b-0107-4cfc-a1c1-6b5444985060-config-volume\") pod \"coredns-674b8bbfcf-l246z\" (UID: \"076ee75b-0107-4cfc-a1c1-6b5444985060\") " pod="kube-system/coredns-674b8bbfcf-l246z" Sep 12 17:08:17.926305 kubelet[2633]: I0912 17:08:17.926298 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ntt\" (UniqueName: \"kubernetes.io/projected/a6266e47-5c39-452c-825d-ae9d0bd5f083-kube-api-access-d2ntt\") pod \"coredns-674b8bbfcf-k55xk\" (UID: \"a6266e47-5c39-452c-825d-ae9d0bd5f083\") " pod="kube-system/coredns-674b8bbfcf-k55xk" Sep 12 17:08:17.926616 kubelet[2633]: I0912 17:08:17.926362 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6266e47-5c39-452c-825d-ae9d0bd5f083-config-volume\") pod \"coredns-674b8bbfcf-k55xk\" (UID: \"a6266e47-5c39-452c-825d-ae9d0bd5f083\") " pod="kube-system/coredns-674b8bbfcf-k55xk" Sep 12 17:08:17.926616 kubelet[2633]: I0912 17:08:17.926433 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qptn\" (UniqueName: \"kubernetes.io/projected/076ee75b-0107-4cfc-a1c1-6b5444985060-kube-api-access-5qptn\") pod \"coredns-674b8bbfcf-l246z\" (UID: \"076ee75b-0107-4cfc-a1c1-6b5444985060\") " pod="kube-system/coredns-674b8bbfcf-l246z" Sep 12 17:08:18.187654 kubelet[2633]: E0912 17:08:18.187504 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:18.189455 containerd[1511]: time="2025-09-12T17:08:18.188466498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l246z,Uid:076ee75b-0107-4cfc-a1c1-6b5444985060,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:18.195581 kubelet[2633]: E0912 17:08:18.195542 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:18.196155 containerd[1511]: time="2025-09-12T17:08:18.196110617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k55xk,Uid:a6266e47-5c39-452c-825d-ae9d0bd5f083,Namespace:kube-system,Attempt:0,}" Sep 12 17:08:18.563580 kubelet[2633]: E0912 17:08:18.563525 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:18.576548 kubelet[2633]: I0912 17:08:18.576460 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gppwc" podStartSLOduration=5.498166031 podStartE2EDuration="17.576393634s" podCreationTimestamp="2025-09-12 17:08:01 +0000 UTC" firstStartedPulling="2025-09-12 17:08:01.571782389 +0000 UTC m=+6.258829318" lastFinishedPulling="2025-09-12 17:08:13.650009992 +0000 UTC m=+18.337056921" observedRunningTime="2025-09-12 17:08:18.575746994 +0000 UTC m=+23.262793923" watchObservedRunningTime="2025-09-12 17:08:18.576393634 +0000 UTC m=+23.263440553" Sep 12 17:08:19.566202 kubelet[2633]: E0912 17:08:19.566151 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:19.996707 systemd-networkd[1428]: cilium_host: Link UP Sep 12 17:08:19.996872 systemd-networkd[1428]: cilium_net: Link UP Sep 12 17:08:19.996877 systemd-networkd[1428]: cilium_net: Gained carrier Sep 12 17:08:19.997097 systemd-networkd[1428]: cilium_host: Gained carrier Sep 12 17:08:19.998614 systemd-networkd[1428]: cilium_host: Gained IPv6LL Sep 12 17:08:20.123787 systemd-networkd[1428]: cilium_vxlan: Link UP Sep 12 17:08:20.123801 systemd-networkd[1428]: cilium_vxlan: Gained carrier Sep 12 17:08:20.356438 kernel: NET: Registered PF_ALG protocol family Sep 12 17:08:20.568497 kubelet[2633]: E0912 17:08:20.568455 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:20.701634 systemd-networkd[1428]: cilium_net: Gained IPv6LL Sep 12 17:08:21.082142 systemd-networkd[1428]: lxc_health: Link UP Sep 12 17:08:21.090737 systemd-networkd[1428]: lxc_health: Gained carrier Sep 12 17:08:21.213605 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL Sep 12 17:08:21.285094 systemd-networkd[1428]: lxc606f21161d00: Link UP Sep 12 17:08:21.285431 kernel: eth0: renamed from tmpb8361 Sep 12 17:08:21.304447 kernel: eth0: renamed from tmpf9b43 Sep 12 17:08:21.312873 systemd-networkd[1428]: lxc606f21161d00: Gained carrier Sep 12 17:08:21.313462 systemd-networkd[1428]: lxc4a1a37199ab0: Link UP Sep 12 17:08:21.315535 systemd-networkd[1428]: lxc4a1a37199ab0: Gained carrier Sep 12 17:08:21.571047 kubelet[2633]: E0912 17:08:21.570863 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:22.210717 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:47034.service - OpenSSH per-connection server daemon (10.0.0.1:47034). Sep 12 17:08:22.254151 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 47034 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:22.257328 sshd-session[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:22.267071 systemd-logind[1497]: New session 8 of user core. Sep 12 17:08:22.273691 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:08:22.429690 systemd-networkd[1428]: lxc606f21161d00: Gained IPv6LL Sep 12 17:08:22.430054 systemd-networkd[1428]: lxc_health: Gained IPv6LL Sep 12 17:08:22.578520 kubelet[2633]: E0912 17:08:22.575451 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:22.608808 sshd[3868]: Connection closed by 10.0.0.1 port 47034 Sep 12 17:08:22.609276 sshd-session[3866]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:22.614527 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:08:22.614894 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:47034.service: Deactivated successfully. Sep 12 17:08:22.617956 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:08:22.619127 systemd-logind[1497]: Removed session 8. 
Sep 12 17:08:23.325645 systemd-networkd[1428]: lxc4a1a37199ab0: Gained IPv6LL Sep 12 17:08:23.574686 kubelet[2633]: E0912 17:08:23.574646 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:24.735336 containerd[1511]: time="2025-09-12T17:08:24.735155677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:24.735336 containerd[1511]: time="2025-09-12T17:08:24.735206848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:24.735336 containerd[1511]: time="2025-09-12T17:08:24.735217731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:24.735336 containerd[1511]: time="2025-09-12T17:08:24.735290617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:24.748032 containerd[1511]: time="2025-09-12T17:08:24.747758848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:08:24.748032 containerd[1511]: time="2025-09-12T17:08:24.747835201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:08:24.748032 containerd[1511]: time="2025-09-12T17:08:24.747849258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:24.748679 containerd[1511]: time="2025-09-12T17:08:24.748639348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:08:24.775558 systemd[1]: Started cri-containerd-b8361ffe48b4a65438dcc15dad90739bc238f5e29eb899c9916d6d4c62b21d04.scope - libcontainer container b8361ffe48b4a65438dcc15dad90739bc238f5e29eb899c9916d6d4c62b21d04. Sep 12 17:08:24.777177 systemd[1]: Started cri-containerd-f9b43e6361a0b5e2c62b3782fa22c723f9946723ff48dfb92637b99b39f41669.scope - libcontainer container f9b43e6361a0b5e2c62b3782fa22c723f9946723ff48dfb92637b99b39f41669. 
Sep 12 17:08:24.788999 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:08:24.792372 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:08:24.819367 containerd[1511]: time="2025-09-12T17:08:24.819323346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k55xk,Uid:a6266e47-5c39-452c-825d-ae9d0bd5f083,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8361ffe48b4a65438dcc15dad90739bc238f5e29eb899c9916d6d4c62b21d04\"" Sep 12 17:08:24.820290 kubelet[2633]: E0912 17:08:24.820255 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:24.823590 containerd[1511]: time="2025-09-12T17:08:24.823561010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l246z,Uid:076ee75b-0107-4cfc-a1c1-6b5444985060,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9b43e6361a0b5e2c62b3782fa22c723f9946723ff48dfb92637b99b39f41669\"" Sep 12 17:08:24.824253 kubelet[2633]: E0912 17:08:24.824231 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:24.828910 containerd[1511]: time="2025-09-12T17:08:24.828874796Z" level=info msg="CreateContainer within sandbox \"b8361ffe48b4a65438dcc15dad90739bc238f5e29eb899c9916d6d4c62b21d04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:08:24.831332 containerd[1511]: time="2025-09-12T17:08:24.831263953Z" level=info msg="CreateContainer within sandbox \"f9b43e6361a0b5e2c62b3782fa22c723f9946723ff48dfb92637b99b39f41669\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:08:24.859021 containerd[1511]: time="2025-09-12T17:08:24.858961978Z" level=info msg="CreateContainer within sandbox \"f9b43e6361a0b5e2c62b3782fa22c723f9946723ff48dfb92637b99b39f41669\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"540209bd805c2585312057af03499c743566d36e3944d5a25d6a58d8147cd8fb\"" Sep 12 17:08:24.859171 containerd[1511]: time="2025-09-12T17:08:24.859041557Z" level=info msg="CreateContainer within sandbox \"b8361ffe48b4a65438dcc15dad90739bc238f5e29eb899c9916d6d4c62b21d04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c06491e6f1d0ef2c0ff0d8a0869d6487ce446a45cf00d166c46e78e7bfe09d5\"" Sep 12 17:08:24.859580 containerd[1511]: time="2025-09-12T17:08:24.859484152Z" level=info msg="StartContainer for \"540209bd805c2585312057af03499c743566d36e3944d5a25d6a58d8147cd8fb\"" Sep 12 17:08:24.859671 containerd[1511]: time="2025-09-12T17:08:24.859617248Z" level=info msg="StartContainer for \"2c06491e6f1d0ef2c0ff0d8a0869d6487ce446a45cf00d166c46e78e7bfe09d5\"" Sep 12 17:08:24.887384 systemd[1]: Started cri-containerd-540209bd805c2585312057af03499c743566d36e3944d5a25d6a58d8147cd8fb.scope - libcontainer container 540209bd805c2585312057af03499c743566d36e3944d5a25d6a58d8147cd8fb. Sep 12 17:08:24.897546 systemd[1]: Started cri-containerd-2c06491e6f1d0ef2c0ff0d8a0869d6487ce446a45cf00d166c46e78e7bfe09d5.scope - libcontainer container 2c06491e6f1d0ef2c0ff0d8a0869d6487ce446a45cf00d166c46e78e7bfe09d5. 
Sep 12 17:08:24.929018 containerd[1511]: time="2025-09-12T17:08:24.928971397Z" level=info msg="StartContainer for \"2c06491e6f1d0ef2c0ff0d8a0869d6487ce446a45cf00d166c46e78e7bfe09d5\" returns successfully" Sep 12 17:08:24.929158 containerd[1511]: time="2025-09-12T17:08:24.928973882Z" level=info msg="StartContainer for \"540209bd805c2585312057af03499c743566d36e3944d5a25d6a58d8147cd8fb\" returns successfully" Sep 12 17:08:25.578799 kubelet[2633]: E0912 17:08:25.578746 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:25.581256 kubelet[2633]: E0912 17:08:25.581148 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:25.612730 kubelet[2633]: I0912 17:08:25.611514 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l246z" podStartSLOduration=24.611490079 podStartE2EDuration="24.611490079s" podCreationTimestamp="2025-09-12 17:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:25.588836532 +0000 UTC m=+30.275883461" watchObservedRunningTime="2025-09-12 17:08:25.611490079 +0000 UTC m=+30.298537008" Sep 12 17:08:25.626706 kubelet[2633]: I0912 17:08:25.625449 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k55xk" podStartSLOduration=24.625387931 podStartE2EDuration="24.625387931s" podCreationTimestamp="2025-09-12 17:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:08:25.624858744 +0000 UTC m=+30.311905673" watchObservedRunningTime="2025-09-12 17:08:25.625387931 +0000 UTC m=+30.312434860" Sep 12 17:08:25.745220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875571783.mount: Deactivated successfully. Sep 12 17:08:26.583084 kubelet[2633]: E0912 17:08:26.583031 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:26.583084 kubelet[2633]: E0912 17:08:26.583074 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:27.584312 kubelet[2633]: E0912 17:08:27.584245 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:27.584312 kubelet[2633]: E0912 17:08:27.584290 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:08:27.624296 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:47048.service - OpenSSH per-connection server daemon (10.0.0.1:47048). 
Sep 12 17:08:27.670477 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 47048 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:27.672219 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:27.676940 systemd-logind[1497]: New session 9 of user core. Sep 12 17:08:27.689561 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:08:27.908999 sshd[4063]: Connection closed by 10.0.0.1 port 47048 Sep 12 17:08:27.909140 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:27.914662 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:47048.service: Deactivated successfully. Sep 12 17:08:27.917196 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:08:27.917971 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:08:27.918849 systemd-logind[1497]: Removed session 9. Sep 12 17:08:32.921255 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:51390.service - OpenSSH per-connection server daemon (10.0.0.1:51390). Sep 12 17:08:32.963495 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 51390 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:32.965221 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:32.969565 systemd-logind[1497]: New session 10 of user core. Sep 12 17:08:32.980550 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:08:33.121084 sshd[4081]: Connection closed by 10.0.0.1 port 51390 Sep 12 17:08:33.121512 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:33.125297 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:51390.service: Deactivated successfully. Sep 12 17:08:33.127272 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:08:33.127950 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:08:33.128840 systemd-logind[1497]: Removed session 10. Sep 12 17:08:38.134902 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:51394.service - OpenSSH per-connection server daemon (10.0.0.1:51394). Sep 12 17:08:38.180235 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 51394 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:38.181713 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:38.186074 systemd-logind[1497]: New session 11 of user core. Sep 12 17:08:38.194547 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:08:38.330264 sshd[4097]: Connection closed by 10.0.0.1 port 51394 Sep 12 17:08:38.330699 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:38.335017 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:51394.service: Deactivated successfully. Sep 12 17:08:38.337269 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:08:38.338043 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:08:38.339071 systemd-logind[1497]: Removed session 11. Sep 12 17:08:43.350225 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106). 
Sep 12 17:08:43.396781 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:43.398692 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:43.403242 systemd-logind[1497]: New session 12 of user core. Sep 12 17:08:43.413545 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:08:43.536839 sshd[4114]: Connection closed by 10.0.0.1 port 45106 Sep 12 17:08:43.537418 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:43.548844 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:45106.service: Deactivated successfully. Sep 12 17:08:43.551800 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:08:43.553998 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:08:43.561962 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:45112.service - OpenSSH per-connection server daemon (10.0.0.1:45112). Sep 12 17:08:43.563360 systemd-logind[1497]: Removed session 12. Sep 12 17:08:43.599814 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 45112 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:43.601523 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:43.607107 systemd-logind[1497]: New session 13 of user core. Sep 12 17:08:43.617610 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:08:43.793979 sshd[4130]: Connection closed by 10.0.0.1 port 45112 Sep 12 17:08:43.795378 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:43.806656 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:45112.service: Deactivated successfully. Sep 12 17:08:43.809539 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:08:43.811179 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:08:43.821851 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:45118.service - OpenSSH per-connection server daemon (10.0.0.1:45118). Sep 12 17:08:43.822921 systemd-logind[1497]: Removed session 13. Sep 12 17:08:43.858438 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 45118 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:43.859965 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:43.864517 systemd-logind[1497]: New session 14 of user core. Sep 12 17:08:43.882548 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:08:43.992490 sshd[4144]: Connection closed by 10.0.0.1 port 45118 Sep 12 17:08:43.992910 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:43.997298 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:45118.service: Deactivated successfully. Sep 12 17:08:43.999430 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:08:44.000109 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:08:44.000988 systemd-logind[1497]: Removed session 14. Sep 12 17:08:49.006627 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). 
Sep 12 17:08:49.048136 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:49.049616 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:49.053978 systemd-logind[1497]: New session 15 of user core. Sep 12 17:08:49.065547 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:08:49.171182 sshd[4161]: Connection closed by 10.0.0.1 port 45122 Sep 12 17:08:49.171562 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:49.176156 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:45122.service: Deactivated successfully. Sep 12 17:08:49.178392 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:08:49.179055 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:08:49.179879 systemd-logind[1497]: Removed session 15. Sep 12 17:08:54.184284 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:47884.service - OpenSSH per-connection server daemon (10.0.0.1:47884). Sep 12 17:08:54.227366 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:54.228854 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:54.232788 systemd-logind[1497]: New session 16 of user core. Sep 12 17:08:54.238527 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:08:54.348756 sshd[4177]: Connection closed by 10.0.0.1 port 47884 Sep 12 17:08:54.349173 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:54.362078 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:47884.service: Deactivated successfully. Sep 12 17:08:54.363962 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:08:54.365729 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:08:54.367105 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:47896.service - OpenSSH per-connection server daemon (10.0.0.1:47896). Sep 12 17:08:54.367941 systemd-logind[1497]: Removed session 16. Sep 12 17:08:54.408648 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 47896 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:54.410226 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:54.414466 systemd-logind[1497]: New session 17 of user core. Sep 12 17:08:54.429542 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:08:54.720980 sshd[4193]: Connection closed by 10.0.0.1 port 47896 Sep 12 17:08:54.721472 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:54.738892 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:47896.service: Deactivated successfully. Sep 12 17:08:54.741478 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:08:54.743941 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:08:54.749841 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:47910.service - OpenSSH per-connection server daemon (10.0.0.1:47910). Sep 12 17:08:54.750800 systemd-logind[1497]: Removed session 17. 
Sep 12 17:08:54.793977 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 47910 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:54.795460 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:54.799733 systemd-logind[1497]: New session 18 of user core. Sep 12 17:08:54.806541 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:08:55.317219 sshd[4207]: Connection closed by 10.0.0.1 port 47910 Sep 12 17:08:55.317863 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:55.333652 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:47910.service: Deactivated successfully. Sep 12 17:08:55.336278 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:08:55.337081 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:08:55.345472 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:47922.service - OpenSSH per-connection server daemon (10.0.0.1:47922). Sep 12 17:08:55.349999 systemd-logind[1497]: Removed session 18. Sep 12 17:08:55.387539 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 47922 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:55.389332 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:55.394492 systemd-logind[1497]: New session 19 of user core. Sep 12 17:08:55.406553 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:08:55.689770 sshd[4230]: Connection closed by 10.0.0.1 port 47922 Sep 12 17:08:55.692382 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:55.703414 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:47922.service: Deactivated successfully. Sep 12 17:08:55.705491 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:08:55.706994 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:08:55.719816 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:47928.service - OpenSSH per-connection server daemon (10.0.0.1:47928). Sep 12 17:08:55.721245 systemd-logind[1497]: Removed session 19. Sep 12 17:08:55.757745 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 47928 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:08:55.759342 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:08:55.764491 systemd-logind[1497]: New session 20 of user core. Sep 12 17:08:55.771557 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:08:55.891122 sshd[4243]: Connection closed by 10.0.0.1 port 47928 Sep 12 17:08:55.891532 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Sep 12 17:08:55.896152 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:47928.service: Deactivated successfully. Sep 12 17:08:55.898513 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:08:55.899342 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:08:55.900225 systemd-logind[1497]: Removed session 20. Sep 12 17:09:00.904611 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:43422.service - OpenSSH per-connection server daemon (10.0.0.1:43422). 
Sep 12 17:09:00.946687 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 43422 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:00.948693 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:00.953942 systemd-logind[1497]: New session 21 of user core. Sep 12 17:09:00.959562 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:09:01.076778 sshd[4258]: Connection closed by 10.0.0.1 port 43422 Sep 12 17:09:01.077172 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:01.081680 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:43422.service: Deactivated successfully. Sep 12 17:09:01.084193 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:09:01.085089 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:09:01.086005 systemd-logind[1497]: Removed session 21. Sep 12 17:09:06.094819 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:43434.service - OpenSSH per-connection server daemon (10.0.0.1:43434). Sep 12 17:09:06.138626 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 43434 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:06.140111 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:06.144306 systemd-logind[1497]: New session 22 of user core. Sep 12 17:09:06.155601 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:09:06.272655 sshd[4278]: Connection closed by 10.0.0.1 port 43434 Sep 12 17:09:06.273080 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:06.277391 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:43434.service: Deactivated successfully. Sep 12 17:09:06.279695 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:09:06.280436 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:09:06.281455 systemd-logind[1497]: Removed session 22. Sep 12 17:09:10.390631 kubelet[2633]: E0912 17:09:10.390543 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:11.286824 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:41862.service - OpenSSH per-connection server daemon (10.0.0.1:41862). Sep 12 17:09:11.327983 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 41862 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:11.329637 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:11.333847 systemd-logind[1497]: New session 23 of user core. Sep 12 17:09:11.342536 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:09:11.449338 sshd[4293]: Connection closed by 10.0.0.1 port 41862 Sep 12 17:09:11.449773 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:11.460876 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:41862.service: Deactivated successfully. Sep 12 17:09:11.463462 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:09:11.465148 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:09:11.468905 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:41878.service - OpenSSH per-connection server daemon (10.0.0.1:41878). Sep 12 17:09:11.470100 systemd-logind[1497]: Removed session 23. 
Sep 12 17:09:11.507658 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 41878 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:11.509077 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:11.513428 systemd-logind[1497]: New session 24 of user core. Sep 12 17:09:11.529582 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:09:13.067175 containerd[1511]: time="2025-09-12T17:09:13.067086536Z" level=info msg="StopContainer for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" with timeout 30 (s)" Sep 12 17:09:13.068220 containerd[1511]: time="2025-09-12T17:09:13.067582130Z" level=info msg="Stop container \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" with signal terminated" Sep 12 17:09:13.085574 systemd[1]: cri-containerd-65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef.scope: Deactivated successfully. Sep 12 17:09:13.103705 systemd[1]: run-containerd-runc-k8s.io-fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420-runc.0UkpBT.mount: Deactivated successfully. Sep 12 17:09:13.113952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef-rootfs.mount: Deactivated successfully. Sep 12 17:09:13.119727 containerd[1511]: time="2025-09-12T17:09:13.119629736Z" level=info msg="shim disconnected" id=65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef namespace=k8s.io Sep 12 17:09:13.119727 containerd[1511]: time="2025-09-12T17:09:13.119715107Z" level=warning msg="cleaning up after shim disconnected" id=65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef namespace=k8s.io Sep 12 17:09:13.119727 containerd[1511]: time="2025-09-12T17:09:13.119730015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:13.133493 containerd[1511]: time="2025-09-12T17:09:13.133433968Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:09:13.137599 containerd[1511]: time="2025-09-12T17:09:13.137543842Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:09:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:09:13.142707 containerd[1511]: time="2025-09-12T17:09:13.142664592Z" level=info msg="StopContainer for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" returns successfully" Sep 12 17:09:13.143482 containerd[1511]: time="2025-09-12T17:09:13.143441786Z" level=info msg="StopPodSandbox for \"ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232\"" Sep 12 17:09:13.150678 containerd[1511]: time="2025-09-12T17:09:13.143500968Z" level=info msg="Container to stop \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:09:13.153836 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232-shm.mount: Deactivated successfully. Sep 12 17:09:13.158439 systemd[1]: cri-containerd-ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232.scope: Deactivated successfully. 
Sep 12 17:09:13.162385 containerd[1511]: time="2025-09-12T17:09:13.162340749Z" level=info msg="StopContainer for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" with timeout 2 (s)" Sep 12 17:09:13.162661 containerd[1511]: time="2025-09-12T17:09:13.162639732Z" level=info msg="Stop container \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" with signal terminated" Sep 12 17:09:13.170350 systemd-networkd[1428]: lxc_health: Link DOWN Sep 12 17:09:13.170364 systemd-networkd[1428]: lxc_health: Lost carrier Sep 12 17:09:13.183853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232-rootfs.mount: Deactivated successfully. Sep 12 17:09:13.192305 containerd[1511]: time="2025-09-12T17:09:13.192219852Z" level=info msg="shim disconnected" id=ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232 namespace=k8s.io Sep 12 17:09:13.192305 containerd[1511]: time="2025-09-12T17:09:13.192286667Z" level=warning msg="cleaning up after shim disconnected" id=ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232 namespace=k8s.io Sep 12 17:09:13.192305 containerd[1511]: time="2025-09-12T17:09:13.192302417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:13.197867 systemd[1]: cri-containerd-fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420.scope: Deactivated successfully. Sep 12 17:09:13.198589 systemd[1]: cri-containerd-fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420.scope: Consumed 7.091s CPU time, 127.7M memory peak, 780K read from disk, 13.3M written to disk. Sep 12 17:09:13.210161 containerd[1511]: time="2025-09-12T17:09:13.210087159Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:09:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:09:13.212357 containerd[1511]: time="2025-09-12T17:09:13.212306892Z" level=info msg="TearDown network for sandbox \"ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232\" successfully" Sep 12 17:09:13.212357 containerd[1511]: time="2025-09-12T17:09:13.212348321Z" level=info msg="StopPodSandbox for \"ecacbe3837adbe888f67c0d7ce0e6e2d59be89f3ff278e1e04d00e802d1cb232\" returns successfully" Sep 12 17:09:13.256537 kubelet[2633]: I0912 17:09:13.256471 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgk4m\" (UniqueName: \"kubernetes.io/projected/6297b608-f375-4e4f-abda-ac622d8926c9-kube-api-access-sgk4m\") pod \"6297b608-f375-4e4f-abda-ac622d8926c9\" (UID: \"6297b608-f375-4e4f-abda-ac622d8926c9\") " Sep 12 17:09:13.256537 kubelet[2633]: I0912 17:09:13.256532 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6297b608-f375-4e4f-abda-ac622d8926c9-cilium-config-path\") pod \"6297b608-f375-4e4f-abda-ac622d8926c9\" (UID: \"6297b608-f375-4e4f-abda-ac622d8926c9\") " Sep 12 17:09:13.261339 kubelet[2633]: I0912 17:09:13.261276 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6297b608-f375-4e4f-abda-ac622d8926c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6297b608-f375-4e4f-abda-ac622d8926c9" (UID: "6297b608-f375-4e4f-abda-ac622d8926c9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:09:13.271046 kubelet[2633]: I0912 17:09:13.270982 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6297b608-f375-4e4f-abda-ac622d8926c9-kube-api-access-sgk4m" (OuterVolumeSpecName: "kube-api-access-sgk4m") pod "6297b608-f375-4e4f-abda-ac622d8926c9" (UID: "6297b608-f375-4e4f-abda-ac622d8926c9"). InnerVolumeSpecName "kube-api-access-sgk4m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:09:13.305093 containerd[1511]: time="2025-09-12T17:09:13.304990040Z" level=info msg="shim disconnected" id=fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420 namespace=k8s.io Sep 12 17:09:13.305093 containerd[1511]: time="2025-09-12T17:09:13.305053449Z" level=warning msg="cleaning up after shim disconnected" id=fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420 namespace=k8s.io Sep 12 17:09:13.305093 containerd[1511]: time="2025-09-12T17:09:13.305065372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:13.325868 containerd[1511]: time="2025-09-12T17:09:13.325692892Z" level=info msg="StopContainer for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" returns successfully" Sep 12 17:09:13.326433 containerd[1511]: time="2025-09-12T17:09:13.326369967Z" level=info msg="StopPodSandbox for \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\"" Sep 12 17:09:13.326665 containerd[1511]: time="2025-09-12T17:09:13.326430832Z" level=info msg="Container to stop \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:09:13.326665 containerd[1511]: time="2025-09-12T17:09:13.326493249Z" level=info msg="Container to stop \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:09:13.326665 containerd[1511]: time="2025-09-12T17:09:13.326521333Z" level=info msg="Container to stop \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:09:13.326665 containerd[1511]: time="2025-09-12T17:09:13.326537984Z" level=info msg="Container to stop \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:09:13.326665 containerd[1511]: time="2025-09-12T17:09:13.326554064Z" level=info msg="Container to stop \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:09:13.335101 systemd[1]: cri-containerd-80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88.scope: Deactivated successfully. 
Sep 12 17:09:13.357618 kubelet[2633]: I0912 17:09:13.357570 2633 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sgk4m\" (UniqueName: \"kubernetes.io/projected/6297b608-f375-4e4f-abda-ac622d8926c9-kube-api-access-sgk4m\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.357618 kubelet[2633]: I0912 17:09:13.357609 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6297b608-f375-4e4f-abda-ac622d8926c9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.362081 containerd[1511]: time="2025-09-12T17:09:13.361977244Z" level=info msg="shim disconnected" id=80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88 namespace=k8s.io Sep 12 17:09:13.362081 containerd[1511]: time="2025-09-12T17:09:13.362050963Z" level=warning msg="cleaning up after shim disconnected" id=80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88 namespace=k8s.io Sep 12 17:09:13.362081 containerd[1511]: time="2025-09-12T17:09:13.362061392Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:13.377878 containerd[1511]: time="2025-09-12T17:09:13.377820198Z" level=info msg="TearDown network for sandbox \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" successfully" Sep 12 17:09:13.377878 containerd[1511]: time="2025-09-12T17:09:13.377855103Z" level=info msg="StopPodSandbox for \"80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88\" returns successfully" Sep 12 17:09:13.411427 systemd[1]: Removed slice kubepods-besteffort-pod6297b608_f375_4e4f_abda_ac622d8926c9.slice - libcontainer container kubepods-besteffort-pod6297b608_f375_4e4f_abda_ac622d8926c9.slice. Sep 12 17:09:13.458203 kubelet[2633]: I0912 17:09:13.458139 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-run\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458203 kubelet[2633]: I0912 17:09:13.458187 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-etc-cni-netd\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458203 kubelet[2633]: I0912 17:09:13.458206 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cni-path\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458203 kubelet[2633]: I0912 17:09:13.458219 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-lib-modules\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458522 kubelet[2633]: I0912 17:09:13.458242 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-hubble-tls\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458522 kubelet[2633]: I0912 17:09:13.458256 2633 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-cgroup\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458522 kubelet[2633]: I0912 17:09:13.458275 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-config-path\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458522 kubelet[2633]: I0912 17:09:13.458292 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfprz\" (UniqueName: \"kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-kube-api-access-pfprz\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458522 kubelet[2633]: I0912 17:09:13.458307 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-xtables-lock\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458522 kubelet[2633]: I0912 17:09:13.458319 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-hostproc\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458738 kubelet[2633]: I0912 17:09:13.458337 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-net\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458738 kubelet[2633]: I0912 17:09:13.458351 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-kernel\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.458738 kubelet[2633]: I0912 17:09:13.458368 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0554f14-d913-431f-8808-00c2d67c6fd5-clustermesh-secrets\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.459288 kubelet[2633]: I0912 17:09:13.458327 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.459334 kubelet[2633]: I0912 17:09:13.458337 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.459334 kubelet[2633]: I0912 17:09:13.458389 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.459334 kubelet[2633]: I0912 17:09:13.459240 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.459334 kubelet[2633]: I0912 17:09:13.459256 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.460722 kubelet[2633]: I0912 17:09:13.460445 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.460722 kubelet[2633]: I0912 17:09:13.460497 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.460722 kubelet[2633]: I0912 17:09:13.460524 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.460722 kubelet[2633]: I0912 17:09:13.460560 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-bpf-maps\") pod \"b0554f14-d913-431f-8808-00c2d67c6fd5\" (UID: \"b0554f14-d913-431f-8808-00c2d67c6fd5\") " Sep 12 17:09:13.460722 kubelet[2633]: I0912 17:09:13.460622 2633 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460722 kubelet[2633]: I0912 17:09:13.460633 2633 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460641 2633 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460654 2633 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460662 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460670 2633 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460678 2633 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460685 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.460904 kubelet[2633]: I0912 17:09:13.460703 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.462378 kubelet[2633]: I0912 17:09:13.461092 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:09:13.462590 kubelet[2633]: I0912 17:09:13.462519 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-kube-api-access-pfprz" (OuterVolumeSpecName: "kube-api-access-pfprz") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "kube-api-access-pfprz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:09:13.462831 kubelet[2633]: I0912 17:09:13.462713 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:09:13.462831 kubelet[2633]: I0912 17:09:13.462752 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0554f14-d913-431f-8808-00c2d67c6fd5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:09:13.462930 kubelet[2633]: I0912 17:09:13.462866 2633 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0554f14-d913-431f-8808-00c2d67c6fd5" (UID: "b0554f14-d913-431f-8808-00c2d67c6fd5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:09:13.561501 kubelet[2633]: I0912 17:09:13.561439 2633 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.561501 kubelet[2633]: I0912 17:09:13.561488 2633 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.561501 kubelet[2633]: I0912 17:09:13.561512 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0554f14-d913-431f-8808-00c2d67c6fd5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.561501 kubelet[2633]: I0912 17:09:13.561522 2633 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfprz\" (UniqueName: \"kubernetes.io/projected/b0554f14-d913-431f-8808-00c2d67c6fd5-kube-api-access-pfprz\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.561501 kubelet[2633]: I0912 17:09:13.561532 2633 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0554f14-d913-431f-8808-00c2d67c6fd5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.561501 kubelet[2633]: I0912 17:09:13.561540 2633 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0554f14-d913-431f-8808-00c2d67c6fd5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:09:13.666760 kubelet[2633]: I0912 17:09:13.666114 2633 scope.go:117] "RemoveContainer" containerID="65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef" Sep 12 17:09:13.672737 containerd[1511]: time="2025-09-12T17:09:13.672677252Z" level=info msg="RemoveContainer for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\"" Sep 12 17:09:13.678090 systemd[1]: Removed slice kubepods-burstable-podb0554f14_d913_431f_8808_00c2d67c6fd5.slice - libcontainer container kubepods-burstable-podb0554f14_d913_431f_8808_00c2d67c6fd5.slice. Sep 12 17:09:13.678222 systemd[1]: kubepods-burstable-podb0554f14_d913_431f_8808_00c2d67c6fd5.slice: Consumed 7.204s CPU time, 128M memory peak, 804K read from disk, 13.3M written to disk. 
Sep 12 17:09:13.680227 containerd[1511]: time="2025-09-12T17:09:13.680171243Z" level=info msg="RemoveContainer for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" returns successfully" Sep 12 17:09:13.681262 kubelet[2633]: I0912 17:09:13.680795 2633 scope.go:117] "RemoveContainer" containerID="65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef" Sep 12 17:09:13.681350 containerd[1511]: time="2025-09-12T17:09:13.681103400Z" level=error msg="ContainerStatus for \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\": not found" Sep 12 17:09:13.681457 kubelet[2633]: E0912 17:09:13.681381 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\": not found" containerID="65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef" Sep 12 17:09:13.681526 kubelet[2633]: I0912 17:09:13.681466 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef"} err="failed to get container status \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\": rpc error: code = NotFound desc = an error occurred when try to find container \"65226dfa8c537a945e398e484aac37d963218cf1b98d1232d781a0439003ddef\": not found" Sep 12 17:09:13.681579 kubelet[2633]: I0912 17:09:13.681526 2633 scope.go:117] "RemoveContainer" containerID="fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420" Sep 12 17:09:13.683885 containerd[1511]: time="2025-09-12T17:09:13.683272066Z" level=info msg="RemoveContainer for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\"" Sep 12 17:09:13.688090 containerd[1511]: time="2025-09-12T17:09:13.688043006Z" level=info msg="RemoveContainer for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" returns successfully" Sep 12 17:09:13.688315 kubelet[2633]: I0912 17:09:13.688268 2633 scope.go:117] "RemoveContainer" containerID="fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0" Sep 12 17:09:13.690632 containerd[1511]: time="2025-09-12T17:09:13.690312233Z" level=info msg="RemoveContainer for \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\"" Sep 12 17:09:13.695176 containerd[1511]: time="2025-09-12T17:09:13.695132315Z" level=info msg="RemoveContainer for \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\" returns successfully" Sep 12 17:09:13.695366 kubelet[2633]: I0912 17:09:13.695335 2633 scope.go:117] "RemoveContainer" containerID="186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932" Sep 12 17:09:13.716547 containerd[1511]: time="2025-09-12T17:09:13.716513935Z" level=info msg="RemoveContainer for \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\"" Sep 12 17:09:13.720865 containerd[1511]: time="2025-09-12T17:09:13.720814149Z" level=info msg="RemoveContainer for \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\" returns successfully" Sep 12 17:09:13.721111 kubelet[2633]: I0912 17:09:13.721053 2633 scope.go:117] "RemoveContainer" containerID="95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab" Sep 12 17:09:13.728237 containerd[1511]: 
time="2025-09-12T17:09:13.728190478Z" level=info msg="RemoveContainer for \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\"" Sep 12 17:09:13.732040 containerd[1511]: time="2025-09-12T17:09:13.732004745Z" level=info msg="RemoveContainer for \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\" returns successfully" Sep 12 17:09:13.732189 kubelet[2633]: I0912 17:09:13.732151 2633 scope.go:117] "RemoveContainer" containerID="b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560" Sep 12 17:09:13.733282 containerd[1511]: time="2025-09-12T17:09:13.733250703Z" level=info msg="RemoveContainer for \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\"" Sep 12 17:09:13.736695 containerd[1511]: time="2025-09-12T17:09:13.736662994Z" level=info msg="RemoveContainer for \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\" returns successfully" Sep 12 17:09:13.736837 kubelet[2633]: I0912 17:09:13.736812 2633 scope.go:117] "RemoveContainer" containerID="fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420" Sep 12 17:09:13.737091 containerd[1511]: time="2025-09-12T17:09:13.737014786Z" level=error msg="ContainerStatus for \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\": not found" Sep 12 17:09:13.737173 kubelet[2633]: E0912 17:09:13.737150 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\": not found" containerID="fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420" Sep 12 17:09:13.737211 kubelet[2633]: I0912 17:09:13.737178 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420"} err="failed to get container status \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420\": not found" Sep 12 17:09:13.737211 kubelet[2633]: I0912 17:09:13.737199 2633 scope.go:117] "RemoveContainer" containerID="fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0" Sep 12 17:09:13.737392 containerd[1511]: time="2025-09-12T17:09:13.737357923Z" level=error msg="ContainerStatus for \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\": not found" Sep 12 17:09:13.737537 kubelet[2633]: E0912 17:09:13.737495 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\": not found" containerID="fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0" Sep 12 17:09:13.737571 kubelet[2633]: I0912 17:09:13.737537 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0"} err="failed to get container status 
\"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe67b96570de29faaf693e8ba3dc614bd4f849c1304321288cf04ca9582f24e0\": not found" Sep 12 17:09:13.737571 kubelet[2633]: I0912 17:09:13.737554 2633 scope.go:117] "RemoveContainer" containerID="186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932" Sep 12 17:09:13.737786 containerd[1511]: time="2025-09-12T17:09:13.737732028Z" level=error msg="ContainerStatus for \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\": not found" Sep 12 17:09:13.737885 kubelet[2633]: E0912 17:09:13.737864 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\": not found" containerID="186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932" Sep 12 17:09:13.737933 kubelet[2633]: I0912 17:09:13.737886 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932"} err="failed to get container status \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\": rpc error: code = NotFound desc = an error occurred when try to find container \"186b0a82420b59ccddf077e3be68f9169f99e1d2ae0eb05ab3d90f7c97b87932\": not found" Sep 12 17:09:13.737933 kubelet[2633]: I0912 17:09:13.737898 2633 scope.go:117] "RemoveContainer" containerID="95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab" Sep 12 17:09:13.738069 containerd[1511]: time="2025-09-12T17:09:13.738038315Z" level=error msg="ContainerStatus for \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\": not found" Sep 12 17:09:13.738192 kubelet[2633]: E0912 17:09:13.738173 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\": not found" containerID="95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab" Sep 12 17:09:13.738229 kubelet[2633]: I0912 17:09:13.738191 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab"} err="failed to get container status \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"95332cb5f96766c21b38a0234c506e93146db0341fae4e3b584648f8f1d4a0ab\": not found" Sep 12 17:09:13.738229 kubelet[2633]: I0912 17:09:13.738204 2633 scope.go:117] "RemoveContainer" containerID="b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560" Sep 12 17:09:13.738379 containerd[1511]: time="2025-09-12T17:09:13.738349331Z" level=error msg="ContainerStatus for \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\": not found" Sep 12 17:09:13.738494 kubelet[2633]: E0912 17:09:13.738473 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\": not found" containerID="b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560" Sep 12 17:09:13.738553 kubelet[2633]: I0912 17:09:13.738495 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560"} err="failed to get container status \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4e3aab5e7e5eb0335dceca93c345d55fbcde6e52fb44e75aad3fd7254f7a560\": not found" Sep 12 17:09:14.097250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb992324e59e6b66322ca7bab43c71472805bbd327e617eb5cefb00b0cf9b420-rootfs.mount: Deactivated successfully. Sep 12 17:09:14.097419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88-rootfs.mount: Deactivated successfully. Sep 12 17:09:14.097516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80e4be52ec43c9a5a9b668aca03139b3698d122d1e9a2a6db69cd001c4979b88-shm.mount: Deactivated successfully. Sep 12 17:09:14.097662 systemd[1]: var-lib-kubelet-pods-6297b608\x2df375\x2d4e4f\x2dabda\x2dac622d8926c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsgk4m.mount: Deactivated successfully. Sep 12 17:09:14.097775 systemd[1]: var-lib-kubelet-pods-b0554f14\x2dd913\x2d431f\x2d8808\x2d00c2d67c6fd5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfprz.mount: Deactivated successfully. Sep 12 17:09:14.097888 systemd[1]: var-lib-kubelet-pods-b0554f14\x2dd913\x2d431f\x2d8808\x2d00c2d67c6fd5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:09:14.098010 systemd[1]: var-lib-kubelet-pods-b0554f14\x2dd913\x2d431f\x2d8808\x2d00c2d67c6fd5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:09:15.022891 sshd[4308]: Connection closed by 10.0.0.1 port 41878 Sep 12 17:09:15.023626 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:15.036720 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:41878.service: Deactivated successfully. Sep 12 17:09:15.038924 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:09:15.041353 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:09:15.047712 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:41884.service - OpenSSH per-connection server daemon (10.0.0.1:41884). Sep 12 17:09:15.048821 systemd-logind[1497]: Removed session 24. Sep 12 17:09:15.088510 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 41884 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:15.090087 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:15.095074 systemd-logind[1497]: New session 25 of user core. Sep 12 17:09:15.106554 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 12 17:09:15.392055 kubelet[2633]: I0912 17:09:15.391902 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6297b608-f375-4e4f-abda-ac622d8926c9" path="/var/lib/kubelet/pods/6297b608-f375-4e4f-abda-ac622d8926c9/volumes" Sep 12 17:09:15.392544 kubelet[2633]: I0912 17:09:15.392531 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0554f14-d913-431f-8808-00c2d67c6fd5" path="/var/lib/kubelet/pods/b0554f14-d913-431f-8808-00c2d67c6fd5/volumes" Sep 12 17:09:15.455903 kubelet[2633]: E0912 17:09:15.455852 2633 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:09:15.627253 sshd[4472]: Connection closed by 10.0.0.1 port 41884 Sep 12 17:09:15.628803 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:15.641935 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:41884.service: Deactivated successfully. Sep 12 17:09:15.646955 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:09:15.648383 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:09:15.656913 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:41894.service - OpenSSH per-connection server daemon (10.0.0.1:41894). Sep 12 17:09:15.660361 systemd-logind[1497]: Removed session 25. Sep 12 17:09:15.678806 systemd[1]: Created slice kubepods-burstable-podd8d6ae51_7d0c_46b3_846a_47c36824b906.slice - libcontainer container kubepods-burstable-podd8d6ae51_7d0c_46b3_846a_47c36824b906.slice. Sep 12 17:09:15.700467 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 41894 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:15.702463 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:15.707738 systemd-logind[1497]: New session 26 of user core. Sep 12 17:09:15.723612 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 12 17:09:15.775429 sshd[4486]: Connection closed by 10.0.0.1 port 41894 Sep 12 17:09:15.775815 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:15.776820 kubelet[2633]: I0912 17:09:15.776753 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-bpf-maps\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.776820 kubelet[2633]: I0912 17:09:15.776798 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8d6ae51-7d0c-46b3-846a-47c36824b906-hubble-tls\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.776820 kubelet[2633]: I0912 17:09:15.776815 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsw2d\" (UniqueName: \"kubernetes.io/projected/d8d6ae51-7d0c-46b3-846a-47c36824b906-kube-api-access-rsw2d\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.776820 kubelet[2633]: I0912 17:09:15.776837 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8d6ae51-7d0c-46b3-846a-47c36824b906-clustermesh-secrets\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.776820 kubelet[2633]: I0912 17:09:15.776857 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-cilium-run\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.776820 kubelet[2633]: I0912 17:09:15.776874 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8d6ae51-7d0c-46b3-846a-47c36824b906-cilium-config-path\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777667 kubelet[2633]: I0912 17:09:15.776897 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-host-proc-sys-net\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777667 kubelet[2633]: I0912 17:09:15.776913 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-host-proc-sys-kernel\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777667 kubelet[2633]: I0912 17:09:15.776953 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-etc-cni-netd\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " 
pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777667 kubelet[2633]: I0912 17:09:15.776970 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-lib-modules\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777667 kubelet[2633]: I0912 17:09:15.776983 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8d6ae51-7d0c-46b3-846a-47c36824b906-cilium-ipsec-secrets\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777667 kubelet[2633]: I0912 17:09:15.776998 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-cni-path\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777814 kubelet[2633]: I0912 17:09:15.777014 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-xtables-lock\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777814 kubelet[2633]: I0912 17:09:15.777028 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-hostproc\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.777814 kubelet[2633]: I0912 17:09:15.777044 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8d6ae51-7d0c-46b3-846a-47c36824b906-cilium-cgroup\") pod \"cilium-2cdgb\" (UID: \"d8d6ae51-7d0c-46b3-846a-47c36824b906\") " pod="kube-system/cilium-2cdgb" Sep 12 17:09:15.789551 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:41894.service: Deactivated successfully. Sep 12 17:09:15.791569 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:09:15.793307 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:09:15.802742 systemd[1]: Started sshd@26-10.0.0.50:22-10.0.0.1:41910.service - OpenSSH per-connection server daemon (10.0.0.1:41910). Sep 12 17:09:15.803704 systemd-logind[1497]: Removed session 26. Sep 12 17:09:15.840930 sshd[4492]: Accepted publickey for core from 10.0.0.1 port 41910 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:09:15.842497 sshd-session[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:09:15.846903 systemd-logind[1497]: New session 27 of user core. Sep 12 17:09:15.857635 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 12 17:09:15.982565 kubelet[2633]: E0912 17:09:15.982518 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:15.983162 containerd[1511]: time="2025-09-12T17:09:15.983109243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cdgb,Uid:d8d6ae51-7d0c-46b3-846a-47c36824b906,Namespace:kube-system,Attempt:0,}" Sep 12 17:09:16.006834 containerd[1511]: time="2025-09-12T17:09:16.006653786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:09:16.006834 containerd[1511]: time="2025-09-12T17:09:16.006725321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:09:16.007100 containerd[1511]: time="2025-09-12T17:09:16.006737745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:09:16.007100 containerd[1511]: time="2025-09-12T17:09:16.007013486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:09:16.031706 systemd[1]: Started cri-containerd-a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941.scope - libcontainer container a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941. Sep 12 17:09:16.057064 containerd[1511]: time="2025-09-12T17:09:16.057002900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cdgb,Uid:d8d6ae51-7d0c-46b3-846a-47c36824b906,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\"" Sep 12 17:09:16.057886 kubelet[2633]: E0912 17:09:16.057837 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:16.063794 containerd[1511]: time="2025-09-12T17:09:16.063737389Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:09:16.077361 containerd[1511]: time="2025-09-12T17:09:16.077290988Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb\"" Sep 12 17:09:16.077949 containerd[1511]: time="2025-09-12T17:09:16.077886552Z" level=info msg="StartContainer for \"1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb\"" Sep 12 17:09:16.108601 systemd[1]: Started cri-containerd-1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb.scope - libcontainer container 1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb. Sep 12 17:09:16.135782 containerd[1511]: time="2025-09-12T17:09:16.135726974Z" level=info msg="StartContainer for \"1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb\" returns successfully" Sep 12 17:09:16.148326 systemd[1]: cri-containerd-1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb.scope: Deactivated successfully. 
Sep 12 17:09:16.228923 containerd[1511]: time="2025-09-12T17:09:16.228846898Z" level=info msg="shim disconnected" id=1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb namespace=k8s.io Sep 12 17:09:16.228923 containerd[1511]: time="2025-09-12T17:09:16.228905377Z" level=warning msg="cleaning up after shim disconnected" id=1dc11723c1e8df611235e49337425a531db8444797ee8834a137238da23f3edb namespace=k8s.io Sep 12 17:09:16.228923 containerd[1511]: time="2025-09-12T17:09:16.228914014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:16.681171 kubelet[2633]: E0912 17:09:16.681132 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:16.689735 containerd[1511]: time="2025-09-12T17:09:16.689560526Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:09:16.702448 containerd[1511]: time="2025-09-12T17:09:16.702362826Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1\"" Sep 12 17:09:16.703077 containerd[1511]: time="2025-09-12T17:09:16.703046577Z" level=info msg="StartContainer for \"5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1\"" Sep 12 17:09:16.730677 systemd[1]: Started cri-containerd-5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1.scope - libcontainer container 5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1. Sep 12 17:09:16.761980 containerd[1511]: time="2025-09-12T17:09:16.761923537Z" level=info msg="StartContainer for \"5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1\" returns successfully" Sep 12 17:09:16.767702 systemd[1]: cri-containerd-5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1.scope: Deactivated successfully. 
Sep 12 17:09:16.794351 containerd[1511]: time="2025-09-12T17:09:16.794287983Z" level=info msg="shim disconnected" id=5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1 namespace=k8s.io Sep 12 17:09:16.794351 containerd[1511]: time="2025-09-12T17:09:16.794343388Z" level=warning msg="cleaning up after shim disconnected" id=5c08a4cacfd2006197c69d6af29cc7a4bb517b94e440d5e6258049c769a49fd1 namespace=k8s.io Sep 12 17:09:16.794351 containerd[1511]: time="2025-09-12T17:09:16.794352445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:16.920465 kubelet[2633]: I0912 17:09:16.920385 2633 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:09:16Z","lastTransitionTime":"2025-09-12T17:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:09:17.684710 kubelet[2633]: E0912 17:09:17.684664 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:17.831065 containerd[1511]: time="2025-09-12T17:09:17.831014428Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:09:17.854591 containerd[1511]: time="2025-09-12T17:09:17.854544480Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381\"" Sep 12 17:09:17.855054 containerd[1511]: time="2025-09-12T17:09:17.855013076Z" level=info msg="StartContainer for \"5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381\"" Sep 12 17:09:17.888550 systemd[1]: Started cri-containerd-5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381.scope - libcontainer container 5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381. Sep 12 17:09:17.921226 containerd[1511]: time="2025-09-12T17:09:17.921175956Z" level=info msg="StartContainer for \"5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381\" returns successfully" Sep 12 17:09:17.925662 systemd[1]: cri-containerd-5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381.scope: Deactivated successfully. Sep 12 17:09:17.946064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381-rootfs.mount: Deactivated successfully. 
Sep 12 17:09:17.950711 containerd[1511]: time="2025-09-12T17:09:17.950638732Z" level=info msg="shim disconnected" id=5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381 namespace=k8s.io Sep 12 17:09:17.950834 containerd[1511]: time="2025-09-12T17:09:17.950710778Z" level=warning msg="cleaning up after shim disconnected" id=5d91e5e3ab91d37f10d8929f651bb142ec7e721d18cefda864ba17792e30f381 namespace=k8s.io Sep 12 17:09:17.950834 containerd[1511]: time="2025-09-12T17:09:17.950729413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:18.687634 kubelet[2633]: E0912 17:09:18.687603 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:18.694383 containerd[1511]: time="2025-09-12T17:09:18.694336811Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:09:18.709135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631467243.mount: Deactivated successfully. Sep 12 17:09:18.710087 containerd[1511]: time="2025-09-12T17:09:18.710053316Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1\"" Sep 12 17:09:18.710750 containerd[1511]: time="2025-09-12T17:09:18.710571336Z" level=info msg="StartContainer for \"6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1\"" Sep 12 17:09:18.742552 systemd[1]: Started cri-containerd-6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1.scope - libcontainer container 6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1. Sep 12 17:09:18.765939 systemd[1]: cri-containerd-6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1.scope: Deactivated successfully. Sep 12 17:09:18.768139 containerd[1511]: time="2025-09-12T17:09:18.768104280Z" level=info msg="StartContainer for \"6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1\" returns successfully" Sep 12 17:09:18.790050 containerd[1511]: time="2025-09-12T17:09:18.789988391Z" level=info msg="shim disconnected" id=6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1 namespace=k8s.io Sep 12 17:09:18.790050 containerd[1511]: time="2025-09-12T17:09:18.790047002Z" level=warning msg="cleaning up after shim disconnected" id=6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1 namespace=k8s.io Sep 12 17:09:18.790050 containerd[1511]: time="2025-09-12T17:09:18.790055338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:09:18.945952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f0cfe995ece0b9a7e9de7fbc28fb957155e4766ab858bd39532f3ad1675bbe1-rootfs.mount: Deactivated successfully. 
Sep 12 17:09:19.390647 kubelet[2633]: E0912 17:09:19.390612 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:19.691251 kubelet[2633]: E0912 17:09:19.691105 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:19.697600 containerd[1511]: time="2025-09-12T17:09:19.697542810Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:09:19.719129 containerd[1511]: time="2025-09-12T17:09:19.719086082Z" level=info msg="CreateContainer within sandbox \"a3eacd9ab675e7b8a0b18b6ed03472faf1825555ab5f19f161f0b62b2bca1941\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef97c018a38d563cb28bc5152fd307161af8f699932098997884ee3f332dc622\"" Sep 12 17:09:19.719746 containerd[1511]: time="2025-09-12T17:09:19.719711806Z" level=info msg="StartContainer for \"ef97c018a38d563cb28bc5152fd307161af8f699932098997884ee3f332dc622\"" Sep 12 17:09:19.758548 systemd[1]: Started cri-containerd-ef97c018a38d563cb28bc5152fd307161af8f699932098997884ee3f332dc622.scope - libcontainer container ef97c018a38d563cb28bc5152fd307161af8f699932098997884ee3f332dc622. Sep 12 17:09:19.787810 containerd[1511]: time="2025-09-12T17:09:19.787760536Z" level=info msg="StartContainer for \"ef97c018a38d563cb28bc5152fd307161af8f699932098997884ee3f332dc622\" returns successfully" Sep 12 17:09:20.221431 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 12 17:09:20.695920 kubelet[2633]: E0912 17:09:20.695778 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:20.709572 kubelet[2633]: I0912 17:09:20.709504 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2cdgb" podStartSLOduration=5.709484932 podStartE2EDuration="5.709484932s" podCreationTimestamp="2025-09-12 17:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:09:20.708968403 +0000 UTC m=+85.396015322" watchObservedRunningTime="2025-09-12 17:09:20.709484932 +0000 UTC m=+85.396531861" Sep 12 17:09:21.983489 kubelet[2633]: E0912 17:09:21.983433 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:23.342304 systemd-networkd[1428]: lxc_health: Link UP Sep 12 17:09:23.353301 systemd-networkd[1428]: lxc_health: Gained carrier Sep 12 17:09:23.985340 kubelet[2633]: E0912 17:09:23.984471 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:24.390471 kubelet[2633]: E0912 17:09:24.390304 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:24.509544 systemd-networkd[1428]: lxc_health: Gained IPv6LL Sep 12 17:09:24.705988 kubelet[2633]: E0912 17:09:24.705842 
2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:25.707274 kubelet[2633]: E0912 17:09:25.707234 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:09:26.341519 systemd[1]: run-containerd-runc-k8s.io-ef97c018a38d563cb28bc5152fd307161af8f699932098997884ee3f332dc622-runc.GsxxuH.mount: Deactivated successfully. Sep 12 17:09:28.494592 sshd[4495]: Connection closed by 10.0.0.1 port 41910 Sep 12 17:09:28.495005 sshd-session[4492]: pam_unix(sshd:session): session closed for user core Sep 12 17:09:28.500339 systemd[1]: sshd@26-10.0.0.50:22-10.0.0.1:41910.service: Deactivated successfully. Sep 12 17:09:28.503468 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:09:28.504258 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:09:28.505174 systemd-logind[1497]: Removed session 27.