Sep 12 17:04:59.936532 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:35:29 -00 2025
Sep 12 17:04:59.936556 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656
Sep 12 17:04:59.936568 kernel: BIOS-provided physical RAM map:
Sep 12 17:04:59.936574 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:04:59.936581 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 17:04:59.936588 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 17:04:59.936596 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 17:04:59.936602 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 17:04:59.936609 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 17:04:59.936616 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 17:04:59.936623 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 12 17:04:59.936632 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 17:04:59.936642 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 17:04:59.936649 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 17:04:59.936661 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 17:04:59.936668 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 17:04:59.936678 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Sep 12 17:04:59.936685 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Sep 12 17:04:59.936692 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Sep 12 17:04:59.936700 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Sep 12 17:04:59.936707 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 17:04:59.936714 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 17:04:59.936721 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:04:59.936728 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:04:59.936736 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 17:04:59.936743 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:04:59.936750 kernel: NX (Execute Disable) protection: active
Sep 12 17:04:59.936760 kernel: APIC: Static calls initialized
Sep 12 17:04:59.936767 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Sep 12 17:04:59.936775 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Sep 12 17:04:59.936782 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Sep 12 17:04:59.936789 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Sep 12 17:04:59.936796 kernel: extended physical RAM map:
Sep 12 17:04:59.936803 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:04:59.936810 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 17:04:59.936817 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 17:04:59.936824 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 17:04:59.936831 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 17:04:59.936838 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 12 17:04:59.936848 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 12 17:04:59.936859 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Sep 12 17:04:59.936867 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Sep 12 17:04:59.936874 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Sep 12 17:04:59.936882 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Sep 12 17:04:59.936889 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Sep 12 17:04:59.936902 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 12 17:04:59.936909 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 12 17:04:59.936917 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 12 17:04:59.936924 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 12 17:04:59.936932 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 17:04:59.936939 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Sep 12 17:04:59.936947 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Sep 12 17:04:59.936954 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Sep 12 17:04:59.936961 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Sep 12 17:04:59.936971 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 12 17:04:59.936978 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 17:04:59.936986 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:04:59.936993 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:04:59.937003 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 12 17:04:59.937010 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:04:59.937018 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:04:59.937026 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Sep 12 17:04:59.937033 kernel: random: crng init done
Sep 12 17:04:59.937041 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 12 17:04:59.937048 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 12 17:04:59.937058 kernel: secureboot: Secure boot disabled
Sep 12 17:04:59.937068 kernel: SMBIOS 2.8 present.
Sep 12 17:04:59.937075 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 17:04:59.937083 kernel: Hypervisor detected: KVM
Sep 12 17:04:59.937090 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:04:59.937097 kernel: kvm-clock: using sched offset of 3358997268 cycles
Sep 12 17:04:59.937105 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:04:59.937113 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 17:04:59.937129 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:04:59.937137 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:04:59.937145 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 12 17:04:59.937155 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:04:59.937163 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:04:59.937171 kernel: Using GB pages for direct mapping
Sep 12 17:04:59.937178 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:04:59.937186 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 12 17:04:59.937194 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:04:59.937201 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:04:59.937209 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:04:59.937217 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 12 17:04:59.937227 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:04:59.937235 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:04:59.937242 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:04:59.937250 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:04:59.937258 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:04:59.937265 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 12 17:04:59.937273 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 12 17:04:59.937281 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 12 17:04:59.937288 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 12 17:04:59.937298 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 12 17:04:59.937306 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 12 17:04:59.937313 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 12 17:04:59.937321 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 12 17:04:59.937328 kernel: No NUMA configuration found
Sep 12 17:04:59.937336 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 12 17:04:59.937356 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Sep 12 17:04:59.937364 kernel: Zone ranges:
Sep 12 17:04:59.937371 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:04:59.937382 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cedbfff]
Sep 12 17:04:59.937389 kernel:   Normal   empty
Sep 12 17:04:59.937400 kernel: Movable zone start for each node
Sep 12 17:04:59.937408 kernel: Early memory node ranges
Sep 12 17:04:59.937415 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:04:59.937423 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 12 17:04:59.937431 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Sep 12 17:04:59.937438 kernel:   node   0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 12 17:04:59.937446 kernel:   node   0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 12 17:04:59.937453 kernel:   node   0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 12 17:04:59.937463 kernel:   node   0: [mem 0x000000009cbff000-0x000000009ce91fff]
Sep 12 17:04:59.937471 kernel:   node   0: [mem 0x000000009ce98000-0x000000009cedbfff]
Sep 12 17:04:59.937478 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 12 17:04:59.937486 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:04:59.937494 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:04:59.937510 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 12 17:04:59.937520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:04:59.937528 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 12 17:04:59.937536 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 12 17:04:59.937544 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 17:04:59.937554 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 17:04:59.937565 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 12 17:04:59.937573 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:04:59.937581 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:04:59.937604 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:04:59.937612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:04:59.937623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:04:59.937631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:04:59.937639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:04:59.937647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:04:59.937654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:04:59.937662 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:04:59.937670 kernel: TSC deadline timer available
Sep 12 17:04:59.937678 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 17:04:59.937686 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:04:59.937696 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:04:59.937704 kernel: kvm-guest: setup PV sched yield
Sep 12 17:04:59.937712 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 17:04:59.937720 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:04:59.937728 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:04:59.937736 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:04:59.937744 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 17:04:59.937752 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 17:04:59.937759 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:04:59.937770 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:04:59.937778 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:04:59.937787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656
Sep 12 17:04:59.937795 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:04:59.937803 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:04:59.937814 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:04:59.937822 kernel: Fallback order for Node 0: 0
Sep 12 17:04:59.937830 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Sep 12 17:04:59.937838 kernel: Policy zone: DMA32
Sep 12 17:04:59.937849 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:04:59.937858 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22872K rodata, 43520K init, 1556K bss, 177824K reserved, 0K cma-reserved)
Sep 12 17:04:59.937866 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:04:59.937874 kernel: ftrace: allocating 37948 entries in 149 pages
Sep 12 17:04:59.937881 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:04:59.937890 kernel: Dynamic Preempt: voluntary
Sep 12 17:04:59.937912 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:04:59.937921 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:04:59.937932 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:04:59.937941 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:04:59.937949 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:04:59.937957 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:04:59.937965 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:04:59.937973 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:04:59.937981 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:04:59.937989 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:04:59.937997 kernel: Console: colour dummy device 80x25
Sep 12 17:04:59.938005 kernel: printk: console [ttyS0] enabled
Sep 12 17:04:59.938015 kernel: ACPI: Core revision 20230628
Sep 12 17:04:59.938023 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:04:59.938032 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:04:59.938039 kernel: x2apic enabled
Sep 12 17:04:59.938047 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:04:59.938059 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:04:59.938067 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:04:59.938075 kernel: kvm-guest: setup PV IPIs
Sep 12 17:04:59.938083 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:04:59.938094 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:04:59.938102 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 17:04:59.938110 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:04:59.938118 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:04:59.938134 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:04:59.938142 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:04:59.938150 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:04:59.938158 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:04:59.938166 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:04:59.938177 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:04:59.938185 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:04:59.938193 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:04:59.938201 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:04:59.938209 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:04:59.938218 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:04:59.938229 kernel: active return thunk: srso_return_thunk
Sep 12 17:04:59.938237 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:04:59.938248 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:04:59.938256 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:04:59.938264 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:04:59.938272 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:04:59.938280 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:04:59.938288 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:04:59.938296 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:04:59.938304 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:04:59.938312 kernel: landlock: Up and running.
Sep 12 17:04:59.938322 kernel: SELinux: Initializing.
Sep 12 17:04:59.938330 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:04:59.938338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:04:59.938357 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:04:59.938365 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:04:59.938373 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:04:59.938381 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:04:59.938390 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:04:59.938397 kernel: ... version:                0
Sep 12 17:04:59.938408 kernel: ... bit width:              48
Sep 12 17:04:59.938416 kernel: ... generic registers:      6
Sep 12 17:04:59.938424 kernel: ... value mask:             0000ffffffffffff
Sep 12 17:04:59.938432 kernel: ... max period:             00007fffffffffff
Sep 12 17:04:59.938440 kernel: ... fixed-purpose events:   0
Sep 12 17:04:59.938447 kernel: ... event mask:             000000000000003f
Sep 12 17:04:59.938455 kernel: signal: max sigframe size: 1776
Sep 12 17:04:59.938463 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:04:59.938471 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:04:59.938481 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:04:59.938489 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:04:59.938497 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:04:59.938505 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:04:59.938512 kernel: smpboot: Max logical packages: 1
Sep 12 17:04:59.938520 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 17:04:59.938529 kernel: devtmpfs: initialized
Sep 12 17:04:59.938536 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:04:59.938544 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 12 17:04:59.938555 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 12 17:04:59.938563 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 12 17:04:59.938571 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 12 17:04:59.938579 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Sep 12 17:04:59.938587 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 12 17:04:59.938595 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:04:59.938603 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:04:59.938611 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:04:59.938619 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:04:59.938629 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:04:59.938637 kernel: audit: type=2000 audit(1757696700.331:1): state=initialized audit_enabled=0 res=1
Sep 12 17:04:59.938645 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:04:59.938653 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:04:59.938661 kernel: cpuidle: using governor menu
Sep 12 17:04:59.938669 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:04:59.938677 kernel: dca service started, version 1.12.1
Sep 12 17:04:59.938685 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 12 17:04:59.938693 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:04:59.938703 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:04:59.938711 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:04:59.938719 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:04:59.938727 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:04:59.938735 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:04:59.938743 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:04:59.938751 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:04:59.938759 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:04:59.938767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:04:59.938777 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:04:59.938785 kernel: ACPI: Interpreter enabled
Sep 12 17:04:59.938793 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:04:59.938801 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:04:59.938809 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:04:59.938816 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:04:59.938824 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:04:59.938832 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:04:59.939056 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:04:59.939217 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:04:59.939374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:04:59.939386 kernel: PCI host bridge to bus 0000:00
Sep 12 17:04:59.939533 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:04:59.939654 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:04:59.939774 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:04:59.939898 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 17:04:59.940016 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 17:04:59.940147 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 17:04:59.940306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:04:59.940539 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:04:59.940689 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 17:04:59.940833 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 12 17:04:59.940966 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 12 17:04:59.941098 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 17:04:59.941240 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 12 17:04:59.941392 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:04:59.941554 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:04:59.941690 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 12 17:04:59.941834 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 12 17:04:59.941965 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 17:04:59.942117 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:04:59.942262 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 12 17:04:59.942418 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 12 17:04:59.942553 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 17:04:59.942700 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:04:59.942845 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 12 17:04:59.942978 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 12 17:04:59.943109 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 17:04:59.943253 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 12 17:04:59.943419 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:04:59.943554 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:04:59.943704 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:04:59.943846 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 12 17:04:59.943981 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 12 17:04:59.944139 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:04:59.944274 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 12 17:04:59.944285 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:04:59.944294 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:04:59.944302 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:04:59.944314 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:04:59.944322 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:04:59.944330 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:04:59.944339 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:04:59.944359 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:04:59.944367 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:04:59.944376 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:04:59.944384 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:04:59.944392 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:04:59.944403 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:04:59.944411 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:04:59.944419 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:04:59.944428 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:04:59.944436 kernel: iommu: Default domain type: Translated
Sep 12 17:04:59.944444 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:04:59.944453 kernel: efivars: Registered efivars operations
Sep 12 17:04:59.944461 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:04:59.944469 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:04:59.944479 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 12 17:04:59.944487 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 12 17:04:59.944495 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Sep 12 17:04:59.944503 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Sep 12 17:04:59.944511 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 12 17:04:59.944519 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 12 17:04:59.944527 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Sep 12 17:04:59.944535 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 12 17:04:59.944670 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:04:59.944812 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:04:59.944945 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:04:59.944956 kernel: vgaarb: loaded
Sep 12 17:04:59.944964 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:04:59.944972 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:04:59.944981 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:04:59.944989 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:04:59.944997 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:04:59.945009 kernel: pnp: PnP ACPI init
Sep 12 17:04:59.945181 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 17:04:59.945195 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:04:59.945203 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:04:59.945212 kernel: NET: Registered PF_INET protocol family
Sep 12 17:04:59.945242 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:04:59.945253 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:04:59.945261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:04:59.945272 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:04:59.945281 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:04:59.945289 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:04:59.945298 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:04:59.945306 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:04:59.945314 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:04:59.945322 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:04:59.945484 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 12 17:04:59.945620 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 12 17:04:59.945749 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:04:59.945892 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:04:59.946013 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:04:59.946144 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 17:04:59.946265 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 17:04:59.946436 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 17:04:59.946449 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:04:59.946458 kernel: Initialise system trusted keyrings
Sep 12 17:04:59.946471 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:04:59.946479 kernel: Key type asymmetric registered
Sep 12 17:04:59.946487 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:04:59.946496 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:04:59.946505 kernel: io scheduler mq-deadline registered
Sep 12 17:04:59.946513 kernel: io scheduler kyber registered
Sep 12 17:04:59.946521 kernel: io scheduler bfq registered
Sep 12 17:04:59.946529 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:04:59.946539 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:04:59.946550 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:04:59.946561 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:04:59.946570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:04:59.946579 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:04:59.946587 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:04:59.946595 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:04:59.946606 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:04:59.946761 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:04:59.946886 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:04:59.946898 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:04:59.947018 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:04:59 UTC (1757696699)
Sep 12 17:04:59.947149 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:04:59.947161 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:04:59.947174 kernel: efifb: probing for efifb
Sep 12 17:04:59.947183 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 17:04:59.947191 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 17:04:59.947200 kernel: efifb: scrolling: redraw
Sep 12 17:04:59.947208 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:04:59.947216 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 17:04:59.947225 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:04:59.947233 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:04:59.947242 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:04:59.947253 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:04:59.947261 kernel: Segment Routing with IPv6
Sep 12 17:04:59.947270 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:04:59.947278 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:04:59.947287 kernel: Key type dns_resolver registered
Sep 12 17:04:59.947295 kernel: IPI shorthand broadcast: enabled
Sep 12 17:04:59.947303 kernel: sched_clock: Marking stable (1029003465, 145116061)->(1189493630, -15374104)
Sep 12 17:04:59.947311 kernel: registered taskstats version 1
Sep 12 17:04:59.947320 kernel: Loading compiled-in X.509 certificates
Sep 12 17:04:59.947329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d1d9e065fdbec39026aa56a07626d6d91ab4fce4'
Sep 12 17:04:59.947354 kernel: Key type .fscrypt registered
Sep 12 17:04:59.947363 kernel: Key type fscrypt-provisioning registered
Sep 12 17:04:59.947372 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:04:59.947380 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:04:59.947388 kernel: ima: No architecture policies found
Sep 12 17:04:59.947397 kernel: clk: Disabling unused clocks
Sep 12 17:04:59.947405 kernel: Freeing unused kernel image (initmem) memory: 43520K
Sep 12 17:04:59.947414 kernel: Write protecting the kernel read-only data: 38912k
Sep 12 17:04:59.947425 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Sep 12 17:04:59.947433 kernel: Run /init as init process
Sep 12 17:04:59.947441 kernel: with arguments:
Sep 12 17:04:59.947450 kernel: /init
Sep 12 17:04:59.947458 kernel: with environment:
Sep 12 17:04:59.947466 kernel: HOME=/
Sep 12 17:04:59.947475 kernel: TERM=linux
Sep 12 17:04:59.947483 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:04:59.947493 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:04:59.947508 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:04:59.947518 systemd[1]: Detected virtualization kvm.
Sep 12 17:04:59.947527 systemd[1]: Detected architecture x86-64.
Sep 12 17:04:59.947536 systemd[1]: Running in initrd.
Sep 12 17:04:59.947545 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:04:59.947554 systemd[1]: Hostname set to .
Sep 12 17:04:59.947563 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:04:59.947574 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:04:59.947583 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:04:59.947592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:04:59.947602 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:04:59.947611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:04:59.947620 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:04:59.947630 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:04:59.947643 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:04:59.947653 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:04:59.947662 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:04:59.947671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:04:59.947680 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:04:59.947689 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:04:59.947698 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:04:59.947707 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:04:59.947715 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:04:59.947727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:04:59.947736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:04:59.947745 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:04:59.947754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:04:59.947763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:04:59.947773 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:04:59.947782 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:04:59.947791 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:04:59.947802 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:04:59.947811 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:04:59.947820 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:04:59.947829 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:04:59.947838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:04:59.947847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:04:59.947856 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:04:59.947865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:04:59.947877 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:04:59.947887 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:04:59.947926 systemd-journald[194]: Collecting audit messages is disabled.
Sep 12 17:04:59.947951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:04:59.947961 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:04:59.947971 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:04:59.947980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:04:59.947990 systemd-journald[194]: Journal started
Sep 12 17:04:59.948011 systemd-journald[194]: Runtime Journal (/run/log/journal/6c95ac27258441f3a956488a1d0bab9e) is 6M, max 48.2M, 42.2M free.
Sep 12 17:04:59.933188 systemd-modules-load[195]: Inserted module 'overlay'
Sep 12 17:04:59.949362 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:04:59.949934 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:04:59.962363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:04:59.964459 kernel: Bridge firewalling registered
Sep 12 17:04:59.963594 systemd-modules-load[195]: Inserted module 'br_netfilter'
Sep 12 17:04:59.964906 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:04:59.972564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:04:59.975006 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:04:59.977621 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:04:59.980200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:04:59.982802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:04:59.997625 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:05:00.001463 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:05:00.013325 dracut-cmdline[229]: dracut-dracut-053
Sep 12 17:05:00.017259 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656
Sep 12 17:05:00.043115 systemd-resolved[231]: Positive Trust Anchors:
Sep 12 17:05:00.043143 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:05:00.043175 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:05:00.045750 systemd-resolved[231]: Defaulting to hostname 'linux'.
Sep 12 17:05:00.046986 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:05:00.052607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:05:00.120388 kernel: SCSI subsystem initialized
Sep 12 17:05:00.129373 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:05:00.139373 kernel: iscsi: registered transport (tcp)
Sep 12 17:05:00.162393 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:05:00.162488 kernel: QLogic iSCSI HBA Driver
Sep 12 17:05:00.216533 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:05:00.229524 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:05:00.257017 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:05:00.257079 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:05:00.257093 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:05:00.300383 kernel: raid6: avx2x4 gen() 30221 MB/s
Sep 12 17:05:00.317381 kernel: raid6: avx2x2 gen() 31472 MB/s
Sep 12 17:05:00.334442 kernel: raid6: avx2x1 gen() 25763 MB/s
Sep 12 17:05:00.334552 kernel: raid6: using algorithm avx2x2 gen() 31472 MB/s
Sep 12 17:05:00.352502 kernel: raid6: .... xor() 18596 MB/s, rmw enabled
Sep 12 17:05:00.352654 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:05:00.375381 kernel: xor: automatically using best checksumming function avx
Sep 12 17:05:00.587398 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:05:00.601774 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:05:00.619523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:05:00.636500 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Sep 12 17:05:00.643357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:05:00.652536 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:05:00.667334 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 12 17:05:00.707279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:05:00.732629 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:05:00.804825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:05:00.813044 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:05:00.834410 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:05:00.837591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:05:00.840136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:05:00.841298 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:05:00.850631 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 17:05:00.851556 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:05:00.854424 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:05:00.852541 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:05:00.862849 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:05:00.862878 kernel: GPT:9289727 != 19775487
Sep 12 17:05:00.862890 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:05:00.862901 kernel: GPT:9289727 != 19775487
Sep 12 17:05:00.862911 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:05:00.862928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:05:00.867040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:05:00.877417 kernel: libata version 3.00 loaded.
Sep 12 17:05:00.883028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:05:00.883208 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:05:00.887034 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:05:00.893032 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:05:00.893281 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:05:00.893296 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:05:00.893468 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:05:00.890279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:05:00.898413 kernel: scsi host0: ahci
Sep 12 17:05:00.906767 kernel: scsi host1: ahci
Sep 12 17:05:00.915503 kernel: scsi host2: ahci
Sep 12 17:05:00.915673 kernel: scsi host3: ahci
Sep 12 17:05:00.915845 kernel: scsi host4: ahci
Sep 12 17:05:00.916010 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (474)
Sep 12 17:05:00.916023 kernel: BTRFS: device fsid 8328a8c6-e42c-42bb-93d2-f755d7523d53 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (472)
Sep 12 17:05:00.916035 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:05:00.916046 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:05:00.890452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:00.895930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:00.905653 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:00.925711 kernel: scsi host5: ahci
Sep 12 17:05:00.925899 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 12 17:05:00.925912 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 12 17:05:00.927443 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 12 17:05:00.927475 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 12 17:05:00.929036 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 12 17:05:00.929053 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 12 17:05:00.933236 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:05:00.933725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:00.966401 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:05:00.976252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:05:00.983909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:05:00.983977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:05:01.001466 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:05:01.001535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:05:01.001588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:01.005672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:01.006546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:01.008816 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:05:01.018688 disk-uuid[559]: Primary Header is updated.
Sep 12 17:05:01.018688 disk-uuid[559]: Secondary Entries is updated.
Sep 12 17:05:01.018688 disk-uuid[559]: Secondary Header is updated.
Sep 12 17:05:01.023398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:05:01.027370 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:05:01.028949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:01.037522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:05:01.060581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:05:01.238436 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:05:01.238510 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 17:05:01.238522 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:05:01.240386 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:05:01.240416 kernel: ata3.00: applying bridge limits
Sep 12 17:05:01.241378 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:05:01.241469 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:05:01.242382 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:05:01.243370 kernel: ata3.00: configured for UDMA/100
Sep 12 17:05:01.245375 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:05:01.287866 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:05:01.288105 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:05:01.302378 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:05:02.028386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:05:02.028883 disk-uuid[562]: The operation has completed successfully.
Sep 12 17:05:02.064488 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:05:02.064624 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:05:02.114551 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:05:02.118442 sh[599]: Success
Sep 12 17:05:02.131545 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:05:02.171947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:05:02.192225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:05:02.194773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:05:02.207503 kernel: BTRFS info (device dm-0): first mount of filesystem 8328a8c6-e42c-42bb-93d2-f755d7523d53
Sep 12 17:05:02.207546 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:05:02.207558 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:05:02.209771 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:05:02.209790 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:05:02.214333 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:05:02.214991 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:05:02.222491 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:05:02.224133 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:05:02.242834 kernel: BTRFS info (device vda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d
Sep 12 17:05:02.242896 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:05:02.242908 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:05:02.246370 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:05:02.251371 kernel: BTRFS info (device vda6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d
Sep 12 17:05:02.256636 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:05:02.267534 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:05:02.356847 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:05:02.365581 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:05:02.370940 ignition[688]: Ignition 2.20.0
Sep 12 17:05:02.371290 ignition[688]: Stage: fetch-offline
Sep 12 17:05:02.371329 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:02.371339 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:05:02.372040 ignition[688]: parsed url from cmdline: ""
Sep 12 17:05:02.372045 ignition[688]: no config URL provided
Sep 12 17:05:02.372051 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:05:02.372071 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:05:02.372098 ignition[688]: op(1): [started] loading QEMU firmware config module
Sep 12 17:05:02.372104 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:05:02.381042 ignition[688]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:05:02.381082 ignition[688]: QEMU firmware config was not found. Ignoring...
Sep 12 17:05:02.400824 systemd-networkd[782]: lo: Link UP
Sep 12 17:05:02.400834 systemd-networkd[782]: lo: Gained carrier
Sep 12 17:05:02.402655 systemd-networkd[782]: Enumeration completed
Sep 12 17:05:02.402767 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:05:02.403040 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:05:02.403045 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:05:02.404090 systemd-networkd[782]: eth0: Link UP
Sep 12 17:05:02.404094 systemd-networkd[782]: eth0: Gained carrier
Sep 12 17:05:02.404101 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:05:02.404328 systemd[1]: Reached target network.target - Network.
Sep 12 17:05:02.419407 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:05:02.432970 ignition[688]: parsing config with SHA512: c638e83f8edc132969d7acb77c678db6cbc309dbe7f5a1b6300d6ad6cd779926b3387b13f3e2e04d7b44accbb1cd54eb2b6286015e8f57dab215c3ee7ea9911e
Sep 12 17:05:02.457864 unknown[688]: fetched base config from "system"
Sep 12 17:05:02.457878 unknown[688]: fetched user config from "qemu"
Sep 12 17:05:02.458294 ignition[688]: fetch-offline: fetch-offline passed
Sep 12 17:05:02.458398 ignition[688]: Ignition finished successfully
Sep 12 17:05:02.461090 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:05:02.462622 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:05:02.468586 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:05:02.486358 ignition[790]: Ignition 2.20.0
Sep 12 17:05:02.486368 ignition[790]: Stage: kargs
Sep 12 17:05:02.486523 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:02.486535 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:05:02.487375 ignition[790]: kargs: kargs passed
Sep 12 17:05:02.487418 ignition[790]: Ignition finished successfully
Sep 12 17:05:02.493710 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:05:02.502589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:05:02.521427 ignition[799]: Ignition 2.20.0
Sep 12 17:05:02.521439 ignition[799]: Stage: disks
Sep 12 17:05:02.521586 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:02.521598 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:05:02.522421 ignition[799]: disks: disks passed
Sep 12 17:05:02.522463 ignition[799]: Ignition finished successfully
Sep 12 17:05:02.526530 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:05:02.529128 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:05:02.531233 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:05:02.533513 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:05:02.535626 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:05:02.537527 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:05:02.552552 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:05:02.566516 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:05:02.573262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:05:02.583482 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:05:02.676384 kernel: EXT4-fs (vda9): mounted filesystem 5378802a-8117-4ea8-949a-cd38005ba44a r/w with ordered data mode. Quota mode: none.
Sep 12 17:05:02.677147 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:05:02.677903 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:05:02.699431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:05:02.701310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:05:02.702465 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:05:02.702509 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:05:02.713075 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (818)
Sep 12 17:05:02.713113 kernel: BTRFS info (device vda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d
Sep 12 17:05:02.713126 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:05:02.713138 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:05:02.702534 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:05:02.716380 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:05:02.709429 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:05:02.713952 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:05:02.718230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:05:02.753294 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:05:02.757291 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:05:02.762431 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:05:02.767315 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:05:02.875625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:05:02.887467 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:05:02.891172 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:05:02.897388 kernel: BTRFS info (device vda6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d
Sep 12 17:05:02.914695 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.33
Sep 12 17:05:02.914711 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Sep 12 17:05:02.922477 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:05:02.928803 ignition[932]: INFO : Ignition 2.20.0
Sep 12 17:05:02.928803 ignition[932]: INFO : Stage: mount
Sep 12 17:05:02.930521 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:02.930521 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:05:02.933078 ignition[932]: INFO : mount: mount passed
Sep 12 17:05:02.933974 ignition[932]: INFO : Ignition finished successfully
Sep 12 17:05:02.936858 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:05:02.952500 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:05:03.207491 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:05:03.219565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:05:03.227384 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (945)
Sep 12 17:05:03.229459 kernel: BTRFS info (device vda6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d
Sep 12 17:05:03.229490 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:05:03.229505 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:05:03.232372 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:05:03.234183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:05:03.254351 ignition[962]: INFO : Ignition 2.20.0
Sep 12 17:05:03.254351 ignition[962]: INFO : Stage: files
Sep 12 17:05:03.256581 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:03.256581 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:05:03.256581 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:05:03.256581 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:05:03.256581 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:05:03.263620 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:05:03.263620 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:05:03.263620 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:05:03.263620 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:05:03.263620 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:05:03.259568 unknown[962]: wrote ssh authorized keys file for user: core
Sep 12 17:05:03.314701 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:05:03.479617 systemd-networkd[782]: eth0: Gained IPv6LL
Sep 12 17:05:03.603992 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:05:03.606237 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:05:03.606237 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 17:05:03.885438 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:05:04.190434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:05:04.190434 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:05:04.193877 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:05:04.195465 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:05:04.197424 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:05:04.199322 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:05:04.201391 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:05:04.203336 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:05:04.205475 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:05:04.207753 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:05:04.209845 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:05:04.211846 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:05:04.215039 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:05:04.217787 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:05:04.220269 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:05:04.702387 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:05:06.023428 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:05:06.023428 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:05:06.027371 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 17:05:06.029169 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:05:06.050077 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:05:06.056098 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:05:06.057858 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:05:06.059235 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:05:06.059235 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:05:06.062079 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:05:06.063806 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:05:06.065430 ignition[962]: INFO : files: files passed
Sep 12 17:05:06.066138 ignition[962]: INFO : Ignition finished successfully
Sep 12 17:05:06.068958 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:05:06.077610 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:05:06.080977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:05:06.083628 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:05:06.084607 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:05:06.099513 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:05:06.103761 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:05:06.103761 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:05:06.106990 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:05:06.111277 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:05:06.111847 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:05:06.123520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:05:06.148622 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:05:06.148780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:05:06.151172 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:05:06.153234 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:05:06.155122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:05:06.157815 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:05:06.176863 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:05:06.186609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:05:06.200020 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:05:06.202425 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:05:06.204744 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:05:06.206579 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:05:06.207581 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:05:06.210066 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:05:06.212139 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:05:06.213889 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:05:06.216003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:05:06.218364 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:05:06.220528 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:05:06.222510 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:05:06.224868 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:05:06.226852 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:05:06.228839 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:05:06.230415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:05:06.231411 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:05:06.233660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:05:06.235740 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:05:06.238000 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:05:06.239010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:05:06.241487 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:05:06.242465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:05:06.244647 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:05:06.245716 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:05:06.247999 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:05:06.249691 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:05:06.250751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:05:06.253407 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:05:06.255169 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:05:06.257001 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:05:06.257853 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:05:06.259742 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:05:06.260598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:05:06.262608 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:05:06.263774 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:05:06.266234 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:05:06.267211 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:05:06.283626 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:05:06.285599 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:05:06.286613 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:05:06.289994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:05:06.290943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:05:06.291086 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:05:06.292403 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:05:06.293390 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:05:06.299238 ignition[1017]: INFO : Ignition 2.20.0
Sep 12 17:05:06.301194 ignition[1017]: INFO : Stage: umount
Sep 12 17:05:06.301577 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:05:06.303691 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:05:06.305143 ignition[1017]: INFO : umount: umount passed
Sep 12 17:05:06.305143 ignition[1017]: INFO : Ignition finished successfully
Sep 12 17:05:06.306880 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:05:06.307040 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:05:06.311904 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:05:06.312047 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:05:06.314756 systemd[1]: Stopped target network.target - Network.
Sep 12 17:05:06.316532 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:05:06.316599 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:05:06.318710 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:05:06.318762 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:05:06.320628 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:05:06.320692 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:05:06.322580 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:05:06.322631 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:05:06.324752 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:05:06.326776 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:05:06.331686 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:05:06.336792 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:05:06.336929 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:05:06.340879 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:05:06.341136 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:05:06.341254 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:05:06.346033 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:05:06.347195 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:05:06.347245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:05:06.356431 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:05:06.357439 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:05:06.357511 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:05:06.359508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:05:06.359561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:05:06.361585 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:05:06.361637 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:05:06.363857 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:05:06.363907 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:05:06.366092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:05:06.369454 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:05:06.369548 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:05:06.377696 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:05:06.377840 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:05:06.391722 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:05:06.391965 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:05:06.394287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:05:06.394364 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:05:06.396098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:05:06.396146 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:05:06.398004 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:05:06.398072 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:05:06.400075 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:05:06.400135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:05:06.401973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:05:06.402038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:05:06.414620 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:05:06.415725 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:05:06.415821 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:05:06.419216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:05:06.419290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:06.423652 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:05:06.423734 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:05:06.434257 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:05:06.434460 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:05:06.555428 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:05:06.555600 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:05:06.557749 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:05:06.559283 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:05:06.559379 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:05:06.571603 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:05:06.581204 systemd[1]: Switching root.
Sep 12 17:05:06.614299 systemd-journald[194]: Journal stopped
Sep 12 17:05:07.912842 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:05:07.912927 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:05:07.912948 kernel: SELinux: policy capability open_perms=1
Sep 12 17:05:07.912961 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:05:07.912977 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:05:07.912989 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:05:07.913001 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:05:07.913013 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:05:07.913029 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:05:07.913047 kernel: audit: type=1403 audit(1757696707.076:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:05:07.913061 systemd[1]: Successfully loaded SELinux policy in 41.222ms.
Sep 12 17:05:07.913093 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.856ms.
Sep 12 17:05:07.913107 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:05:07.913130 systemd[1]: Detected virtualization kvm.
Sep 12 17:05:07.913143 systemd[1]: Detected architecture x86-64.
Sep 12 17:05:07.913155 systemd[1]: Detected first boot.
Sep 12 17:05:07.913178 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:05:07.913191 zram_generator::config[1064]: No configuration found.
Sep 12 17:05:07.913208 kernel: Guest personality initialized and is inactive
Sep 12 17:05:07.913220 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 17:05:07.913232 kernel: Initialized host personality
Sep 12 17:05:07.913247 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:05:07.913259 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:05:07.913272 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:05:07.913285 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:05:07.913298 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:05:07.913311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:05:07.913324 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:05:07.913337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:05:07.913365 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:05:07.913381 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:05:07.913394 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:05:07.913407 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:05:07.913420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:05:07.913433 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:05:07.913446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:05:07.913459 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:05:07.913480 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:05:07.913496 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:05:07.913518 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:05:07.913536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:05:07.913549 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:05:07.913568 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:05:07.913581 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:05:07.913594 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:05:07.913606 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:05:07.913622 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:05:07.913635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:05:07.913648 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:05:07.913661 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:05:07.913674 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:05:07.913686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:05:07.913700 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:05:07.913713 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:05:07.913726 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:05:07.913741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:05:07.913754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:05:07.913766 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:05:07.913788 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:05:07.913803 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:05:07.913818 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:05:07.913831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:05:07.913843 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:05:07.913856 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:05:07.913872 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:05:07.913886 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:05:07.913898 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:05:07.913919 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:05:07.913932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:05:07.913945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:05:07.913958 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:05:07.913972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:05:07.913988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:05:07.914001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:05:07.914014 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:05:07.914027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:05:07.914041 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:05:07.914054 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:05:07.914067 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:05:07.914164 kernel: fuse: init (API version 7.39)
Sep 12 17:05:07.914176 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:05:07.914192 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:05:07.914206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:05:07.914218 kernel: loop: module loaded
Sep 12 17:05:07.914231 kernel: ACPI: bus type drm_connector registered
Sep 12 17:05:07.914243 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:05:07.914256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:05:07.914269 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:05:07.914281 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:05:07.914294 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:05:07.914330 systemd-journald[1135]: Collecting audit messages is disabled.
Sep 12 17:05:07.914392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:05:07.914406 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:05:07.914423 systemd[1]: Stopped verity-setup.service.
Sep 12 17:05:07.914436 systemd-journald[1135]: Journal started
Sep 12 17:05:07.914459 systemd-journald[1135]: Runtime Journal (/run/log/journal/6c95ac27258441f3a956488a1d0bab9e) is 6M, max 48.2M, 42.2M free.
Sep 12 17:05:07.683769 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:05:07.694620 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:05:07.695137 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:05:07.919381 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:05:07.922138 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:05:07.924391 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:05:07.926875 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:05:07.928430 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:05:07.929730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:05:07.930972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:05:07.932205 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:05:07.933579 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:05:07.935302 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:05:07.937021 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:05:07.937256 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:05:07.939040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:05:07.939359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:05:07.941169 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:05:07.941411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:05:07.942802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:05:07.943048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:05:07.944605 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:05:07.944830 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:05:07.946351 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:05:07.946587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:05:07.948211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:05:07.950254 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:05:07.952243 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:05:07.954539 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:05:07.975523 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:05:07.984514 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:05:07.987522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:05:07.988638 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:05:07.988687 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:05:07.990723 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:05:07.993160 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:05:07.998846 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:05:08.000965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:05:08.002771 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:05:08.005190 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:05:08.006444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:05:08.010611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:05:08.011960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:05:08.014588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:05:08.021254 systemd-journald[1135]: Time spent on flushing to /var/log/journal/6c95ac27258441f3a956488a1d0bab9e is 29.755ms for 1063 entries.
Sep 12 17:05:08.021254 systemd-journald[1135]: System Journal (/var/log/journal/6c95ac27258441f3a956488a1d0bab9e) is 8M, max 195.6M, 187.6M free.
Sep 12 17:05:08.101270 systemd-journald[1135]: Received client request to flush runtime journal.
Sep 12 17:05:08.101334 kernel: loop0: detected capacity change from 0 to 221472
Sep 12 17:05:08.101380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:05:08.101401 kernel: loop1: detected capacity change from 0 to 147912
Sep 12 17:05:08.021716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:05:08.025432 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:05:08.029427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:05:08.054629 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:05:08.056893 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:05:08.059748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:05:08.061429 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:05:08.074418 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:05:08.087568 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:05:08.098584 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:05:08.100757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:05:08.107673 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:05:08.114184 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:05:08.235259 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:05:08.237520 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:05:08.245884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:05:08.293382 kernel: loop2: detected capacity change from 0 to 138176
Sep 12 17:05:08.308650 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Sep 12 17:05:08.308670 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Sep 12 17:05:08.317487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:05:08.336382 kernel: loop3: detected capacity change from 0 to 221472
Sep 12 17:05:08.351395 kernel: loop4: detected capacity change from 0 to 147912
Sep 12 17:05:08.372650 kernel: loop5: detected capacity change from 0 to 138176
Sep 12 17:05:08.381541 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:05:08.382213 (sd-merge)[1208]: Merged extensions into '/usr'.
Sep 12 17:05:08.428839 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:05:08.428871 systemd[1]: Reloading...
Sep 12 17:05:08.499399 zram_generator::config[1234]: No configuration found.
Sep 12 17:05:08.657213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:05:08.683937 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:05:08.729806 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:05:08.730295 systemd[1]: Reloading finished in 300 ms.
Sep 12 17:05:08.751643 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:05:08.753405 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:05:08.769081 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:05:08.771270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:05:08.784213 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:05:08.784229 systemd[1]: Reloading...
Sep 12 17:05:08.822419 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:05:08.822753 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:05:08.823873 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:05:08.824167 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 12 17:05:08.824252 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Sep 12 17:05:08.828989 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:05:08.829088 systemd-tmpfiles[1274]: Skipping /boot
Sep 12 17:05:08.852988 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:05:08.853336 systemd-tmpfiles[1274]: Skipping /boot
Sep 12 17:05:08.903201 zram_generator::config[1303]: No configuration found.
Sep 12 17:05:09.034077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:05:09.111245 systemd[1]: Reloading finished in 326 ms.
Sep 12 17:05:09.130033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:05:09.148764 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:05:09.159178 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:05:09.162647 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:05:09.165522 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:05:09.172460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:05:09.175727 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:05:09.179462 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:05:09.184060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:05:09.184271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:05:09.186216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:05:09.189327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:05:09.197487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:05:09.198730 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:05:09.198846 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:05:09.200832 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:05:09.201913 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:05:09.203324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:05:09.203611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:05:09.205373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:05:09.205611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:05:09.211313 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:05:09.211902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:05:09.224958 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:05:09.227383 systemd-udevd[1345]: Using default interface naming scheme 'v255'.
Sep 12 17:05:09.228719 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:05:09.236817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:05:09.237059 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:05:09.249741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:05:09.255666 augenrules[1377]: No rules
Sep 12 17:05:09.257727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:05:09.261006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:05:09.267470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:05:09.268649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:05:09.268792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:05:09.271766 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:05:09.273245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:05:09.279054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:05:09.279803 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:05:09.280102 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:05:09.280874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:05:09.281631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:05:09.281874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:05:09.282654 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:05:09.282868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:05:09.283760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:05:09.283991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:05:09.294809 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:05:09.297799 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:05:09.309581 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:05:09.312837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:05:09.313102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:05:09.332529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:05:09.333596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:05:09.333678 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:05:09.337561 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:05:09.339471 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:05:09.342562 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:05:09.344360 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1395)
Sep 12 17:05:09.441117 systemd-resolved[1344]: Positive Trust Anchors:
Sep 12 17:05:09.441516 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:05:09.441551 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:05:09.450460 systemd-resolved[1344]: Defaulting to hostname 'linux'.
Sep 12 17:05:09.452654 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:05:09.453899 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:05:09.481985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:05:09.492515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:05:09.496615 systemd-networkd[1417]: lo: Link UP
Sep 12 17:05:09.496623 systemd-networkd[1417]: lo: Gained carrier
Sep 12 17:05:09.498531 systemd-networkd[1417]: Enumeration completed
Sep 12 17:05:09.498630 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:05:09.499065 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:05:09.499077 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:05:09.499938 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 17:05:09.500629 systemd[1]: Reached target network.target - Network.
Sep 12 17:05:09.501760 systemd-networkd[1417]: eth0: Link UP
Sep 12 17:05:09.501764 systemd-networkd[1417]: eth0: Gained carrier
Sep 12 17:05:09.501778 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:05:09.510519 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:05:09.510519 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:05:09.510757 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:05:09.513376 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:05:09.519439 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 12 17:05:09.519782 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 17:05:09.520035 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 17:05:11.278988 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 17:05:09.518842 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:05:11.279064 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:05:11.279075 systemd-resolved[1344]: Clock change detected. Flushing caches.
Sep 12 17:05:11.279171 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:05:11.279215 systemd-timesyncd[1419]: Initial clock synchronization to Fri 2025-09-12 17:05:11.278841 UTC.
Sep 12 17:05:11.282741 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:05:11.288045 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:05:11.310881 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:05:11.393726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:11.403076 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:05:11.414268 kernel: kvm_amd: TSC scaling supported
Sep 12 17:05:11.414339 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 17:05:11.414353 kernel: kvm_amd: Nested Paging enabled
Sep 12 17:05:11.414387 kernel: kvm_amd: LBR virtualization supported
Sep 12 17:05:11.415411 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 17:05:11.415462 kernel: kvm_amd: Virtual GIF supported
Sep 12 17:05:11.418296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:05:11.418963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:11.437030 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:05:11.437304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:05:11.469913 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:05:11.475286 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:05:11.488545 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:05:11.492663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:05:11.532135 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:05:11.534038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:05:11.535323 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:05:11.536487 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:05:11.538187 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:05:11.539623 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:05:11.540802 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:05:11.542047 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:05:11.543449 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:05:11.543477 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:05:11.544398 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:05:11.546259 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:05:11.549070 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:05:11.552833 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:05:11.554310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:05:11.555568 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:05:11.567492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:05:11.568958 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:05:11.571576 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:05:11.573215 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:05:11.574365 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:05:11.575337 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:05:11.576349 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:05:11.576380 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:05:11.577424 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:05:11.579581 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:05:11.582135 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:05:11.584148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:05:11.587845 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:05:11.590224 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:05:11.591908 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:05:11.592139 jq[1459]: false
Sep 12 17:05:11.596215 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:05:11.599248 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:05:11.604178 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:05:11.612233 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:05:11.614764 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:05:11.615617 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:05:11.617810 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:05:11.623154 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found loop3
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found loop4
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found loop5
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found sr0
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found vda
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found vda1
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found vda2
Sep 12 17:05:11.624418 extend-filesystems[1460]: Found vda3
Sep 12 17:05:11.643237 extend-filesystems[1460]: Found usr
Sep 12 17:05:11.643237 extend-filesystems[1460]: Found vda4
Sep 12 17:05:11.643237 extend-filesystems[1460]: Found vda6
Sep 12 17:05:11.643237 extend-filesystems[1460]: Found vda7
Sep 12 17:05:11.643237 extend-filesystems[1460]: Found vda9
Sep 12 17:05:11.643237 extend-filesystems[1460]: Checking size of /dev/vda9
Sep 12 17:05:11.638965 dbus-daemon[1458]: [system] SELinux support is enabled
Sep 12 17:05:11.625964 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:05:11.628983 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:05:11.650771 jq[1473]: true
Sep 12 17:05:11.630503 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:05:11.630887 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:05:11.652960 extend-filesystems[1460]: Resized partition /dev/vda9
Sep 12 17:05:11.631164 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:05:11.656229 jq[1480]: true
Sep 12 17:05:11.635124 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:05:11.635385 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:05:11.644137 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:05:11.663455 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:05:11.667033 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:05:11.676040 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:05:11.678037 update_engine[1472]: I20250912 17:05:11.677287 1472 main.cc:92] Flatcar Update Engine starting
Sep 12 17:05:11.679866 update_engine[1472]: I20250912 17:05:11.679811 1472 update_check_scheduler.cc:74] Next update check in 2m58s
Sep 12 17:05:11.691795 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:05:11.691839 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:05:11.693661 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:05:11.693694 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:05:11.695358 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:05:11.697518 tar[1478]: linux-amd64/helm
Sep 12 17:05:11.725221 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1395)
Sep 12 17:05:11.726234 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:05:11.735860 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 17:05:11.765746 extend-filesystems[1487]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:05:11.765746 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:05:11.765746 extend-filesystems[1487]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 17:05:11.778319 extend-filesystems[1460]: Resized filesystem in /dev/vda9
Sep 12 17:05:11.775222 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:05:11.775713 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:05:11.778025 systemd-logind[1468]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:05:11.778447 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:05:11.779745 systemd-logind[1468]: New seat seat0.
Sep 12 17:05:11.784211 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:05:11.822250 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:05:11.851435 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:05:11.854272 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:05:11.861395 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:05:11.866526 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:05:11.892929 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:05:11.902464 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:05:11.909887 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:05:11.910303 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:05:11.977885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:05:11.999319 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:05:12.051149 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:05:12.054337 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:05:12.055727 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:05:12.113329 containerd[1485]: time="2025-09-12T17:05:12.113136347Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 12 17:05:12.141659 containerd[1485]: time="2025-09-12T17:05:12.141302318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.143989 containerd[1485]: time="2025-09-12T17:05:12.143957489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.144324006Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.144353812Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.144595075Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.144616595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.144711052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.144727162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.145131731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.145148493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.145161056Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.145170233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.145294496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146135 containerd[1485]: time="2025-09-12T17:05:12.145613525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146437 containerd[1485]: time="2025-09-12T17:05:12.145831624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:05:12.146437 containerd[1485]: time="2025-09-12T17:05:12.145845670Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:05:12.146437 containerd[1485]: time="2025-09-12T17:05:12.145964263Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:05:12.146437 containerd[1485]: time="2025-09-12T17:05:12.146056977Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:05:12.263268 containerd[1485]: time="2025-09-12T17:05:12.263178082Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:05:12.263268 containerd[1485]: time="2025-09-12T17:05:12.263282769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:05:12.263475 containerd[1485]: time="2025-09-12T17:05:12.263306593Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:05:12.263475 containerd[1485]: time="2025-09-12T17:05:12.263330608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:05:12.263475 containerd[1485]: time="2025-09-12T17:05:12.263357148Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:05:12.263704 containerd[1485]: time="2025-09-12T17:05:12.263645309Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:05:12.264024 containerd[1485]: time="2025-09-12T17:05:12.263977592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..."
type=io.containerd.runtime.v2 Sep 12 17:05:12.264177 containerd[1485]: time="2025-09-12T17:05:12.264151959Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:05:12.264266 containerd[1485]: time="2025-09-12T17:05:12.264180092Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:05:12.264266 containerd[1485]: time="2025-09-12T17:05:12.264199799Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:05:12.264266 containerd[1485]: time="2025-09-12T17:05:12.264218003Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264266 containerd[1485]: time="2025-09-12T17:05:12.264234514Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264266 containerd[1485]: time="2025-09-12T17:05:12.264250183Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264382 containerd[1485]: time="2025-09-12T17:05:12.264273126Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264382 containerd[1485]: time="2025-09-12T17:05:12.264293965Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264382 containerd[1485]: time="2025-09-12T17:05:12.264312230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264382 containerd[1485]: time="2025-09-12T17:05:12.264327158Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 12 17:05:12.264382 containerd[1485]: time="2025-09-12T17:05:12.264340823Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:05:12.264382 containerd[1485]: time="2025-09-12T17:05:12.264377262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264397229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264412828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264428668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264444618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264461089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264475746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264491656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264520691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264541 containerd[1485]: time="2025-09-12T17:05:12.264540698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264556738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264573500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264588478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264610319Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264675221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264695949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264767 containerd[1485]: time="2025-09-12T17:05:12.264710256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264772723Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264804072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264818379Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264833978Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264846352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264876027Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264890304Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:05:12.264928 containerd[1485]: time="2025-09-12T17:05:12.264909650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:05:12.265409 containerd[1485]: time="2025-09-12T17:05:12.265326773Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:05:12.265409 containerd[1485]: time="2025-09-12T17:05:12.265394149Z" level=info msg="Connect containerd service" Sep 12 17:05:12.265748 containerd[1485]: time="2025-09-12T17:05:12.265440336Z" level=info msg="using legacy CRI server" Sep 12 17:05:12.265748 containerd[1485]: time="2025-09-12T17:05:12.265451346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:05:12.265748 containerd[1485]: 
time="2025-09-12T17:05:12.265638217Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:05:12.266561 containerd[1485]: time="2025-09-12T17:05:12.266534067Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:05:12.266781 containerd[1485]: time="2025-09-12T17:05:12.266721659Z" level=info msg="Start subscribing containerd event" Sep 12 17:05:12.266841 containerd[1485]: time="2025-09-12T17:05:12.266796179Z" level=info msg="Start recovering state" Sep 12 17:05:12.266916 containerd[1485]: time="2025-09-12T17:05:12.266893011Z" level=info msg="Start event monitor" Sep 12 17:05:12.266951 containerd[1485]: time="2025-09-12T17:05:12.266923147Z" level=info msg="Start snapshots syncer" Sep 12 17:05:12.266951 containerd[1485]: time="2025-09-12T17:05:12.266936853Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:05:12.266951 containerd[1485]: time="2025-09-12T17:05:12.266947593Z" level=info msg="Start streaming server" Sep 12 17:05:12.267059 containerd[1485]: time="2025-09-12T17:05:12.266953484Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:05:12.267059 containerd[1485]: time="2025-09-12T17:05:12.267048422Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:05:12.267222 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 12 17:05:12.267503 containerd[1485]: time="2025-09-12T17:05:12.267381847Z" level=info msg="containerd successfully booted in 0.159302s" Sep 12 17:05:12.319756 tar[1478]: linux-amd64/LICENSE Sep 12 17:05:12.319928 tar[1478]: linux-amd64/README.md Sep 12 17:05:12.342202 systemd-networkd[1417]: eth0: Gained IPv6LL Sep 12 17:05:12.344317 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:05:12.346299 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:05:12.349158 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:05:12.360299 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:05:12.363287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:05:12.365919 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:05:12.387329 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:05:12.387742 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:05:12.389687 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:05:12.397687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:05:14.086549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:05:14.088327 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:05:14.090201 systemd[1]: Startup finished in 1.181s (kernel) + 7.325s (initrd) + 5.295s (userspace) = 13.802s. 
Sep 12 17:05:14.093218 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:05:14.666945 kubelet[1572]: E0912 17:05:14.666854 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:05:14.671258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:05:14.671528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:05:14.672043 systemd[1]: kubelet.service: Consumed 2.170s CPU time, 267.3M memory peak. Sep 12 17:05:14.959918 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:05:14.961384 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:55230.service - OpenSSH per-connection server daemon (10.0.0.1:55230). Sep 12 17:05:15.018644 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 55230 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:15.020734 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:15.032198 systemd-logind[1468]: New session 1 of user core. Sep 12 17:05:15.033701 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:05:15.043304 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:05:15.055473 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:05:15.058330 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 17:05:15.067033 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:05:15.069536 systemd-logind[1468]: New session c1 of user core. Sep 12 17:05:15.221900 systemd[1589]: Queued start job for default target default.target. Sep 12 17:05:15.239486 systemd[1589]: Created slice app.slice - User Application Slice. Sep 12 17:05:15.239514 systemd[1589]: Reached target paths.target - Paths. Sep 12 17:05:15.239558 systemd[1589]: Reached target timers.target - Timers. Sep 12 17:05:15.241491 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:05:15.254667 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:05:15.254894 systemd[1589]: Reached target sockets.target - Sockets. Sep 12 17:05:15.254953 systemd[1589]: Reached target basic.target - Basic System. Sep 12 17:05:15.255029 systemd[1589]: Reached target default.target - Main User Target. Sep 12 17:05:15.255091 systemd[1589]: Startup finished in 178ms. Sep 12 17:05:15.255587 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:05:15.257455 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:05:15.342368 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:55240.service - OpenSSH per-connection server daemon (10.0.0.1:55240). Sep 12 17:05:15.382923 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 55240 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:15.384926 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:15.388998 systemd-logind[1468]: New session 2 of user core. Sep 12 17:05:15.399130 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 17:05:15.451937 sshd[1602]: Connection closed by 10.0.0.1 port 55240 Sep 12 17:05:15.452305 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:15.488901 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:55240.service: Deactivated successfully. Sep 12 17:05:15.491233 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:05:15.493230 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:05:15.510285 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:55248.service - OpenSSH per-connection server daemon (10.0.0.1:55248). Sep 12 17:05:15.511089 systemd-logind[1468]: Removed session 2. Sep 12 17:05:15.549528 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 55248 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:15.551241 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:15.555548 systemd-logind[1468]: New session 3 of user core. Sep 12 17:05:15.569138 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:05:15.618621 sshd[1610]: Connection closed by 10.0.0.1 port 55248 Sep 12 17:05:15.619354 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:15.637769 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:55248.service: Deactivated successfully. Sep 12 17:05:15.640095 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:05:15.642096 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:05:15.663318 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:55254.service - OpenSSH per-connection server daemon (10.0.0.1:55254). Sep 12 17:05:15.664516 systemd-logind[1468]: Removed session 3. 
Sep 12 17:05:15.702909 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 55254 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:15.704672 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:15.709372 systemd-logind[1468]: New session 4 of user core. Sep 12 17:05:15.719151 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:05:15.774921 sshd[1618]: Connection closed by 10.0.0.1 port 55254 Sep 12 17:05:15.775607 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:15.786106 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:55254.service: Deactivated successfully. Sep 12 17:05:15.788063 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:05:15.788771 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:05:15.799305 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:55266.service - OpenSSH per-connection server daemon (10.0.0.1:55266). Sep 12 17:05:15.800280 systemd-logind[1468]: Removed session 4. Sep 12 17:05:15.841496 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 55266 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:15.843092 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:15.847661 systemd-logind[1468]: New session 5 of user core. Sep 12 17:05:15.857166 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:05:15.917081 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:05:15.917447 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:05:15.932644 sudo[1627]: pam_unix(sudo:session): session closed for user root Sep 12 17:05:15.934738 sshd[1626]: Connection closed by 10.0.0.1 port 55266 Sep 12 17:05:15.935262 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:15.947136 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:55266.service: Deactivated successfully. Sep 12 17:05:15.949287 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:05:15.951141 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:05:15.967283 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:55278.service - OpenSSH per-connection server daemon (10.0.0.1:55278). Sep 12 17:05:15.968341 systemd-logind[1468]: Removed session 5. Sep 12 17:05:16.010861 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 55278 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:16.012871 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:16.017604 systemd-logind[1468]: New session 6 of user core. Sep 12 17:05:16.033139 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 17:05:16.090508 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:05:16.090870 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:05:16.095523 sudo[1637]: pam_unix(sudo:session): session closed for user root Sep 12 17:05:16.103369 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:05:16.103772 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:05:16.124478 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:05:16.155597 augenrules[1659]: No rules Sep 12 17:05:16.157470 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:05:16.157791 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:05:16.158918 sudo[1636]: pam_unix(sudo:session): session closed for user root Sep 12 17:05:16.160572 sshd[1635]: Connection closed by 10.0.0.1 port 55278 Sep 12 17:05:16.160954 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:16.177686 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:55278.service: Deactivated successfully. Sep 12 17:05:16.179480 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:05:16.180847 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:05:16.191259 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:55284.service - OpenSSH per-connection server daemon (10.0.0.1:55284). Sep 12 17:05:16.192288 systemd-logind[1468]: Removed session 6. Sep 12 17:05:16.230350 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 55284 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:05:16.231883 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:05:16.236463 systemd-logind[1468]: New session 7 of user core. 
Sep 12 17:05:16.252207 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:05:16.308477 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:05:16.308939 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:05:16.818265 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:05:16.818515 (dockerd)[1691]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:05:17.475104 dockerd[1691]: time="2025-09-12T17:05:17.475023956Z" level=info msg="Starting up" Sep 12 17:05:18.246250 dockerd[1691]: time="2025-09-12T17:05:18.246187698Z" level=info msg="Loading containers: start." Sep 12 17:05:18.452040 kernel: Initializing XFRM netlink socket Sep 12 17:05:18.557918 systemd-networkd[1417]: docker0: Link UP Sep 12 17:05:18.617443 dockerd[1691]: time="2025-09-12T17:05:18.617376502Z" level=info msg="Loading containers: done." Sep 12 17:05:18.637498 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1724734108-merged.mount: Deactivated successfully. 
Sep 12 17:05:18.643206 dockerd[1691]: time="2025-09-12T17:05:18.643148984Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:05:18.643304 dockerd[1691]: time="2025-09-12T17:05:18.643266204Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 17:05:18.643468 dockerd[1691]: time="2025-09-12T17:05:18.643438768Z" level=info msg="Daemon has completed initialization" Sep 12 17:05:18.688084 dockerd[1691]: time="2025-09-12T17:05:18.687987979Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:05:18.688268 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:05:19.729330 containerd[1485]: time="2025-09-12T17:05:19.729243467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:05:20.921340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21444441.mount: Deactivated successfully. 
Sep 12 17:05:23.384679 containerd[1485]: time="2025-09-12T17:05:23.384609860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:05:23.385442 containerd[1485]: time="2025-09-12T17:05:23.385353114Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:05:23.386420 containerd[1485]: time="2025-09-12T17:05:23.386390269Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:05:23.389030 containerd[1485]: time="2025-09-12T17:05:23.388992610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:05:23.390357 containerd[1485]: time="2025-09-12T17:05:23.390309490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 3.661000811s" Sep 12 17:05:23.390357 containerd[1485]: time="2025-09-12T17:05:23.390353994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:05:23.391075 containerd[1485]: time="2025-09-12T17:05:23.391043467Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:05:24.922498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 12 17:05:24.936187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:26.185397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:26.189880 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:05:26.443544 containerd[1485]: time="2025-09-12T17:05:26.443382413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:26.445649 containerd[1485]: time="2025-09-12T17:05:26.445587519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632"
Sep 12 17:05:26.448313 containerd[1485]: time="2025-09-12T17:05:26.448280280Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:26.454055 containerd[1485]: time="2025-09-12T17:05:26.451649619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:26.454055 containerd[1485]: time="2025-09-12T17:05:26.453382640Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 3.062307785s"
Sep 12 17:05:26.454055 containerd[1485]: time="2025-09-12T17:05:26.453424529Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 12 17:05:26.454665 containerd[1485]: time="2025-09-12T17:05:26.454568785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 17:05:26.459194 kubelet[1959]: E0912 17:05:26.459124 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:05:26.466078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:05:26.466329 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:05:26.466759 systemd[1]: kubelet.service: Consumed 272ms CPU time, 111.8M memory peak.
Sep 12 17:05:27.642524 containerd[1485]: time="2025-09-12T17:05:27.642449600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:27.643364 containerd[1485]: time="2025-09-12T17:05:27.643327997Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698"
Sep 12 17:05:27.644674 containerd[1485]: time="2025-09-12T17:05:27.644634508Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:27.647784 containerd[1485]: time="2025-09-12T17:05:27.647717551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:27.649482 containerd[1485]: time="2025-09-12T17:05:27.649453828Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.194707029s"
Sep 12 17:05:27.649482 containerd[1485]: time="2025-09-12T17:05:27.649484575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 12 17:05:27.650078 containerd[1485]: time="2025-09-12T17:05:27.650042883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 17:05:29.651778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702461401.mount: Deactivated successfully.
Sep 12 17:05:30.472974 containerd[1485]: time="2025-09-12T17:05:30.472885599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:30.497565 containerd[1485]: time="2025-09-12T17:05:30.497452960Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 12 17:05:30.508996 containerd[1485]: time="2025-09-12T17:05:30.508931471Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:30.533859 containerd[1485]: time="2025-09-12T17:05:30.533798304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:30.534495 containerd[1485]: time="2025-09-12T17:05:30.534448914Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.884253495s"
Sep 12 17:05:30.534495 containerd[1485]: time="2025-09-12T17:05:30.534495561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 12 17:05:30.535065 containerd[1485]: time="2025-09-12T17:05:30.535040303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:05:31.070630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41955723.mount: Deactivated successfully.
Sep 12 17:05:32.875419 containerd[1485]: time="2025-09-12T17:05:32.875333043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:32.876272 containerd[1485]: time="2025-09-12T17:05:32.876185623Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:05:32.877571 containerd[1485]: time="2025-09-12T17:05:32.877521498Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:32.887874 containerd[1485]: time="2025-09-12T17:05:32.887807883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:32.888816 containerd[1485]: time="2025-09-12T17:05:32.888772121Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.353699287s"
Sep 12 17:05:32.888816 containerd[1485]: time="2025-09-12T17:05:32.888813088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:05:32.889519 containerd[1485]: time="2025-09-12T17:05:32.889462015Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:05:33.351889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376287415.mount: Deactivated successfully.
Sep 12 17:05:33.358528 containerd[1485]: time="2025-09-12T17:05:33.358458599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:33.359333 containerd[1485]: time="2025-09-12T17:05:33.359298524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:05:33.360795 containerd[1485]: time="2025-09-12T17:05:33.360730641Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:33.364973 containerd[1485]: time="2025-09-12T17:05:33.364927312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:33.365873 containerd[1485]: time="2025-09-12T17:05:33.365825988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 476.32501ms"
Sep 12 17:05:33.366172 containerd[1485]: time="2025-09-12T17:05:33.365878587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:05:33.366519 containerd[1485]: time="2025-09-12T17:05:33.366496085Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 17:05:33.983867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935292376.mount: Deactivated successfully.
Sep 12 17:05:35.774309 containerd[1485]: time="2025-09-12T17:05:35.774235706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:35.774999 containerd[1485]: time="2025-09-12T17:05:35.774953903Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 12 17:05:35.776238 containerd[1485]: time="2025-09-12T17:05:35.776166628Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:35.779129 containerd[1485]: time="2025-09-12T17:05:35.779094670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:05:35.780278 containerd[1485]: time="2025-09-12T17:05:35.780249526Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.413726411s"
Sep 12 17:05:35.780319 containerd[1485]: time="2025-09-12T17:05:35.780277900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 12 17:05:36.492018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:05:36.504418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:36.688176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:36.692767 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:05:36.734578 kubelet[2121]: E0912 17:05:36.734501 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:05:36.739283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:05:36.739548 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:05:36.739973 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.6M memory peak.
Sep 12 17:05:38.639390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:38.639555 systemd[1]: kubelet.service: Consumed 232ms CPU time, 110.6M memory peak.
Sep 12 17:05:38.648255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:38.675636 systemd[1]: Reload requested from client PID 2137 ('systemctl') (unit session-7.scope)...
Sep 12 17:05:38.675659 systemd[1]: Reloading...
Sep 12 17:05:38.779909 zram_generator::config[2182]: No configuration found.
Sep 12 17:05:39.128902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:05:39.236498 systemd[1]: Reloading finished in 560 ms.
Sep 12 17:05:39.287292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:39.291157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:39.293393 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:05:39.293678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:39.293719 systemd[1]: kubelet.service: Consumed 170ms CPU time, 98.3M memory peak.
Sep 12 17:05:39.316385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:39.493442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:39.498175 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:05:39.551677 kubelet[2231]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:05:39.551677 kubelet[2231]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:05:39.551677 kubelet[2231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:05:39.552182 kubelet[2231]: I0912 17:05:39.551767 2231 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:05:40.083063 kubelet[2231]: I0912 17:05:40.083023 2231 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:05:40.083063 kubelet[2231]: I0912 17:05:40.083054 2231 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:05:40.083330 kubelet[2231]: I0912 17:05:40.083309 2231 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:05:40.106062 kubelet[2231]: E0912 17:05:40.106024 2231 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:05:40.106880 kubelet[2231]: I0912 17:05:40.106859 2231 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:05:40.117363 kubelet[2231]: E0912 17:05:40.117312 2231 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:05:40.117363 kubelet[2231]: I0912 17:05:40.117348 2231 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:05:40.124137 kubelet[2231]: I0912 17:05:40.124107 2231 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:05:40.125183 kubelet[2231]: I0912 17:05:40.125156 2231 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:05:40.125370 kubelet[2231]: I0912 17:05:40.125327 2231 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:05:40.125569 kubelet[2231]: I0912 17:05:40.125362 2231 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:05:40.125690 kubelet[2231]: I0912 17:05:40.125584 2231 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:05:40.125690 kubelet[2231]: I0912 17:05:40.125594 2231 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:05:40.125767 kubelet[2231]: I0912 17:05:40.125750 2231 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:05:40.128886 kubelet[2231]: I0912 17:05:40.128855 2231 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:05:40.128886 kubelet[2231]: I0912 17:05:40.128885 2231 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:05:40.128962 kubelet[2231]: I0912 17:05:40.128945 2231 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:05:40.128993 kubelet[2231]: I0912 17:05:40.128981 2231 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:05:40.131488 kubelet[2231]: W0912 17:05:40.131364 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep 12 17:05:40.131488 kubelet[2231]: E0912 17:05:40.131445 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:05:40.132166 kubelet[2231]: I0912 17:05:40.132136 2231 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 12 17:05:40.132411 kubelet[2231]: W0912 17:05:40.132376 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep 12 17:05:40.132448 kubelet[2231]: E0912 17:05:40.132417 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:05:40.132590 kubelet[2231]: I0912 17:05:40.132570 2231 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:05:40.133330 kubelet[2231]: W0912 17:05:40.133300 2231 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:05:40.138086 kubelet[2231]: I0912 17:05:40.135932 2231 server.go:1274] "Started kubelet"
Sep 12 17:05:40.138086 kubelet[2231]: I0912 17:05:40.136196 2231 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:05:40.138086 kubelet[2231]: I0912 17:05:40.136209 2231 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:05:40.138086 kubelet[2231]: I0912 17:05:40.136625 2231 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:05:40.138086 kubelet[2231]: I0912 17:05:40.137330 2231 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:05:40.138086 kubelet[2231]: I0912 17:05:40.137351 2231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:05:40.138980 kubelet[2231]: I0912 17:05:40.138627 2231 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:05:40.140698 kubelet[2231]: E0912 17:05:40.140425 2231 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:05:40.140698 kubelet[2231]: I0912 17:05:40.140465 2231 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:05:40.140698 kubelet[2231]: I0912 17:05:40.140623 2231 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:05:40.140698 kubelet[2231]: I0912 17:05:40.140681 2231 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:05:40.141142 kubelet[2231]: W0912 17:05:40.140946 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep 12 17:05:40.141142 kubelet[2231]: E0912 17:05:40.140987 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:05:40.141142 kubelet[2231]: E0912 17:05:40.141081 2231 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:05:40.141243 kubelet[2231]: E0912 17:05:40.141142 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms"
Sep 12 17:05:40.141243 kubelet[2231]: I0912 17:05:40.141239 2231 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:05:40.141342 kubelet[2231]: I0912 17:05:40.141321 2231 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:05:40.144167 kubelet[2231]: I0912 17:05:40.144135 2231 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:05:40.153211 kubelet[2231]: E0912 17:05:40.151765 2231 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186497df5aa99a63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:05:40.135901795 +0000 UTC m=+0.633620492,LastTimestamp:2025-09-12 17:05:40.135901795 +0000 UTC m=+0.633620492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 17:05:40.162946 kubelet[2231]: I0912 17:05:40.162890 2231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:05:40.164592 kubelet[2231]: I0912 17:05:40.164481 2231 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:05:40.164592 kubelet[2231]: I0912 17:05:40.164496 2231 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:05:40.164592 kubelet[2231]: I0912 17:05:40.164516 2231 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:05:40.164789 kubelet[2231]: I0912 17:05:40.164752 2231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:05:40.164817 kubelet[2231]: I0912 17:05:40.164808 2231 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:05:40.165109 kubelet[2231]: I0912 17:05:40.164968 2231 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:05:40.165109 kubelet[2231]: E0912 17:05:40.165041 2231 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:05:40.165448 kubelet[2231]: W0912 17:05:40.165418 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused
Sep 12 17:05:40.165482 kubelet[2231]: E0912 17:05:40.165451 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:05:40.169200 kubelet[2231]: I0912 17:05:40.168771 2231 policy_none.go:49] "None policy: Start"
Sep 12 17:05:40.169257 kubelet[2231]: I0912 17:05:40.169182 2231 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:05:40.169282 kubelet[2231]: I0912 17:05:40.169258 2231 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:05:40.175830 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:05:40.186179 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:05:40.189235 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:05:40.203995 kubelet[2231]: I0912 17:05:40.203960 2231 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:05:40.204214 kubelet[2231]: I0912 17:05:40.204191 2231 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:05:40.204241 kubelet[2231]: I0912 17:05:40.204211 2231 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:05:40.204576 kubelet[2231]: I0912 17:05:40.204467 2231 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:05:40.205571 kubelet[2231]: E0912 17:05:40.205542 2231 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 17:05:40.275652 systemd[1]: Created slice kubepods-burstable-pod12b6698c78639629842536dae55688b1.slice - libcontainer container kubepods-burstable-pod12b6698c78639629842536dae55688b1.slice.
Sep 12 17:05:40.295523 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice.
Sep 12 17:05:40.299470 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice.
Sep 12 17:05:40.305403 kubelet[2231]: I0912 17:05:40.305365 2231 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:05:40.305741 kubelet[2231]: E0912 17:05:40.305716 2231 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Sep 12 17:05:40.342040 kubelet[2231]: E0912 17:05:40.341872 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms"
Sep 12 17:05:40.441298 kubelet[2231]: I0912 17:05:40.441247 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12b6698c78639629842536dae55688b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12b6698c78639629842536dae55688b1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:05:40.441298 kubelet[2231]: I0912 17:05:40.441281 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12b6698c78639629842536dae55688b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"12b6698c78639629842536dae55688b1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:05:40.441528 kubelet[2231]: I0912 17:05:40.441357 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:40.441528 kubelet[2231]: I0912 17:05:40.441416 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:40.441528 kubelet[2231]: I0912 17:05:40.441502 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:40.441598 kubelet[2231]: I0912 17:05:40.441552 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:40.441659 kubelet[2231]: I0912 17:05:40.441639 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:05:40.441688 kubelet[2231]: I0912 17:05:40.441662 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:40.441711 kubelet[2231]: I0912 17:05:40.441686 2231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12b6698c78639629842536dae55688b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12b6698c78639629842536dae55688b1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:05:40.507581 kubelet[2231]: I0912 17:05:40.507546 2231 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:05:40.508016 kubelet[2231]: E0912 17:05:40.507977 2231 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost"
Sep 12 17:05:40.594689 kubelet[2231]: E0912 17:05:40.594522 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:40.595524 containerd[1485]: time="2025-09-12T17:05:40.595478299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12b6698c78639629842536dae55688b1,Namespace:kube-system,Attempt:0,}"
Sep 12 17:05:40.598822 kubelet[2231]: E0912 17:05:40.598795 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:40.599282 containerd[1485]: time="2025-09-12T17:05:40.599242550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}"
Sep 12 17:05:40.602580 kubelet[2231]: E0912 17:05:40.602551 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:40.602996 containerd[1485]: time="2025-09-12T17:05:40.602965764Z" level=info msg="RunPodSandbox
for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:05:40.742714 kubelet[2231]: E0912 17:05:40.742666 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Sep 12 17:05:40.909973 kubelet[2231]: I0912 17:05:40.909825 2231 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:05:40.910436 kubelet[2231]: E0912 17:05:40.910365 2231 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Sep 12 17:05:40.955256 kubelet[2231]: W0912 17:05:40.955166 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Sep 12 17:05:40.955256 kubelet[2231]: E0912 17:05:40.955256 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:05:41.047738 kubelet[2231]: W0912 17:05:41.047645 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Sep 12 17:05:41.047738 kubelet[2231]: E0912 17:05:41.047740 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:05:41.506904 kubelet[2231]: W0912 17:05:41.506817 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Sep 12 17:05:41.506904 kubelet[2231]: E0912 17:05:41.506904 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:05:41.543852 kubelet[2231]: E0912 17:05:41.543789 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="1.6s" Sep 12 17:05:41.648688 kubelet[2231]: W0912 17:05:41.648573 2231 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Sep 12 17:05:41.648688 kubelet[2231]: E0912 17:05:41.648685 2231 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:05:41.712396 
kubelet[2231]: I0912 17:05:41.712360 2231 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:05:41.712868 kubelet[2231]: E0912 17:05:41.712782 2231 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Sep 12 17:05:42.121890 kubelet[2231]: E0912 17:05:42.121833 2231 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:05:42.166192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36697728.mount: Deactivated successfully. Sep 12 17:05:42.171608 containerd[1485]: time="2025-09-12T17:05:42.171568404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:05:42.174394 containerd[1485]: time="2025-09-12T17:05:42.174308844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:05:42.175401 containerd[1485]: time="2025-09-12T17:05:42.175362320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:05:42.177206 containerd[1485]: time="2025-09-12T17:05:42.177170572Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:05:42.178177 
containerd[1485]: time="2025-09-12T17:05:42.178118480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:05:42.179094 containerd[1485]: time="2025-09-12T17:05:42.179054866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:05:42.179896 containerd[1485]: time="2025-09-12T17:05:42.179847964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:05:42.181427 containerd[1485]: time="2025-09-12T17:05:42.181385768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:05:42.183692 containerd[1485]: time="2025-09-12T17:05:42.183645647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.584289374s" Sep 12 17:05:42.184466 containerd[1485]: time="2025-09-12T17:05:42.184426963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.588835121s" Sep 12 17:05:42.187276 containerd[1485]: time="2025-09-12T17:05:42.187237965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.584192543s" Sep 12 17:05:42.347995 containerd[1485]: time="2025-09-12T17:05:42.347681852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:05:42.347995 containerd[1485]: time="2025-09-12T17:05:42.347756452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:05:42.347995 containerd[1485]: time="2025-09-12T17:05:42.347775909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:42.347995 containerd[1485]: time="2025-09-12T17:05:42.347876447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:42.349048 containerd[1485]: time="2025-09-12T17:05:42.347973479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:05:42.349048 containerd[1485]: time="2025-09-12T17:05:42.348209793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:05:42.349048 containerd[1485]: time="2025-09-12T17:05:42.348230932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:42.349385 containerd[1485]: time="2025-09-12T17:05:42.349214527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:42.355894 containerd[1485]: time="2025-09-12T17:05:42.355745868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:05:42.358063 containerd[1485]: time="2025-09-12T17:05:42.356976987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:05:42.358063 containerd[1485]: time="2025-09-12T17:05:42.357014468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:42.358063 containerd[1485]: time="2025-09-12T17:05:42.357115146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:42.410941 systemd[1]: Started cri-containerd-33e137501afed8d94f4d5904a464b3e33a13e00e7a69006232f1a85d202605ce.scope - libcontainer container 33e137501afed8d94f4d5904a464b3e33a13e00e7a69006232f1a85d202605ce. Sep 12 17:05:42.416810 systemd[1]: Started cri-containerd-6db0dd15c2bfac4cce58f87f4b30f835637c19cce380f5180dee784e4a327ff7.scope - libcontainer container 6db0dd15c2bfac4cce58f87f4b30f835637c19cce380f5180dee784e4a327ff7. Sep 12 17:05:42.435168 systemd[1]: Started cri-containerd-9602cca44fd0011d724e942a466962d311431c850cb05bb99156dbb928640d8b.scope - libcontainer container 9602cca44fd0011d724e942a466962d311431c850cb05bb99156dbb928640d8b. 
Sep 12 17:05:42.466510 containerd[1485]: time="2025-09-12T17:05:42.466460046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"33e137501afed8d94f4d5904a464b3e33a13e00e7a69006232f1a85d202605ce\""
Sep 12 17:05:42.467603 kubelet[2231]: E0912 17:05:42.467570 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:42.474483 containerd[1485]: time="2025-09-12T17:05:42.474380873Z" level=info msg="CreateContainer within sandbox \"33e137501afed8d94f4d5904a464b3e33a13e00e7a69006232f1a85d202605ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:05:42.477521 containerd[1485]: time="2025-09-12T17:05:42.477494173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:12b6698c78639629842536dae55688b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6db0dd15c2bfac4cce58f87f4b30f835637c19cce380f5180dee784e4a327ff7\""
Sep 12 17:05:42.478368 kubelet[2231]: E0912 17:05:42.478342 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:42.480676 containerd[1485]: time="2025-09-12T17:05:42.480649822Z" level=info msg="CreateContainer within sandbox \"6db0dd15c2bfac4cce58f87f4b30f835637c19cce380f5180dee784e4a327ff7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:05:42.495817 containerd[1485]: time="2025-09-12T17:05:42.495759644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9602cca44fd0011d724e942a466962d311431c850cb05bb99156dbb928640d8b\""
Sep 12 17:05:42.496582 kubelet[2231]: E0912 17:05:42.496558 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:42.498259 containerd[1485]: time="2025-09-12T17:05:42.498230268Z" level=info msg="CreateContainer within sandbox \"9602cca44fd0011d724e942a466962d311431c850cb05bb99156dbb928640d8b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:05:42.503815 containerd[1485]: time="2025-09-12T17:05:42.503780238Z" level=info msg="CreateContainer within sandbox \"33e137501afed8d94f4d5904a464b3e33a13e00e7a69006232f1a85d202605ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55ff363d4f80aa902300c7f1409c03e0d4f7aa9071da9e2bec8b2e2954fd9a96\""
Sep 12 17:05:42.505076 containerd[1485]: time="2025-09-12T17:05:42.504263595Z" level=info msg="StartContainer for \"55ff363d4f80aa902300c7f1409c03e0d4f7aa9071da9e2bec8b2e2954fd9a96\""
Sep 12 17:05:42.507154 containerd[1485]: time="2025-09-12T17:05:42.507116266Z" level=info msg="CreateContainer within sandbox \"6db0dd15c2bfac4cce58f87f4b30f835637c19cce380f5180dee784e4a327ff7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b8e1da94d248c4e4c226642e440ce2f4335f5673725591e59d827bfa03f25d86\""
Sep 12 17:05:42.509356 containerd[1485]: time="2025-09-12T17:05:42.508271643Z" level=info msg="StartContainer for \"b8e1da94d248c4e4c226642e440ce2f4335f5673725591e59d827bfa03f25d86\""
Sep 12 17:05:42.520154 containerd[1485]: time="2025-09-12T17:05:42.520044936Z" level=info msg="CreateContainer within sandbox \"9602cca44fd0011d724e942a466962d311431c850cb05bb99156dbb928640d8b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f33c5a6c9f4aaf2638b1f0f5ed63e92d6236f889f801f22f2925f69add57751\""
Sep 12 17:05:42.521049 containerd[1485]: time="2025-09-12T17:05:42.520985330Z" level=info msg="StartContainer for \"6f33c5a6c9f4aaf2638b1f0f5ed63e92d6236f889f801f22f2925f69add57751\""
Sep 12 17:05:42.534180 systemd[1]: Started cri-containerd-55ff363d4f80aa902300c7f1409c03e0d4f7aa9071da9e2bec8b2e2954fd9a96.scope - libcontainer container 55ff363d4f80aa902300c7f1409c03e0d4f7aa9071da9e2bec8b2e2954fd9a96.
Sep 12 17:05:42.537491 systemd[1]: Started cri-containerd-b8e1da94d248c4e4c226642e440ce2f4335f5673725591e59d827bfa03f25d86.scope - libcontainer container b8e1da94d248c4e4c226642e440ce2f4335f5673725591e59d827bfa03f25d86.
Sep 12 17:05:42.569259 systemd[1]: Started cri-containerd-6f33c5a6c9f4aaf2638b1f0f5ed63e92d6236f889f801f22f2925f69add57751.scope - libcontainer container 6f33c5a6c9f4aaf2638b1f0f5ed63e92d6236f889f801f22f2925f69add57751.
Sep 12 17:05:42.599623 containerd[1485]: time="2025-09-12T17:05:42.599578849Z" level=info msg="StartContainer for \"b8e1da94d248c4e4c226642e440ce2f4335f5673725591e59d827bfa03f25d86\" returns successfully"
Sep 12 17:05:42.599769 containerd[1485]: time="2025-09-12T17:05:42.599670802Z" level=info msg="StartContainer for \"55ff363d4f80aa902300c7f1409c03e0d4f7aa9071da9e2bec8b2e2954fd9a96\" returns successfully"
Sep 12 17:05:42.634098 containerd[1485]: time="2025-09-12T17:05:42.634041892Z" level=info msg="StartContainer for \"6f33c5a6c9f4aaf2638b1f0f5ed63e92d6236f889f801f22f2925f69add57751\" returns successfully"
Sep 12 17:05:43.181717 kubelet[2231]: E0912 17:05:43.181108 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:43.183304 kubelet[2231]: E0912 17:05:43.183083 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:43.183304 kubelet[2231]: E0912 17:05:43.183260 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:43.316684 kubelet[2231]: I0912 17:05:43.316297 2231 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:05:44.082211 kubelet[2231]: E0912 17:05:44.082163 2231 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 12 17:05:44.132611 kubelet[2231]: I0912 17:05:44.132563 2231 apiserver.go:52] "Watching apiserver"
Sep 12 17:05:44.141611 kubelet[2231]: I0912 17:05:44.141561 2231 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 17:05:44.183486 kubelet[2231]: E0912 17:05:44.183450 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:44.221572 kubelet[2231]: I0912 17:05:44.221533 2231 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 12 17:05:44.221572 kubelet[2231]: E0912 17:05:44.221565 2231 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 17:05:45.558657 kubelet[2231]: E0912 17:05:45.558612 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:45.995458 systemd[1]: Reload requested from client PID 2511 ('systemctl') (unit session-7.scope)...
Sep 12 17:05:45.995477 systemd[1]: Reloading...
Sep 12 17:05:46.081051 zram_generator::config[2558]: No configuration found.
Sep 12 17:05:46.186368 kubelet[2231]: E0912 17:05:46.186331 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:05:46.199119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:05:46.319047 systemd[1]: Reloading finished in 323 ms.
Sep 12 17:05:46.342864 kubelet[2231]: I0912 17:05:46.342661 2231 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:05:46.342768 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:46.364561 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:05:46.364868 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:46.364926 systemd[1]: kubelet.service: Consumed 1.187s CPU time, 132.5M memory peak.
Sep 12 17:05:46.373436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:05:46.559673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:05:46.564109 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:05:46.607283 kubelet[2600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:05:46.607283 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:05:46.607283 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:05:46.607283 kubelet[2600]: I0912 17:05:46.606396 2600 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:05:46.615026 kubelet[2600]: I0912 17:05:46.614076 2600 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 17:05:46.615026 kubelet[2600]: I0912 17:05:46.614103 2600 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:05:46.615026 kubelet[2600]: I0912 17:05:46.614349 2600 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 17:05:46.615766 kubelet[2600]: I0912 17:05:46.615748 2600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:05:46.618130 kubelet[2600]: I0912 17:05:46.617804 2600 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:05:46.623123 kubelet[2600]: E0912 17:05:46.623082 2600 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:05:46.623123 kubelet[2600]: I0912 17:05:46.623114 2600 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:05:46.628505 kubelet[2600]: I0912 17:05:46.628471 2600 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:05:46.628657 kubelet[2600]: I0912 17:05:46.628641 2600 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 17:05:46.628858 kubelet[2600]: I0912 17:05:46.628821 2600 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:05:46.629040 kubelet[2600]: I0912 17:05:46.628849 2600 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:05:46.629124 kubelet[2600]: I0912 17:05:46.629053 2600 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:05:46.629124 kubelet[2600]: I0912 17:05:46.629063 2600 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 17:05:46.629124 kubelet[2600]: I0912 17:05:46.629100 2600 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:05:46.629242 kubelet[2600]: I0912 17:05:46.629228 2600 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 17:05:46.629274 kubelet[2600]: I0912 17:05:46.629244 2600 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:05:46.629303 kubelet[2600]: I0912 17:05:46.629281 2600 kubelet.go:314] "Adding apiserver pod source"
Sep 12 17:05:46.629303 kubelet[2600]: I0912 17:05:46.629293 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:05:46.630277 kubelet[2600]: I0912 17:05:46.630248 2600 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 12 17:05:46.630699 kubelet[2600]: I0912 17:05:46.630672 2600 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:05:46.631130 kubelet[2600]: I0912 17:05:46.631110 2600 server.go:1274] "Started kubelet"
Sep 12 17:05:46.634347 kubelet[2600]: I0912 17:05:46.631693 2600 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:05:46.634347 kubelet[2600]: I0912 17:05:46.631728 2600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:05:46.634347 kubelet[2600]: I0912 17:05:46.631981 2600 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:05:46.634347 kubelet[2600]: I0912 17:05:46.632777 2600 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 17:05:46.635174 kubelet[2600]: I0912 17:05:46.635153 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:05:46.637999 kubelet[2600]: I0912 17:05:46.637959 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:05:46.641295 kubelet[2600]: I0912 17:05:46.641270 2600 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 17:05:46.642188 kubelet[2600]: E0912 17:05:46.641724 2600 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 17:05:46.642188 kubelet[2600]: E0912 17:05:46.641785 2600 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:05:46.642590 kubelet[2600]: I0912 17:05:46.642574 2600 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:05:46.642673 kubelet[2600]: I0912 17:05:46.642649 2600 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 17:05:46.642890 kubelet[2600]: I0912 17:05:46.642869 2600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:05:46.643395 kubelet[2600]: I0912 17:05:46.642938 2600 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:05:46.645041 kubelet[2600]: I0912 17:05:46.645023 2600 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:05:46.653933 kubelet[2600]: I0912 17:05:46.653881 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:05:46.655396 kubelet[2600]: I0912 17:05:46.655379 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:05:46.655494 kubelet[2600]: I0912 17:05:46.655483 2600 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 17:05:46.655574 kubelet[2600]: I0912 17:05:46.655563 2600 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 17:05:46.655670 kubelet[2600]: E0912 17:05:46.655651 2600 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:05:46.681341 kubelet[2600]: I0912 17:05:46.681303 2600 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 17:05:46.681341 kubelet[2600]: I0912 17:05:46.681322 2600 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 17:05:46.681341 kubelet[2600]: I0912 17:05:46.681342 2600 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:05:46.681661 kubelet[2600]: I0912 17:05:46.681616 2600 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 17:05:46.681661 kubelet[2600]: I0912 17:05:46.681625 2600 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 17:05:46.681661 kubelet[2600]: I0912 17:05:46.681648 2600 policy_none.go:49] "None policy: Start"
Sep 12 17:05:46.682437 kubelet[2600]: I0912 17:05:46.682405 2600 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 17:05:46.682437 kubelet[2600]: I0912 17:05:46.682429 2600 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:05:46.682619 kubelet[2600]: I0912 17:05:46.682554 2600 state_mem.go:75] "Updated machine memory state"
Sep 12 17:05:46.686999 kubelet[2600]: I0912 17:05:46.686937 2600 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:05:46.686999 kubelet[2600]: I0912 17:05:46.687146 2600 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:05:46.686999 kubelet[2600]: I0912 17:05:46.687157 2600 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:05:46.687408 kubelet[2600]: I0912 17:05:46.687383 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:05:46.764572 kubelet[2600]: E0912 17:05:46.764512 2600 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 12 17:05:46.792454 kubelet[2600]: I0912 17:05:46.792412 2600 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 12 17:05:46.799229 kubelet[2600]: I0912 17:05:46.799193 2600 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 12 17:05:46.799492 kubelet[2600]: I0912 17:05:46.799451 2600 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 12 17:05:46.845573 kubelet[2600]: I0912 17:05:46.845479 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12b6698c78639629842536dae55688b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"12b6698c78639629842536dae55688b1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:05:46.845573 kubelet[2600]: I0912 17:05:46.845570 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:46.845811 kubelet[2600]: I0912 17:05:46.845605 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:46.845811 kubelet[2600]: I0912 17:05:46.845628 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:46.845811 kubelet[2600]: I0912 17:05:46.845655 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 17:05:46.845811 kubelet[2600]: I0912 17:05:46.845682 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 17:05:46.845811 kubelet[2600]: I0912 17:05:46.845703 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12b6698c78639629842536dae55688b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"12b6698c78639629842536dae55688b1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 17:05:46.845932 kubelet[2600]: I0912 17:05:46.845734 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12b6698c78639629842536dae55688b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"12b6698c78639629842536dae55688b1\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:05:46.845932 kubelet[2600]: I0912 17:05:46.845755 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:05:46.991616 sudo[2636]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:05:46.992048 sudo[2636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:05:47.061707 kubelet[2600]: E0912 17:05:47.061642 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:47.065363 kubelet[2600]: E0912 17:05:47.065335 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:47.065525 kubelet[2600]: E0912 17:05:47.065431 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:47.512446 sudo[2636]: pam_unix(sudo:session): session closed for user root Sep 12 17:05:47.630429 kubelet[2600]: I0912 17:05:47.630376 2600 apiserver.go:52] "Watching apiserver" Sep 12 17:05:47.643029 kubelet[2600]: I0912 17:05:47.642958 2600 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:05:47.667677 kubelet[2600]: E0912 17:05:47.667481 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:47.667677 kubelet[2600]: E0912 17:05:47.667480 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:48.221078 kubelet[2600]: E0912 17:05:48.220980 2600 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:05:48.221380 kubelet[2600]: E0912 17:05:48.221330 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:48.275141 kubelet[2600]: I0912 17:05:48.275058 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.274997103 podStartE2EDuration="3.274997103s" podCreationTimestamp="2025-09-12 17:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:05:48.27471265 +0000 UTC m=+1.706320518" watchObservedRunningTime="2025-09-12 17:05:48.274997103 +0000 UTC m=+1.706604971" Sep 12 17:05:48.289038 kubelet[2600]: I0912 17:05:48.288966 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.288945189 podStartE2EDuration="2.288945189s" podCreationTimestamp="2025-09-12 17:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:05:48.281983444 +0000 UTC m=+1.713591312" watchObservedRunningTime="2025-09-12 17:05:48.288945189 +0000 UTC m=+1.720553057" Sep 12 17:05:48.669337 kubelet[2600]: E0912 17:05:48.669205 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:48.920809 sudo[1671]: pam_unix(sudo:session): session closed for user root Sep 12 17:05:48.922410 sshd[1670]: Connection closed by 10.0.0.1 port 55284 Sep 12 17:05:48.923150 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Sep 12 17:05:48.927595 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:55284.service: Deactivated successfully. Sep 12 17:05:48.930323 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:05:48.930592 systemd[1]: session-7.scope: Consumed 5.025s CPU time, 252.4M memory peak. Sep 12 17:05:48.931998 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:05:48.933034 systemd-logind[1468]: Removed session 7. Sep 12 17:05:49.048324 kubelet[2600]: E0912 17:05:49.048271 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:50.910548 kubelet[2600]: E0912 17:05:50.910498 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:51.825469 kubelet[2600]: I0912 17:05:51.825426 2600 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:05:51.825897 containerd[1485]: time="2025-09-12T17:05:51.825846452Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:05:51.826569 kubelet[2600]: I0912 17:05:51.826226 2600 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:05:53.055073 kubelet[2600]: I0912 17:05:53.054837 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.054812628 podStartE2EDuration="7.054812628s" podCreationTimestamp="2025-09-12 17:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:05:48.289646967 +0000 UTC m=+1.721254835" watchObservedRunningTime="2025-09-12 17:05:53.054812628 +0000 UTC m=+6.486420506" Sep 12 17:05:53.063107 systemd[1]: Created slice kubepods-besteffort-pod20647506_c75e_4229_acc0_664ce8d7b0f9.slice - libcontainer container kubepods-besteffort-pod20647506_c75e_4229_acc0_664ce8d7b0f9.slice. Sep 12 17:05:53.078436 kubelet[2600]: I0912 17:05:53.078383 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20647506-c75e-4229-acc0-664ce8d7b0f9-lib-modules\") pod \"kube-proxy-msj7h\" (UID: \"20647506-c75e-4229-acc0-664ce8d7b0f9\") " pod="kube-system/kube-proxy-msj7h" Sep 12 17:05:53.078436 kubelet[2600]: I0912 17:05:53.078418 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20647506-c75e-4229-acc0-664ce8d7b0f9-kube-proxy\") pod \"kube-proxy-msj7h\" (UID: \"20647506-c75e-4229-acc0-664ce8d7b0f9\") " pod="kube-system/kube-proxy-msj7h" Sep 12 17:05:53.078436 kubelet[2600]: I0912 17:05:53.078437 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20647506-c75e-4229-acc0-664ce8d7b0f9-xtables-lock\") pod \"kube-proxy-msj7h\" (UID: 
\"20647506-c75e-4229-acc0-664ce8d7b0f9\") " pod="kube-system/kube-proxy-msj7h" Sep 12 17:05:53.078671 kubelet[2600]: I0912 17:05:53.078453 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t644q\" (UniqueName: \"kubernetes.io/projected/20647506-c75e-4229-acc0-664ce8d7b0f9-kube-api-access-t644q\") pod \"kube-proxy-msj7h\" (UID: \"20647506-c75e-4229-acc0-664ce8d7b0f9\") " pod="kube-system/kube-proxy-msj7h" Sep 12 17:05:53.533396 systemd[1]: Created slice kubepods-besteffort-pod379ac0bb_c8e5_4521_8c60_2f1dfd26ce42.slice - libcontainer container kubepods-besteffort-pod379ac0bb_c8e5_4521_8c60_2f1dfd26ce42.slice. Sep 12 17:05:53.538260 systemd[1]: Created slice kubepods-burstable-pod09637d68_1a17_4002_8a5f_78efaeb9307a.slice - libcontainer container kubepods-burstable-pod09637d68_1a17_4002_8a5f_78efaeb9307a.slice. Sep 12 17:05:53.581922 kubelet[2600]: I0912 17:05:53.581842 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-hostproc\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.581922 kubelet[2600]: I0912 17:05:53.581898 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cni-path\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.581922 kubelet[2600]: I0912 17:05:53.581923 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-lib-modules\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 
17:05:53.582222 kubelet[2600]: I0912 17:05:53.581949 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-run\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582222 kubelet[2600]: I0912 17:05:53.581972 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-net\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582222 kubelet[2600]: I0912 17:05:53.582022 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-kernel\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582222 kubelet[2600]: I0912 17:05:53.582050 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-cilium-config-path\") pod \"cilium-operator-5d85765b45-kx587\" (UID: \"379ac0bb-c8e5-4521-8c60-2f1dfd26ce42\") " pod="kube-system/cilium-operator-5d85765b45-kx587" Sep 12 17:05:53.582222 kubelet[2600]: I0912 17:05:53.582077 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jppq\" (UniqueName: \"kubernetes.io/projected/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-kube-api-access-5jppq\") pod \"cilium-operator-5d85765b45-kx587\" (UID: \"379ac0bb-c8e5-4521-8c60-2f1dfd26ce42\") " pod="kube-system/cilium-operator-5d85765b45-kx587" Sep 12 
17:05:53.582412 kubelet[2600]: I0912 17:05:53.582097 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-cgroup\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582412 kubelet[2600]: I0912 17:05:53.582115 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-etc-cni-netd\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582412 kubelet[2600]: I0912 17:05:53.582133 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-bpf-maps\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582412 kubelet[2600]: I0912 17:05:53.582152 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-config-path\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582412 kubelet[2600]: I0912 17:05:53.582212 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-xtables-lock\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582412 kubelet[2600]: I0912 17:05:53.582240 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-hubble-tls\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582565 kubelet[2600]: I0912 17:05:53.582265 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbg8\" (UniqueName: \"kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-kube-api-access-jvbg8\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.582565 kubelet[2600]: I0912 17:05:53.582287 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09637d68-1a17-4002-8a5f-78efaeb9307a-clustermesh-secrets\") pod \"cilium-pjqbb\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") " pod="kube-system/cilium-pjqbb" Sep 12 17:05:53.678530 kubelet[2600]: E0912 17:05:53.678490 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:53.679334 containerd[1485]: time="2025-09-12T17:05:53.679292246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-msj7h,Uid:20647506-c75e-4229-acc0-664ce8d7b0f9,Namespace:kube-system,Attempt:0,}" Sep 12 17:05:53.719855 containerd[1485]: time="2025-09-12T17:05:53.719673209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:05:53.719855 containerd[1485]: time="2025-09-12T17:05:53.719838462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:05:53.720115 containerd[1485]: time="2025-09-12T17:05:53.719874390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:53.720115 containerd[1485]: time="2025-09-12T17:05:53.720040365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:53.744169 systemd[1]: Started cri-containerd-14837fa1722c13908cffba5fae8d6551e7f7fab3e5abcc5d9d8deb463820ad65.scope - libcontainer container 14837fa1722c13908cffba5fae8d6551e7f7fab3e5abcc5d9d8deb463820ad65. Sep 12 17:05:53.768941 containerd[1485]: time="2025-09-12T17:05:53.768890672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-msj7h,Uid:20647506-c75e-4229-acc0-664ce8d7b0f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"14837fa1722c13908cffba5fae8d6551e7f7fab3e5abcc5d9d8deb463820ad65\"" Sep 12 17:05:53.770099 kubelet[2600]: E0912 17:05:53.770058 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:53.772488 containerd[1485]: time="2025-09-12T17:05:53.772441717Z" level=info msg="CreateContainer within sandbox \"14837fa1722c13908cffba5fae8d6551e7f7fab3e5abcc5d9d8deb463820ad65\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:05:53.792701 containerd[1485]: time="2025-09-12T17:05:53.792547933Z" level=info msg="CreateContainer within sandbox \"14837fa1722c13908cffba5fae8d6551e7f7fab3e5abcc5d9d8deb463820ad65\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"319e757b9dafd085862705b78903c92de51752c7562f125ee1e7d5632fa78309\"" Sep 12 17:05:53.794416 containerd[1485]: time="2025-09-12T17:05:53.794372758Z" level=info msg="StartContainer for 
\"319e757b9dafd085862705b78903c92de51752c7562f125ee1e7d5632fa78309\"" Sep 12 17:05:53.835182 systemd[1]: Started cri-containerd-319e757b9dafd085862705b78903c92de51752c7562f125ee1e7d5632fa78309.scope - libcontainer container 319e757b9dafd085862705b78903c92de51752c7562f125ee1e7d5632fa78309. Sep 12 17:05:53.837254 kubelet[2600]: E0912 17:05:53.837226 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:53.838423 containerd[1485]: time="2025-09-12T17:05:53.837994415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kx587,Uid:379ac0bb-c8e5-4521-8c60-2f1dfd26ce42,Namespace:kube-system,Attempt:0,}" Sep 12 17:05:53.842807 kubelet[2600]: E0912 17:05:53.842779 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:53.843378 containerd[1485]: time="2025-09-12T17:05:53.843345238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pjqbb,Uid:09637d68-1a17-4002-8a5f-78efaeb9307a,Namespace:kube-system,Attempt:0,}" Sep 12 17:05:53.882185 containerd[1485]: time="2025-09-12T17:05:53.882072790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:05:53.882185 containerd[1485]: time="2025-09-12T17:05:53.882158252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:05:53.882185 containerd[1485]: time="2025-09-12T17:05:53.882176436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:53.882438 containerd[1485]: time="2025-09-12T17:05:53.882332463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:53.886053 containerd[1485]: time="2025-09-12T17:05:53.884585602Z" level=info msg="StartContainer for \"319e757b9dafd085862705b78903c92de51752c7562f125ee1e7d5632fa78309\" returns successfully" Sep 12 17:05:53.894699 containerd[1485]: time="2025-09-12T17:05:53.892833306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:05:53.894699 containerd[1485]: time="2025-09-12T17:05:53.892898780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:05:53.894699 containerd[1485]: time="2025-09-12T17:05:53.892909491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:53.895197 containerd[1485]: time="2025-09-12T17:05:53.895070876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:05:53.908241 systemd[1]: Started cri-containerd-2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673.scope - libcontainer container 2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673. Sep 12 17:05:53.911474 systemd[1]: Started cri-containerd-c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6.scope - libcontainer container c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6. 
Sep 12 17:05:53.950706 containerd[1485]: time="2025-09-12T17:05:53.950658157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pjqbb,Uid:09637d68-1a17-4002-8a5f-78efaeb9307a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\"" Sep 12 17:05:53.952727 kubelet[2600]: E0912 17:05:53.952681 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:53.957526 containerd[1485]: time="2025-09-12T17:05:53.957235820Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:05:53.958826 containerd[1485]: time="2025-09-12T17:05:53.958457931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kx587,Uid:379ac0bb-c8e5-4521-8c60-2f1dfd26ce42,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673\"" Sep 12 17:05:53.961137 kubelet[2600]: E0912 17:05:53.961112 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:54.682091 kubelet[2600]: E0912 17:05:54.682038 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:54.690631 kubelet[2600]: I0912 17:05:54.690576 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-msj7h" podStartSLOduration=2.69055806 podStartE2EDuration="2.69055806s" podCreationTimestamp="2025-09-12 17:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 
17:05:54.69048444 +0000 UTC m=+8.122092308" watchObservedRunningTime="2025-09-12 17:05:54.69055806 +0000 UTC m=+8.122165918" Sep 12 17:05:54.795126 kubelet[2600]: E0912 17:05:54.795080 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:55.684556 kubelet[2600]: E0912 17:05:55.684514 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:05:57.288768 update_engine[1472]: I20250912 17:05:57.288672 1472 update_attempter.cc:509] Updating boot flags... Sep 12 17:05:57.490105 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2977) Sep 12 17:05:57.547036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2978) Sep 12 17:05:57.595041 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2978) Sep 12 17:05:59.052898 kubelet[2600]: E0912 17:05:59.052840 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:00.916051 kubelet[2600]: E0912 17:06:00.915988 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:02.945722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172816623.mount: Deactivated successfully. 
Sep 12 17:06:11.412424 containerd[1485]: time="2025-09-12T17:06:11.412353464Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:11.413389 containerd[1485]: time="2025-09-12T17:06:11.413358528Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:06:11.415093 containerd[1485]: time="2025-09-12T17:06:11.415039302Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:06:11.416796 containerd[1485]: time="2025-09-12T17:06:11.416759872Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.459470521s" Sep 12 17:06:11.416870 containerd[1485]: time="2025-09-12T17:06:11.416797293Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:06:11.417862 containerd[1485]: time="2025-09-12T17:06:11.417833334Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:06:11.419017 containerd[1485]: time="2025-09-12T17:06:11.418973801Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:06:11.434841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300759202.mount: Deactivated successfully.
Sep 12 17:06:11.436357 containerd[1485]: time="2025-09-12T17:06:11.434894771Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\""
Sep 12 17:06:11.437600 containerd[1485]: time="2025-09-12T17:06:11.437546485Z" level=info msg="StartContainer for \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\""
Sep 12 17:06:11.474275 systemd[1]: Started cri-containerd-e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b.scope - libcontainer container e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b.
Sep 12 17:06:11.507906 containerd[1485]: time="2025-09-12T17:06:11.507840594Z" level=info msg="StartContainer for \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\" returns successfully"
Sep 12 17:06:11.523051 systemd[1]: cri-containerd-e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b.scope: Deactivated successfully.
Sep 12 17:06:11.713214 kubelet[2600]: E0912 17:06:11.713057 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:12.025405 containerd[1485]: time="2025-09-12T17:06:12.025209283Z" level=info msg="shim disconnected" id=e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b namespace=k8s.io
Sep 12 17:06:12.025405 containerd[1485]: time="2025-09-12T17:06:12.025278945Z" level=warning msg="cleaning up after shim disconnected" id=e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b namespace=k8s.io
Sep 12 17:06:12.025405 containerd[1485]: time="2025-09-12T17:06:12.025290597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:06:12.431058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b-rootfs.mount: Deactivated successfully.
Sep 12 17:06:12.716339 kubelet[2600]: E0912 17:06:12.716188 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:12.718630 containerd[1485]: time="2025-09-12T17:06:12.718585329Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:06:13.234727 containerd[1485]: time="2025-09-12T17:06:13.234671387Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\""
Sep 12 17:06:13.235506 containerd[1485]: time="2025-09-12T17:06:13.235456245Z" level=info msg="StartContainer for \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\""
Sep 12 17:06:13.273273 systemd[1]: Started cri-containerd-7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6.scope - libcontainer container 7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6.
Sep 12 17:06:13.304995 containerd[1485]: time="2025-09-12T17:06:13.304942280Z" level=info msg="StartContainer for \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\" returns successfully"
Sep 12 17:06:13.319511 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:06:13.319848 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:06:13.320294 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:06:13.329398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:06:13.329728 systemd[1]: cri-containerd-7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6.scope: Deactivated successfully.
Sep 12 17:06:13.345118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:06:13.360795 containerd[1485]: time="2025-09-12T17:06:13.360723851Z" level=info msg="shim disconnected" id=7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6 namespace=k8s.io
Sep 12 17:06:13.360974 containerd[1485]: time="2025-09-12T17:06:13.360799443Z" level=warning msg="cleaning up after shim disconnected" id=7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6 namespace=k8s.io
Sep 12 17:06:13.360974 containerd[1485]: time="2025-09-12T17:06:13.360810553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:06:13.431389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6-rootfs.mount: Deactivated successfully.
Sep 12 17:06:13.643410 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:54344.service - OpenSSH per-connection server daemon (10.0.0.1:54344).
Sep 12 17:06:13.719128 kubelet[2600]: E0912 17:06:13.719093 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:13.720727 containerd[1485]: time="2025-09-12T17:06:13.720691454Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:06:13.873742 sshd[3143]: Accepted publickey for core from 10.0.0.1 port 54344 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:13.875445 sshd-session[3143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:13.880589 systemd-logind[1468]: New session 8 of user core.
Sep 12 17:06:13.886180 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:06:13.912717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110561821.mount: Deactivated successfully.
Sep 12 17:06:13.930233 containerd[1485]: time="2025-09-12T17:06:13.930177563Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\""
Sep 12 17:06:13.932040 containerd[1485]: time="2025-09-12T17:06:13.930830201Z" level=info msg="StartContainer for \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\""
Sep 12 17:06:13.968230 systemd[1]: Started cri-containerd-62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209.scope - libcontainer container 62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209.
Sep 12 17:06:14.017602 systemd[1]: cri-containerd-62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209.scope: Deactivated successfully.
Sep 12 17:06:14.026291 containerd[1485]: time="2025-09-12T17:06:14.026241342Z" level=info msg="StartContainer for \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\" returns successfully"
Sep 12 17:06:14.062472 sshd[3145]: Connection closed by 10.0.0.1 port 54344
Sep 12 17:06:14.061029 sshd-session[3143]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:14.066586 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:54344.service: Deactivated successfully.
Sep 12 17:06:14.069250 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:06:14.070511 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:06:14.074971 systemd-logind[1468]: Removed session 8.
Sep 12 17:06:14.121514 containerd[1485]: time="2025-09-12T17:06:14.121433382Z" level=info msg="shim disconnected" id=62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209 namespace=k8s.io
Sep 12 17:06:14.122037 containerd[1485]: time="2025-09-12T17:06:14.121793119Z" level=warning msg="cleaning up after shim disconnected" id=62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209 namespace=k8s.io
Sep 12 17:06:14.122037 containerd[1485]: time="2025-09-12T17:06:14.121816122Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:06:14.288225 containerd[1485]: time="2025-09-12T17:06:14.288072308Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:06:14.289101 containerd[1485]: time="2025-09-12T17:06:14.289062642Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 12 17:06:14.290102 containerd[1485]: time="2025-09-12T17:06:14.290067102Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:06:14.291752 containerd[1485]: time="2025-09-12T17:06:14.291708571Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.873843768s"
Sep 12 17:06:14.291752 containerd[1485]: time="2025-09-12T17:06:14.291741863Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 12 17:06:14.296227 containerd[1485]: time="2025-09-12T17:06:14.296155539Z" level=info msg="CreateContainer within sandbox \"2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 12 17:06:14.311223 containerd[1485]: time="2025-09-12T17:06:14.311177707Z" level=info msg="CreateContainer within sandbox \"2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\""
Sep 12 17:06:14.311594 containerd[1485]: time="2025-09-12T17:06:14.311561270Z" level=info msg="StartContainer for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\""
Sep 12 17:06:14.348155 systemd[1]: Started cri-containerd-9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b.scope - libcontainer container 9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b.
Sep 12 17:06:14.380223 containerd[1485]: time="2025-09-12T17:06:14.380146411Z" level=info msg="StartContainer for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" returns successfully"
Sep 12 17:06:14.723143 kubelet[2600]: E0912 17:06:14.723101 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:14.726592 containerd[1485]: time="2025-09-12T17:06:14.726534575Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:06:14.729465 kubelet[2600]: E0912 17:06:14.729430 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:15.288528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612426674.mount: Deactivated successfully.
Sep 12 17:06:15.643933 containerd[1485]: time="2025-09-12T17:06:15.643873044Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\""
Sep 12 17:06:15.644481 containerd[1485]: time="2025-09-12T17:06:15.644390748Z" level=info msg="StartContainer for \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\""
Sep 12 17:06:15.677292 systemd[1]: Started cri-containerd-a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213.scope - libcontainer container a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213.
Sep 12 17:06:15.716316 systemd[1]: cri-containerd-a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213.scope: Deactivated successfully.
Sep 12 17:06:15.720220 containerd[1485]: time="2025-09-12T17:06:15.720152387Z" level=info msg="StartContainer for \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\" returns successfully"
Sep 12 17:06:15.739814 kubelet[2600]: E0912 17:06:15.739770 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:15.741787 kubelet[2600]: E0912 17:06:15.741476 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:15.748288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213-rootfs.mount: Deactivated successfully.
Sep 12 17:06:15.773951 kubelet[2600]: I0912 17:06:15.773581 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kx587" podStartSLOduration=2.443011891 podStartE2EDuration="22.77355661s" podCreationTimestamp="2025-09-12 17:05:53 +0000 UTC" firstStartedPulling="2025-09-12 17:05:53.961894668 +0000 UTC m=+7.393502536" lastFinishedPulling="2025-09-12 17:06:14.292439387 +0000 UTC m=+27.724047255" observedRunningTime="2025-09-12 17:06:15.349294797 +0000 UTC m=+28.780902665" watchObservedRunningTime="2025-09-12 17:06:15.77355661 +0000 UTC m=+29.205164478"
Sep 12 17:06:15.979256 containerd[1485]: time="2025-09-12T17:06:15.979033394Z" level=info msg="shim disconnected" id=a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213 namespace=k8s.io
Sep 12 17:06:15.979256 containerd[1485]: time="2025-09-12T17:06:15.979119054Z" level=warning msg="cleaning up after shim disconnected" id=a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213 namespace=k8s.io
Sep 12 17:06:15.979256 containerd[1485]: time="2025-09-12T17:06:15.979131077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:06:16.751923 kubelet[2600]: E0912 17:06:16.751874 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:16.757511 containerd[1485]: time="2025-09-12T17:06:16.757446738Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:06:16.786772 containerd[1485]: time="2025-09-12T17:06:16.786706617Z" level=info msg="CreateContainer within sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\""
Sep 12 17:06:16.787388 containerd[1485]: time="2025-09-12T17:06:16.787330019Z" level=info msg="StartContainer for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\""
Sep 12 17:06:16.828219 systemd[1]: Started cri-containerd-4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2.scope - libcontainer container 4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2.
Sep 12 17:06:16.886904 containerd[1485]: time="2025-09-12T17:06:16.886805847Z" level=info msg="StartContainer for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" returns successfully"
Sep 12 17:06:17.113485 kubelet[2600]: I0912 17:06:17.113101 2600 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 12 17:06:17.252898 systemd[1]: Created slice kubepods-burstable-pod9c690f27_6019_487f_8034_8db7e44aec51.slice - libcontainer container kubepods-burstable-pod9c690f27_6019_487f_8034_8db7e44aec51.slice.
Sep 12 17:06:17.258691 systemd[1]: Created slice kubepods-burstable-podf943fdc9_4273_4e27_a107_120501f61b20.slice - libcontainer container kubepods-burstable-podf943fdc9_4273_4e27_a107_120501f61b20.slice.
Sep 12 17:06:17.326525 kubelet[2600]: I0912 17:06:17.326438 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f943fdc9-4273-4e27-a107-120501f61b20-config-volume\") pod \"coredns-7c65d6cfc9-prck8\" (UID: \"f943fdc9-4273-4e27-a107-120501f61b20\") " pod="kube-system/coredns-7c65d6cfc9-prck8"
Sep 12 17:06:17.326525 kubelet[2600]: I0912 17:06:17.326521 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cwp8\" (UniqueName: \"kubernetes.io/projected/9c690f27-6019-487f-8034-8db7e44aec51-kube-api-access-7cwp8\") pod \"coredns-7c65d6cfc9-cs568\" (UID: \"9c690f27-6019-487f-8034-8db7e44aec51\") " pod="kube-system/coredns-7c65d6cfc9-cs568"
Sep 12 17:06:17.326525 kubelet[2600]: I0912 17:06:17.326547 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c690f27-6019-487f-8034-8db7e44aec51-config-volume\") pod \"coredns-7c65d6cfc9-cs568\" (UID: \"9c690f27-6019-487f-8034-8db7e44aec51\") " pod="kube-system/coredns-7c65d6cfc9-cs568"
Sep 12 17:06:17.326762 kubelet[2600]: I0912 17:06:17.326622 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5564b\" (UniqueName: \"kubernetes.io/projected/f943fdc9-4273-4e27-a107-120501f61b20-kube-api-access-5564b\") pod \"coredns-7c65d6cfc9-prck8\" (UID: \"f943fdc9-4273-4e27-a107-120501f61b20\") " pod="kube-system/coredns-7c65d6cfc9-prck8"
Sep 12 17:06:17.557109 kubelet[2600]: E0912 17:06:17.556920 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:17.558366 containerd[1485]: time="2025-09-12T17:06:17.558290441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cs568,Uid:9c690f27-6019-487f-8034-8db7e44aec51,Namespace:kube-system,Attempt:0,}"
Sep 12 17:06:17.561819 kubelet[2600]: E0912 17:06:17.561583 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:17.561987 containerd[1485]: time="2025-09-12T17:06:17.561957868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-prck8,Uid:f943fdc9-4273-4e27-a107-120501f61b20,Namespace:kube-system,Attempt:0,}"
Sep 12 17:06:17.762883 kubelet[2600]: E0912 17:06:17.761539 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:18.763463 kubelet[2600]: E0912 17:06:18.763413 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:19.073300 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:54348.service - OpenSSH per-connection server daemon (10.0.0.1:54348).
Sep 12 17:06:19.123212 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 54348 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:19.125071 sshd-session[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:19.129625 systemd-logind[1468]: New session 9 of user core.
Sep 12 17:06:19.134249 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:06:19.254285 sshd[3476]: Connection closed by 10.0.0.1 port 54348
Sep 12 17:06:19.254692 sshd-session[3474]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:19.258099 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:54348.service: Deactivated successfully.
Sep 12 17:06:19.260236 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:06:19.261800 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:06:19.262785 systemd-logind[1468]: Removed session 9.
Sep 12 17:06:19.524063 systemd-networkd[1417]: cilium_host: Link UP
Sep 12 17:06:19.524303 systemd-networkd[1417]: cilium_net: Link UP
Sep 12 17:06:19.524309 systemd-networkd[1417]: cilium_net: Gained carrier
Sep 12 17:06:19.524637 systemd-networkd[1417]: cilium_host: Gained carrier
Sep 12 17:06:19.637365 systemd-networkd[1417]: cilium_vxlan: Link UP
Sep 12 17:06:19.637526 systemd-networkd[1417]: cilium_vxlan: Gained carrier
Sep 12 17:06:19.765748 kubelet[2600]: E0912 17:06:19.765697 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:19.856050 kernel: NET: Registered PF_ALG protocol family
Sep 12 17:06:19.910253 systemd-networkd[1417]: cilium_net: Gained IPv6LL
Sep 12 17:06:20.375202 systemd-networkd[1417]: cilium_host: Gained IPv6LL
Sep 12 17:06:20.686982 systemd-networkd[1417]: lxc_health: Link UP
Sep 12 17:06:20.687360 systemd-networkd[1417]: lxc_health: Gained carrier
Sep 12 17:06:21.078265 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL
Sep 12 17:06:21.219151 systemd-networkd[1417]: lxc039154807fdf: Link UP
Sep 12 17:06:21.236164 kernel: eth0: renamed from tmp79a00
Sep 12 17:06:21.247085 kernel: eth0: renamed from tmpc7dc5
Sep 12 17:06:21.253181 systemd-networkd[1417]: lxcf56d5c357e40: Link UP
Sep 12 17:06:21.254340 systemd-networkd[1417]: lxcf56d5c357e40: Gained carrier
Sep 12 17:06:21.254612 systemd-networkd[1417]: lxc039154807fdf: Gained carrier
Sep 12 17:06:21.844424 kubelet[2600]: E0912 17:06:21.844379 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:21.860643 kubelet[2600]: I0912 17:06:21.860583 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pjqbb" podStartSLOduration=11.397747564 podStartE2EDuration="28.860564225s" podCreationTimestamp="2025-09-12 17:05:53 +0000 UTC" firstStartedPulling="2025-09-12 17:05:53.954828648 +0000 UTC m=+7.386436516" lastFinishedPulling="2025-09-12 17:06:11.417645309 +0000 UTC m=+24.849253177" observedRunningTime="2025-09-12 17:06:17.780904262 +0000 UTC m=+31.212512130" watchObservedRunningTime="2025-09-12 17:06:21.860564225 +0000 UTC m=+35.292172093"
Sep 12 17:06:22.742223 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Sep 12 17:06:22.742589 systemd-networkd[1417]: lxc039154807fdf: Gained IPv6LL
Sep 12 17:06:22.772126 kubelet[2600]: E0912 17:06:22.772088 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:22.934315 systemd-networkd[1417]: lxcf56d5c357e40: Gained IPv6LL
Sep 12 17:06:23.773236 kubelet[2600]: E0912 17:06:23.773186 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:06:24.271552 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:50172.service - OpenSSH per-connection server daemon (10.0.0.1:50172).
Sep 12 17:06:24.333448 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 50172 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:24.335144 sshd-session[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:24.340422 systemd-logind[1468]: New session 10 of user core.
Sep 12 17:06:24.347158 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:06:24.479355 sshd[3874]: Connection closed by 10.0.0.1 port 50172
Sep 12 17:06:24.481245 sshd-session[3872]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:24.485108 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:06:24.485710 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:50172.service: Deactivated successfully.
Sep 12 17:06:24.490400 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:06:24.493812 systemd-logind[1468]: Removed session 10.
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686218578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686301535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686317665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686410028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686646433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686710693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:06:24.686715 containerd[1485]: time="2025-09-12T17:06:24.686728206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:06:24.687382 containerd[1485]: time="2025-09-12T17:06:24.686815279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:06:24.706762 systemd[1]: run-containerd-runc-k8s.io-79a009854a6aeb3d6b1d2d6885a57f439725aa314f0fd9f5a9fe3c9530cc6dab-runc.uo8Pot.mount: Deactivated successfully.
Sep 12 17:06:24.721163 systemd[1]: Started cri-containerd-79a009854a6aeb3d6b1d2d6885a57f439725aa314f0fd9f5a9fe3c9530cc6dab.scope - libcontainer container 79a009854a6aeb3d6b1d2d6885a57f439725aa314f0fd9f5a9fe3c9530cc6dab.
Sep 12 17:06:24.722852 systemd[1]: Started cri-containerd-c7dc536830010fa88f36a7846c630cea532e1ee8fac906783327115272290450.scope - libcontainer container c7dc536830010fa88f36a7846c630cea532e1ee8fac906783327115272290450.
Sep 12 17:06:24.734440 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:06:24.736555 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:06:24.761352 containerd[1485]: time="2025-09-12T17:06:24.761203106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cs568,Uid:9c690f27-6019-487f-8034-8db7e44aec51,Namespace:kube-system,Attempt:0,} returns sandbox id \"79a009854a6aeb3d6b1d2d6885a57f439725aa314f0fd9f5a9fe3c9530cc6dab\"" Sep 12 17:06:24.762089 kubelet[2600]: E0912 17:06:24.761865 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:24.764757 containerd[1485]: time="2025-09-12T17:06:24.764548623Z" level=info msg="CreateContainer within sandbox \"79a009854a6aeb3d6b1d2d6885a57f439725aa314f0fd9f5a9fe3c9530cc6dab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:06:24.766702 containerd[1485]: time="2025-09-12T17:06:24.766645442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-prck8,Uid:f943fdc9-4273-4e27-a107-120501f61b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7dc536830010fa88f36a7846c630cea532e1ee8fac906783327115272290450\"" Sep 12 17:06:24.767520 kubelet[2600]: E0912 17:06:24.767298 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:24.768985 containerd[1485]: time="2025-09-12T17:06:24.768951715Z" level=info msg="CreateContainer within sandbox \"c7dc536830010fa88f36a7846c630cea532e1ee8fac906783327115272290450\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:06:25.043945 containerd[1485]: 
time="2025-09-12T17:06:25.043331427Z" level=info msg="CreateContainer within sandbox \"79a009854a6aeb3d6b1d2d6885a57f439725aa314f0fd9f5a9fe3c9530cc6dab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a934dafab607af69ca0285598891d10e9de59b599609d52f78eb053c8043514\"" Sep 12 17:06:25.044092 containerd[1485]: time="2025-09-12T17:06:25.044034638Z" level=info msg="StartContainer for \"9a934dafab607af69ca0285598891d10e9de59b599609d52f78eb053c8043514\"" Sep 12 17:06:25.048423 containerd[1485]: time="2025-09-12T17:06:25.048376044Z" level=info msg="CreateContainer within sandbox \"c7dc536830010fa88f36a7846c630cea532e1ee8fac906783327115272290450\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf86b5418bc64614fe8b6ebb5eb65e3ee0c7570b6e3138b22be91ab6074369bb\"" Sep 12 17:06:25.048940 containerd[1485]: time="2025-09-12T17:06:25.048899498Z" level=info msg="StartContainer for \"bf86b5418bc64614fe8b6ebb5eb65e3ee0c7570b6e3138b22be91ab6074369bb\"" Sep 12 17:06:25.075192 systemd[1]: Started cri-containerd-9a934dafab607af69ca0285598891d10e9de59b599609d52f78eb053c8043514.scope - libcontainer container 9a934dafab607af69ca0285598891d10e9de59b599609d52f78eb053c8043514. Sep 12 17:06:25.078723 systemd[1]: Started cri-containerd-bf86b5418bc64614fe8b6ebb5eb65e3ee0c7570b6e3138b22be91ab6074369bb.scope - libcontainer container bf86b5418bc64614fe8b6ebb5eb65e3ee0c7570b6e3138b22be91ab6074369bb. Sep 12 17:06:25.178349 containerd[1485]: time="2025-09-12T17:06:25.178263964Z" level=info msg="StartContainer for \"bf86b5418bc64614fe8b6ebb5eb65e3ee0c7570b6e3138b22be91ab6074369bb\" returns successfully" Sep 12 17:06:25.178349 containerd[1485]: time="2025-09-12T17:06:25.178323716Z" level=info msg="StartContainer for \"9a934dafab607af69ca0285598891d10e9de59b599609d52f78eb053c8043514\" returns successfully" Sep 12 17:06:25.693293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778273098.mount: Deactivated successfully. 
Sep 12 17:06:25.783307 kubelet[2600]: E0912 17:06:25.783255 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:25.785649 kubelet[2600]: E0912 17:06:25.785490 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:25.794175 kubelet[2600]: I0912 17:06:25.793999 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-prck8" podStartSLOduration=32.793974586 podStartE2EDuration="32.793974586s" podCreationTimestamp="2025-09-12 17:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:06:25.793840142 +0000 UTC m=+39.225448010" watchObservedRunningTime="2025-09-12 17:06:25.793974586 +0000 UTC m=+39.225582454" Sep 12 17:06:25.812108 kubelet[2600]: I0912 17:06:25.804481 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cs568" podStartSLOduration=32.804453342 podStartE2EDuration="32.804453342s" podCreationTimestamp="2025-09-12 17:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:06:25.803652096 +0000 UTC m=+39.235259964" watchObservedRunningTime="2025-09-12 17:06:25.804453342 +0000 UTC m=+39.236061220" Sep 12 17:06:26.786610 kubelet[2600]: E0912 17:06:26.786573 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:26.786610 kubelet[2600]: E0912 17:06:26.786624 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:27.788870 kubelet[2600]: E0912 17:06:27.788831 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:27.789459 kubelet[2600]: E0912 17:06:27.788981 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:06:29.497550 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:50184.service - OpenSSH per-connection server daemon (10.0.0.1:50184). Sep 12 17:06:29.545897 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 50184 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:06:29.547730 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:06:29.552722 systemd-logind[1468]: New session 11 of user core. Sep 12 17:06:29.568187 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:06:29.690208 sshd[4069]: Connection closed by 10.0.0.1 port 50184 Sep 12 17:06:29.690671 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Sep 12 17:06:29.695503 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:50184.service: Deactivated successfully. Sep 12 17:06:29.697720 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:06:29.698531 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:06:29.699610 systemd-logind[1468]: Removed session 11. Sep 12 17:06:34.703753 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:55704.service - OpenSSH per-connection server daemon (10.0.0.1:55704). 
Sep 12 17:06:34.747179 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 55704 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:34.748858 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:34.753509 systemd-logind[1468]: New session 12 of user core.
Sep 12 17:06:34.765166 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:06:34.882878 sshd[4086]: Connection closed by 10.0.0.1 port 55704
Sep 12 17:06:34.883397 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:34.900951 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:55704.service: Deactivated successfully.
Sep 12 17:06:34.903228 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:06:34.904964 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:06:34.910266 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:55712.service - OpenSSH per-connection server daemon (10.0.0.1:55712).
Sep 12 17:06:34.911187 systemd-logind[1468]: Removed session 12.
Sep 12 17:06:34.952082 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 55712 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:34.953674 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:34.958033 systemd-logind[1468]: New session 13 of user core.
Sep 12 17:06:34.969212 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:06:35.124261 sshd[4103]: Connection closed by 10.0.0.1 port 55712
Sep 12 17:06:35.126462 sshd-session[4100]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:35.138797 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:55712.service: Deactivated successfully.
Sep 12 17:06:35.143460 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:06:35.144605 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:06:35.151361 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:55718.service - OpenSSH per-connection server daemon (10.0.0.1:55718).
Sep 12 17:06:35.152298 systemd-logind[1468]: Removed session 13.
Sep 12 17:06:35.194460 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 55718 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:35.195951 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:35.200870 systemd-logind[1468]: New session 14 of user core.
Sep 12 17:06:35.213151 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:06:35.328440 sshd[4117]: Connection closed by 10.0.0.1 port 55718
Sep 12 17:06:35.328881 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:35.333872 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:55718.service: Deactivated successfully.
Sep 12 17:06:35.336267 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:06:35.336940 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:06:35.337842 systemd-logind[1468]: Removed session 14.
Sep 12 17:06:40.341528 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:56752.service - OpenSSH per-connection server daemon (10.0.0.1:56752).
Sep 12 17:06:40.384795 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 56752 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:40.386481 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:40.391264 systemd-logind[1468]: New session 15 of user core.
Sep 12 17:06:40.404221 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:06:40.522270 sshd[4132]: Connection closed by 10.0.0.1 port 56752
Sep 12 17:06:40.522665 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:40.527365 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:56752.service: Deactivated successfully.
Sep 12 17:06:40.529733 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:06:40.530500 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:06:40.531432 systemd-logind[1468]: Removed session 15.
Sep 12 17:06:45.538603 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:56756.service - OpenSSH per-connection server daemon (10.0.0.1:56756).
Sep 12 17:06:45.593923 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 56756 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:45.595563 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:45.600324 systemd-logind[1468]: New session 16 of user core.
Sep 12 17:06:45.615231 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:06:45.781462 sshd[4147]: Connection closed by 10.0.0.1 port 56756
Sep 12 17:06:45.781875 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:45.785913 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:56756.service: Deactivated successfully.
Sep 12 17:06:45.788167 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:06:45.788885 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:06:45.790100 systemd-logind[1468]: Removed session 16.
Sep 12 17:06:50.794037 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:44964.service - OpenSSH per-connection server daemon (10.0.0.1:44964).
Sep 12 17:06:50.836730 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 44964 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:50.838214 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:50.842566 systemd-logind[1468]: New session 17 of user core.
Sep 12 17:06:50.852245 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:06:50.961410 sshd[4165]: Connection closed by 10.0.0.1 port 44964
Sep 12 17:06:50.961844 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:50.972812 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:44964.service: Deactivated successfully.
Sep 12 17:06:50.975425 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:06:50.977257 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:06:50.982254 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:44972.service - OpenSSH per-connection server daemon (10.0.0.1:44972).
Sep 12 17:06:50.983330 systemd-logind[1468]: Removed session 17.
Sep 12 17:06:51.023082 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 44972 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:51.024588 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:51.028987 systemd-logind[1468]: New session 18 of user core.
Sep 12 17:06:51.039130 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:06:51.693099 sshd[4180]: Connection closed by 10.0.0.1 port 44972
Sep 12 17:06:51.693732 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:51.709955 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:44972.service: Deactivated successfully.
Sep 12 17:06:51.711926 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:06:51.713741 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:06:51.722347 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:44974.service - OpenSSH per-connection server daemon (10.0.0.1:44974).
Sep 12 17:06:51.723741 systemd-logind[1468]: Removed session 18.
Sep 12 17:06:51.765456 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 44974 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:51.766989 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:51.772301 systemd-logind[1468]: New session 19 of user core.
Sep 12 17:06:51.786215 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:06:53.451966 sshd[4193]: Connection closed by 10.0.0.1 port 44974
Sep 12 17:06:53.455344 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:53.462859 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:44974.service: Deactivated successfully.
Sep 12 17:06:53.465121 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:06:53.472226 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:06:53.484568 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:44986.service - OpenSSH per-connection server daemon (10.0.0.1:44986).
Sep 12 17:06:53.485885 systemd-logind[1468]: Removed session 19.
Sep 12 17:06:53.524515 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 44986 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:53.528124 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:53.533364 systemd-logind[1468]: New session 20 of user core.
Sep 12 17:06:53.553189 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:06:53.829744 sshd[4230]: Connection closed by 10.0.0.1 port 44986
Sep 12 17:06:53.831851 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:53.846307 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:44990.service - OpenSSH per-connection server daemon (10.0.0.1:44990).
Sep 12 17:06:53.846925 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:44986.service: Deactivated successfully.
Sep 12 17:06:53.849974 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:06:53.852084 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:06:53.853539 systemd-logind[1468]: Removed session 20.
Sep 12 17:06:53.885514 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 44990 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:53.888435 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:53.894133 systemd-logind[1468]: New session 21 of user core.
Sep 12 17:06:53.901174 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:06:54.012640 sshd[4244]: Connection closed by 10.0.0.1 port 44990
Sep 12 17:06:54.013056 sshd-session[4239]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:54.017583 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:44990.service: Deactivated successfully.
Sep 12 17:06:54.019973 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:06:54.021036 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:06:54.021970 systemd-logind[1468]: Removed session 21.
Sep 12 17:06:59.026352 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:45000.service - OpenSSH per-connection server daemon (10.0.0.1:45000).
Sep 12 17:06:59.070460 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 45000 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:06:59.072159 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:06:59.077089 systemd-logind[1468]: New session 22 of user core.
Sep 12 17:06:59.086155 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:06:59.204657 sshd[4262]: Connection closed by 10.0.0.1 port 45000
Sep 12 17:06:59.205204 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Sep 12 17:06:59.210277 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:45000.service: Deactivated successfully.
Sep 12 17:06:59.212396 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:06:59.213281 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:06:59.214534 systemd-logind[1468]: Removed session 22.
Sep 12 17:07:04.218179 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:40528.service - OpenSSH per-connection server daemon (10.0.0.1:40528).
Sep 12 17:07:04.261206 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 40528 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:07:04.262620 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:07:04.267205 systemd-logind[1468]: New session 23 of user core.
Sep 12 17:07:04.283191 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:07:04.395066 sshd[4281]: Connection closed by 10.0.0.1 port 40528
Sep 12 17:07:04.395427 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
Sep 12 17:07:04.399402 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:40528.service: Deactivated successfully.
Sep 12 17:07:04.401816 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:07:04.402700 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:07:04.403544 systemd-logind[1468]: Removed session 23.
Sep 12 17:07:09.413421 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:40534.service - OpenSSH per-connection server daemon (10.0.0.1:40534).
Sep 12 17:07:09.456535 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 40534 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:07:09.458126 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:07:09.462311 systemd-logind[1468]: New session 24 of user core.
Sep 12 17:07:09.474216 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:07:09.581890 sshd[4296]: Connection closed by 10.0.0.1 port 40534
Sep 12 17:07:09.582306 sshd-session[4294]: pam_unix(sshd:session): session closed for user core
Sep 12 17:07:09.586412 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:40534.service: Deactivated successfully.
Sep 12 17:07:09.588919 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:07:09.589877 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:07:09.591132 systemd-logind[1468]: Removed session 24.
Sep 12 17:07:11.656713 kubelet[2600]: E0912 17:07:11.656662 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:11.657245 kubelet[2600]: E0912 17:07:11.656738 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:14.595488 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:50238.service - OpenSSH per-connection server daemon (10.0.0.1:50238).
Sep 12 17:07:14.640363 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 50238 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:07:14.642313 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:07:14.647770 systemd-logind[1468]: New session 25 of user core.
Sep 12 17:07:14.653203 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:07:14.768356 sshd[4311]: Connection closed by 10.0.0.1 port 50238
Sep 12 17:07:14.768847 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Sep 12 17:07:14.780562 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:50238.service: Deactivated successfully.
Sep 12 17:07:14.782897 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:07:14.784817 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:07:14.791359 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:50242.service - OpenSSH per-connection server daemon (10.0.0.1:50242).
Sep 12 17:07:14.792524 systemd-logind[1468]: Removed session 25.
Sep 12 17:07:14.832376 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs
Sep 12 17:07:14.834119 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:07:14.839542 systemd-logind[1468]: New session 26 of user core.
Sep 12 17:07:14.849264 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:07:16.294555 containerd[1485]: time="2025-09-12T17:07:16.294373735Z" level=info msg="StopContainer for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" with timeout 30 (s)"
Sep 12 17:07:16.296296 containerd[1485]: time="2025-09-12T17:07:16.296130771Z" level=info msg="Stop container \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" with signal terminated"
Sep 12 17:07:16.310952 systemd[1]: cri-containerd-9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b.scope: Deactivated successfully.
Sep 12 17:07:16.335420 containerd[1485]: time="2025-09-12T17:07:16.335321931Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:07:16.338860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b-rootfs.mount: Deactivated successfully.
Sep 12 17:07:16.339708 containerd[1485]: time="2025-09-12T17:07:16.339675172Z" level=info msg="StopContainer for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" with timeout 2 (s)"
Sep 12 17:07:16.340227 containerd[1485]: time="2025-09-12T17:07:16.340207205Z" level=info msg="Stop container \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" with signal terminated"
Sep 12 17:07:16.348355 systemd-networkd[1417]: lxc_health: Link DOWN
Sep 12 17:07:16.348363 systemd-networkd[1417]: lxc_health: Lost carrier
Sep 12 17:07:16.358758 containerd[1485]: time="2025-09-12T17:07:16.358687546Z" level=info msg="shim disconnected" id=9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b namespace=k8s.io
Sep 12 17:07:16.358758 containerd[1485]: time="2025-09-12T17:07:16.358757480Z" level=warning msg="cleaning up after shim disconnected" id=9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b namespace=k8s.io
Sep 12 17:07:16.358904 containerd[1485]: time="2025-09-12T17:07:16.358767779Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:16.371300 systemd[1]: cri-containerd-4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2.scope: Deactivated successfully.
Sep 12 17:07:16.371980 systemd[1]: cri-containerd-4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2.scope: Consumed 7.225s CPU time, 123.7M memory peak, 572K read from disk, 13.3M written to disk.
Sep 12 17:07:16.383017 containerd[1485]: time="2025-09-12T17:07:16.382957723Z" level=info msg="StopContainer for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" returns successfully"
Sep 12 17:07:16.388105 containerd[1485]: time="2025-09-12T17:07:16.388058326Z" level=info msg="StopPodSandbox for \"2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673\""
Sep 12 17:07:16.403718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2-rootfs.mount: Deactivated successfully.
Sep 12 17:07:16.404756 containerd[1485]: time="2025-09-12T17:07:16.388307270Z" level=info msg="Container to stop \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:07:16.410425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673-shm.mount: Deactivated successfully.
Sep 12 17:07:16.411655 containerd[1485]: time="2025-09-12T17:07:16.411447365Z" level=info msg="shim disconnected" id=4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2 namespace=k8s.io
Sep 12 17:07:16.411655 containerd[1485]: time="2025-09-12T17:07:16.411640583Z" level=warning msg="cleaning up after shim disconnected" id=4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2 namespace=k8s.io
Sep 12 17:07:16.411655 containerd[1485]: time="2025-09-12T17:07:16.411652586Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:16.417333 systemd[1]: cri-containerd-2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673.scope: Deactivated successfully.
Sep 12 17:07:16.436397 containerd[1485]: time="2025-09-12T17:07:16.436335359Z" level=info msg="StopContainer for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" returns successfully"
Sep 12 17:07:16.436996 containerd[1485]: time="2025-09-12T17:07:16.436971200Z" level=info msg="StopPodSandbox for \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\""
Sep 12 17:07:16.437080 containerd[1485]: time="2025-09-12T17:07:16.437031134Z" level=info msg="Container to stop \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:07:16.437080 containerd[1485]: time="2025-09-12T17:07:16.437071050Z" level=info msg="Container to stop \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:07:16.437181 containerd[1485]: time="2025-09-12T17:07:16.437080939Z" level=info msg="Container to stop \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:07:16.437181 containerd[1485]: time="2025-09-12T17:07:16.437091068Z" level=info msg="Container to stop \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:07:16.437181 containerd[1485]: time="2025-09-12T17:07:16.437102941Z" level=info msg="Container to stop \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:07:16.441367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6-shm.mount: Deactivated successfully.
Sep 12 17:07:16.447151 systemd[1]: cri-containerd-c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6.scope: Deactivated successfully.
Sep 12 17:07:16.459856 containerd[1485]: time="2025-09-12T17:07:16.459779524Z" level=info msg="shim disconnected" id=2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673 namespace=k8s.io
Sep 12 17:07:16.459856 containerd[1485]: time="2025-09-12T17:07:16.459848905Z" level=warning msg="cleaning up after shim disconnected" id=2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673 namespace=k8s.io
Sep 12 17:07:16.459856 containerd[1485]: time="2025-09-12T17:07:16.459860287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:16.478670 containerd[1485]: time="2025-09-12T17:07:16.478607627Z" level=info msg="TearDown network for sandbox \"2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673\" successfully"
Sep 12 17:07:16.478670 containerd[1485]: time="2025-09-12T17:07:16.478652583Z" level=info msg="StopPodSandbox for \"2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673\" returns successfully"
Sep 12 17:07:16.480982 containerd[1485]: time="2025-09-12T17:07:16.480933947Z" level=info msg="shim disconnected" id=c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6 namespace=k8s.io
Sep 12 17:07:16.481345 containerd[1485]: time="2025-09-12T17:07:16.481154948Z" level=warning msg="cleaning up after shim disconnected" id=c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6 namespace=k8s.io
Sep 12 17:07:16.481345 containerd[1485]: time="2025-09-12T17:07:16.481168464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:16.501198 containerd[1485]: time="2025-09-12T17:07:16.501124796Z" level=info msg="TearDown network for sandbox \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" successfully"
Sep 12 17:07:16.501198 containerd[1485]: time="2025-09-12T17:07:16.501176464Z" level=info msg="StopPodSandbox for \"c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6\" returns successfully"
Sep 12 17:07:16.576967 kubelet[2600]: I0912 17:07:16.576805 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jppq\" (UniqueName: \"kubernetes.io/projected/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-kube-api-access-5jppq\") pod \"379ac0bb-c8e5-4521-8c60-2f1dfd26ce42\" (UID: \"379ac0bb-c8e5-4521-8c60-2f1dfd26ce42\") "
Sep 12 17:07:16.576967 kubelet[2600]: I0912 17:07:16.576872 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-cilium-config-path\") pod \"379ac0bb-c8e5-4521-8c60-2f1dfd26ce42\" (UID: \"379ac0bb-c8e5-4521-8c60-2f1dfd26ce42\") "
Sep 12 17:07:16.580857 kubelet[2600]: I0912 17:07:16.580791 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-kube-api-access-5jppq" (OuterVolumeSpecName: "kube-api-access-5jppq") pod "379ac0bb-c8e5-4521-8c60-2f1dfd26ce42" (UID: "379ac0bb-c8e5-4521-8c60-2f1dfd26ce42"). InnerVolumeSpecName "kube-api-access-5jppq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 12 17:07:16.581057 kubelet[2600]: I0912 17:07:16.581027 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "379ac0bb-c8e5-4521-8c60-2f1dfd26ce42" (UID: "379ac0bb-c8e5-4521-8c60-2f1dfd26ce42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 17:07:16.666354 systemd[1]: Removed slice kubepods-besteffort-pod379ac0bb_c8e5_4521_8c60_2f1dfd26ce42.slice - libcontainer container kubepods-besteffort-pod379ac0bb_c8e5_4521_8c60_2f1dfd26ce42.slice.
Sep 12 17:07:16.677387 kubelet[2600]: I0912 17:07:16.677351 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-kernel\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677387 kubelet[2600]: I0912 17:07:16.677390 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvbg8\" (UniqueName: \"kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-kube-api-access-jvbg8\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677497 kubelet[2600]: I0912 17:07:16.677405 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-lib-modules\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677497 kubelet[2600]: I0912 17:07:16.677421 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-cgroup\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677497 kubelet[2600]: I0912 17:07:16.677435 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-bpf-maps\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677497 kubelet[2600]: I0912 17:07:16.677452 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-config-path\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677497 kubelet[2600]: I0912 17:07:16.677467 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-hubble-tls\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677497 kubelet[2600]: I0912 17:07:16.677482 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09637d68-1a17-4002-8a5f-78efaeb9307a-clustermesh-secrets\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677644 kubelet[2600]: I0912 17:07:16.677479 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.677644 kubelet[2600]: I0912 17:07:16.677477 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.677644 kubelet[2600]: I0912 17:07:16.677503 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-net\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677644 kubelet[2600]: I0912 17:07:16.677517 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-xtables-lock\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677644 kubelet[2600]: I0912 17:07:16.677533 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-hostproc\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677549 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cni-path\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677561 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-run\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677577 2600 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-etc-cni-netd\") pod \"09637d68-1a17-4002-8a5f-78efaeb9307a\" (UID: \"09637d68-1a17-4002-8a5f-78efaeb9307a\") "
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677607 2600 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677621 2600 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jppq\" (UniqueName: \"kubernetes.io/projected/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-kube-api-access-5jppq\") on node \"localhost\" DevicePath \"\""
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677633 2600 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 17:07:16.677761 kubelet[2600]: I0912 17:07:16.677642 2600 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 12 17:07:16.677993 kubelet[2600]: I0912 17:07:16.677665 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681064 kubelet[2600]: I0912 17:07:16.681031 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 17:07:16.681148 kubelet[2600]: I0912 17:07:16.681070 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681148 kubelet[2600]: I0912 17:07:16.681066 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681148 kubelet[2600]: I0912 17:07:16.681098 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681148 kubelet[2600]: I0912 17:07:16.681122 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681148 kubelet[2600]: I0912 17:07:16.681139 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cni-path" (OuterVolumeSpecName: "cni-path") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681272 kubelet[2600]: I0912 17:07:16.681155 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-hostproc" (OuterVolumeSpecName: "hostproc") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 12 17:07:16.681272 kubelet[2600]: I0912 17:07:16.681170 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:07:16.684155 kubelet[2600]: I0912 17:07:16.684132 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09637d68-1a17-4002-8a5f-78efaeb9307a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:07:16.686589 kubelet[2600]: I0912 17:07:16.686560 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:07:16.690586 kubelet[2600]: I0912 17:07:16.690549 2600 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-kube-api-access-jvbg8" (OuterVolumeSpecName: "kube-api-access-jvbg8") pod "09637d68-1a17-4002-8a5f-78efaeb9307a" (UID: "09637d68-1a17-4002-8a5f-78efaeb9307a"). InnerVolumeSpecName "kube-api-access-jvbg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:07:16.709777 kubelet[2600]: E0912 17:07:16.709734 2600 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:07:16.778328 kubelet[2600]: I0912 17:07:16.778243 2600 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778328 kubelet[2600]: I0912 17:07:16.778298 2600 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778328 kubelet[2600]: I0912 17:07:16.778311 2600 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778328 kubelet[2600]: I0912 17:07:16.778325 2600 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778328 kubelet[2600]: I0912 17:07:16.778339 2600 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvbg8\" (UniqueName: \"kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-kube-api-access-jvbg8\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778354 2600 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778364 2600 reconciler_common.go:293] 
"Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778374 2600 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09637d68-1a17-4002-8a5f-78efaeb9307a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778384 2600 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09637d68-1a17-4002-8a5f-78efaeb9307a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778393 2600 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09637d68-1a17-4002-8a5f-78efaeb9307a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778402 2600 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.778629 kubelet[2600]: I0912 17:07:16.778412 2600 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09637d68-1a17-4002-8a5f-78efaeb9307a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:07:16.895709 kubelet[2600]: I0912 17:07:16.895670 2600 scope.go:117] "RemoveContainer" containerID="4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2" Sep 12 17:07:16.902044 containerd[1485]: time="2025-09-12T17:07:16.901643204Z" level=info msg="RemoveContainer for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\"" Sep 12 17:07:16.906304 systemd[1]: Removed slice 
kubepods-burstable-pod09637d68_1a17_4002_8a5f_78efaeb9307a.slice - libcontainer container kubepods-burstable-pod09637d68_1a17_4002_8a5f_78efaeb9307a.slice. Sep 12 17:07:16.906541 systemd[1]: kubepods-burstable-pod09637d68_1a17_4002_8a5f_78efaeb9307a.slice: Consumed 7.349s CPU time, 124M memory peak, 592K read from disk, 13.3M written to disk. Sep 12 17:07:16.910793 containerd[1485]: time="2025-09-12T17:07:16.910733244Z" level=info msg="RemoveContainer for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" returns successfully" Sep 12 17:07:16.911101 kubelet[2600]: I0912 17:07:16.911070 2600 scope.go:117] "RemoveContainer" containerID="a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213" Sep 12 17:07:16.912022 containerd[1485]: time="2025-09-12T17:07:16.911981872Z" level=info msg="RemoveContainer for \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\"" Sep 12 17:07:16.947067 containerd[1485]: time="2025-09-12T17:07:16.946524471Z" level=info msg="RemoveContainer for \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\" returns successfully" Sep 12 17:07:16.947235 kubelet[2600]: I0912 17:07:16.946795 2600 scope.go:117] "RemoveContainer" containerID="62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209" Sep 12 17:07:16.947815 containerd[1485]: time="2025-09-12T17:07:16.947793477Z" level=info msg="RemoveContainer for \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\"" Sep 12 17:07:16.952947 containerd[1485]: time="2025-09-12T17:07:16.952918848Z" level=info msg="RemoveContainer for \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\" returns successfully" Sep 12 17:07:16.953197 kubelet[2600]: I0912 17:07:16.953077 2600 scope.go:117] "RemoveContainer" containerID="7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6" Sep 12 17:07:16.954019 containerd[1485]: time="2025-09-12T17:07:16.953984106Z" level=info msg="RemoveContainer for 
\"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\"" Sep 12 17:07:16.958489 containerd[1485]: time="2025-09-12T17:07:16.958436955Z" level=info msg="RemoveContainer for \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\" returns successfully" Sep 12 17:07:16.958682 kubelet[2600]: I0912 17:07:16.958658 2600 scope.go:117] "RemoveContainer" containerID="e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b" Sep 12 17:07:16.959904 containerd[1485]: time="2025-09-12T17:07:16.959623394Z" level=info msg="RemoveContainer for \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\"" Sep 12 17:07:16.963258 containerd[1485]: time="2025-09-12T17:07:16.963225584Z" level=info msg="RemoveContainer for \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\" returns successfully" Sep 12 17:07:16.963415 kubelet[2600]: I0912 17:07:16.963393 2600 scope.go:117] "RemoveContainer" containerID="4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2" Sep 12 17:07:16.963742 containerd[1485]: time="2025-09-12T17:07:16.963680590Z" level=error msg="ContainerStatus for \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\": not found" Sep 12 17:07:16.963888 kubelet[2600]: E0912 17:07:16.963856 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\": not found" containerID="4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2" Sep 12 17:07:16.963992 kubelet[2600]: I0912 17:07:16.963896 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2"} err="failed to get 
container status \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a40c41ea7ef05716e14effea5a816511a30e2853a0777d521d2adf6675ecfd2\": not found" Sep 12 17:07:16.963992 kubelet[2600]: I0912 17:07:16.963984 2600 scope.go:117] "RemoveContainer" containerID="a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213" Sep 12 17:07:16.964181 containerd[1485]: time="2025-09-12T17:07:16.964152088Z" level=error msg="ContainerStatus for \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\": not found" Sep 12 17:07:16.964337 kubelet[2600]: E0912 17:07:16.964305 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\": not found" containerID="a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213" Sep 12 17:07:16.964405 kubelet[2600]: I0912 17:07:16.964340 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213"} err="failed to get container status \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1abb5c0a45e35117ed991414615751e8441b60d781c1566b3eb0c4160a37213\": not found" Sep 12 17:07:16.964405 kubelet[2600]: I0912 17:07:16.964358 2600 scope.go:117] "RemoveContainer" containerID="62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209" Sep 12 17:07:16.964518 containerd[1485]: time="2025-09-12T17:07:16.964489922Z" level=error msg="ContainerStatus for 
\"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\": not found" Sep 12 17:07:16.964602 kubelet[2600]: E0912 17:07:16.964584 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\": not found" containerID="62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209" Sep 12 17:07:16.964635 kubelet[2600]: I0912 17:07:16.964605 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209"} err="failed to get container status \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\": rpc error: code = NotFound desc = an error occurred when try to find container \"62b15801d150d66453cb1e49dab72d21b50f28773d04d5ee30528febc59d3209\": not found" Sep 12 17:07:16.964635 kubelet[2600]: I0912 17:07:16.964620 2600 scope.go:117] "RemoveContainer" containerID="7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6" Sep 12 17:07:16.964792 containerd[1485]: time="2025-09-12T17:07:16.964759905Z" level=error msg="ContainerStatus for \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\": not found" Sep 12 17:07:16.964915 kubelet[2600]: E0912 17:07:16.964901 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\": not found" 
containerID="7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6" Sep 12 17:07:16.964946 kubelet[2600]: I0912 17:07:16.964917 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6"} err="failed to get container status \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d9caea5aa6b0d83c2d3c27fd227553c4975c8a137d6a5180937c2b563e370b6\": not found" Sep 12 17:07:16.964946 kubelet[2600]: I0912 17:07:16.964938 2600 scope.go:117] "RemoveContainer" containerID="e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b" Sep 12 17:07:16.965128 containerd[1485]: time="2025-09-12T17:07:16.965084985Z" level=error msg="ContainerStatus for \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\": not found" Sep 12 17:07:16.965248 kubelet[2600]: E0912 17:07:16.965229 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\": not found" containerID="e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b" Sep 12 17:07:16.965280 kubelet[2600]: I0912 17:07:16.965246 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b"} err="failed to get container status \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e52eaa708fcc623b2cc805d79e0d26d5c1d740d6de07b18b5fb7d41b411c915b\": not found" Sep 12 
17:07:16.965280 kubelet[2600]: I0912 17:07:16.965258 2600 scope.go:117] "RemoveContainer" containerID="9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b" Sep 12 17:07:16.966024 containerd[1485]: time="2025-09-12T17:07:16.965989327Z" level=info msg="RemoveContainer for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\"" Sep 12 17:07:16.969217 containerd[1485]: time="2025-09-12T17:07:16.969175184Z" level=info msg="RemoveContainer for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" returns successfully" Sep 12 17:07:16.969376 kubelet[2600]: I0912 17:07:16.969315 2600 scope.go:117] "RemoveContainer" containerID="9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b" Sep 12 17:07:16.969486 containerd[1485]: time="2025-09-12T17:07:16.969446331Z" level=error msg="ContainerStatus for \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\": not found" Sep 12 17:07:16.969660 kubelet[2600]: E0912 17:07:16.969535 2600 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\": not found" containerID="9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b" Sep 12 17:07:16.969660 kubelet[2600]: I0912 17:07:16.969551 2600 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b"} err="failed to get container status \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9dedc25ee4b371f4ee66bb1f0aea6444021a856697964e1db1ccce309290891b\": not found" Sep 12 17:07:17.305995 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c96adff8b8fe0af1bbda103bbe98b0c88339c65445da8d40afb8d851a39664a6-rootfs.mount: Deactivated successfully. Sep 12 17:07:17.306156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b0897b9cbf60515778c5cbe279594492b3232f4ef25760ecf3ae7ff00799673-rootfs.mount: Deactivated successfully. Sep 12 17:07:17.306244 systemd[1]: var-lib-kubelet-pods-09637d68\x2d1a17\x2d4002\x2d8a5f\x2d78efaeb9307a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvbg8.mount: Deactivated successfully. Sep 12 17:07:17.306332 systemd[1]: var-lib-kubelet-pods-379ac0bb\x2dc8e5\x2d4521\x2d8c60\x2d2f1dfd26ce42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jppq.mount: Deactivated successfully. Sep 12 17:07:17.306425 systemd[1]: var-lib-kubelet-pods-09637d68\x2d1a17\x2d4002\x2d8a5f\x2d78efaeb9307a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:07:17.306508 systemd[1]: var-lib-kubelet-pods-09637d68\x2d1a17\x2d4002\x2d8a5f\x2d78efaeb9307a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:07:18.253360 sshd[4326]: Connection closed by 10.0.0.1 port 50242 Sep 12 17:07:18.253803 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:18.264717 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:50242.service: Deactivated successfully. Sep 12 17:07:18.266630 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:07:18.268076 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:07:18.285311 systemd[1]: Started sshd@26-10.0.0.33:22-10.0.0.1:50256.service - OpenSSH per-connection server daemon (10.0.0.1:50256). Sep 12 17:07:18.286371 systemd-logind[1468]: Removed session 26. 
Sep 12 17:07:18.324606 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 50256 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:18.326247 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:18.330920 systemd-logind[1468]: New session 27 of user core. Sep 12 17:07:18.339181 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:07:18.649204 kubelet[2600]: I0912 17:07:18.649137 2600 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:07:18Z","lastTransitionTime":"2025-09-12T17:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:07:18.658548 kubelet[2600]: I0912 17:07:18.658504 2600 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" path="/var/lib/kubelet/pods/09637d68-1a17-4002-8a5f-78efaeb9307a/volumes" Sep 12 17:07:18.659528 kubelet[2600]: I0912 17:07:18.659397 2600 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379ac0bb-c8e5-4521-8c60-2f1dfd26ce42" path="/var/lib/kubelet/pods/379ac0bb-c8e5-4521-8c60-2f1dfd26ce42/volumes" Sep 12 17:07:18.878471 sshd[4488]: Connection closed by 10.0.0.1 port 50256 Sep 12 17:07:18.880399 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:18.892510 systemd[1]: sshd@26-10.0.0.33:22-10.0.0.1:50256.service: Deactivated successfully. Sep 12 17:07:18.894682 systemd[1]: session-27.scope: Deactivated successfully. 
Sep 12 17:07:18.895770 kubelet[2600]: E0912 17:07:18.895229 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" containerName="mount-bpf-fs" Sep 12 17:07:18.895861 kubelet[2600]: E0912 17:07:18.895786 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" containerName="clean-cilium-state" Sep 12 17:07:18.895861 kubelet[2600]: E0912 17:07:18.895799 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" containerName="apply-sysctl-overwrites" Sep 12 17:07:18.895861 kubelet[2600]: E0912 17:07:18.895806 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" containerName="mount-cgroup" Sep 12 17:07:18.895861 kubelet[2600]: E0912 17:07:18.895813 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="379ac0bb-c8e5-4521-8c60-2f1dfd26ce42" containerName="cilium-operator" Sep 12 17:07:18.895861 kubelet[2600]: E0912 17:07:18.895819 2600 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" containerName="cilium-agent" Sep 12 17:07:18.895994 kubelet[2600]: I0912 17:07:18.895875 2600 memory_manager.go:354] "RemoveStaleState removing state" podUID="379ac0bb-c8e5-4521-8c60-2f1dfd26ce42" containerName="cilium-operator" Sep 12 17:07:18.895994 kubelet[2600]: I0912 17:07:18.895885 2600 memory_manager.go:354] "RemoveStaleState removing state" podUID="09637d68-1a17-4002-8a5f-78efaeb9307a" containerName="cilium-agent" Sep 12 17:07:18.901243 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:07:18.915480 systemd[1]: Started sshd@27-10.0.0.33:22-10.0.0.1:50260.service - OpenSSH per-connection server daemon (10.0.0.1:50260). Sep 12 17:07:18.918170 systemd-logind[1468]: Removed session 27. 
Sep 12 17:07:18.928911 systemd[1]: Created slice kubepods-burstable-pod22b7ef8c_af36_439d_a1e1_88c199c3b1d9.slice - libcontainer container kubepods-burstable-pod22b7ef8c_af36_439d_a1e1_88c199c3b1d9.slice. Sep 12 17:07:18.954474 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 50260 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:18.956247 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:18.961792 systemd-logind[1468]: New session 28 of user core. Sep 12 17:07:18.971174 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 17:07:18.991040 kubelet[2600]: I0912 17:07:18.990919 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-host-proc-sys-kernel\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991040 kubelet[2600]: I0912 17:07:18.990962 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-cilium-cgroup\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991040 kubelet[2600]: I0912 17:07:18.990982 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-lib-modules\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991040 kubelet[2600]: I0912 17:07:18.990998 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-cni-path\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991040 kubelet[2600]: I0912 17:07:18.991036 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-clustermesh-secrets\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991311 kubelet[2600]: I0912 17:07:18.991054 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-cilium-ipsec-secrets\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991311 kubelet[2600]: I0912 17:07:18.991075 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-hubble-tls\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991311 kubelet[2600]: I0912 17:07:18.991089 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-cilium-run\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991311 kubelet[2600]: I0912 17:07:18.991103 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-bpf-maps\") pod \"cilium-5ltfx\" (UID: 
\"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991311 kubelet[2600]: I0912 17:07:18.991124 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-etc-cni-netd\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991311 kubelet[2600]: I0912 17:07:18.991141 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-xtables-lock\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991444 kubelet[2600]: I0912 17:07:18.991156 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-cilium-config-path\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991444 kubelet[2600]: I0912 17:07:18.991173 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-hostproc\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991444 kubelet[2600]: I0912 17:07:18.991189 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-host-proc-sys-net\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:18.991444 kubelet[2600]: I0912 
17:07:18.991275 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdrc\" (UniqueName: \"kubernetes.io/projected/22b7ef8c-af36-439d-a1e1-88c199c3b1d9-kube-api-access-wcdrc\") pod \"cilium-5ltfx\" (UID: \"22b7ef8c-af36-439d-a1e1-88c199c3b1d9\") " pod="kube-system/cilium-5ltfx" Sep 12 17:07:19.023735 sshd[4502]: Connection closed by 10.0.0.1 port 50260 Sep 12 17:07:19.024182 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Sep 12 17:07:19.037209 systemd[1]: sshd@27-10.0.0.33:22-10.0.0.1:50260.service: Deactivated successfully. Sep 12 17:07:19.039456 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 17:07:19.041325 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. Sep 12 17:07:19.047296 systemd[1]: Started sshd@28-10.0.0.33:22-10.0.0.1:50272.service - OpenSSH per-connection server daemon (10.0.0.1:50272). Sep 12 17:07:19.049073 systemd-logind[1468]: Removed session 28. Sep 12 17:07:19.086281 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 50272 ssh2: RSA SHA256:ZFmHg3MiK6LSPrD+69AUXyon1mzFIQ8hFzwK9Q40PAs Sep 12 17:07:19.088229 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:07:19.094550 systemd-logind[1468]: New session 29 of user core. Sep 12 17:07:19.101411 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 12 17:07:19.231566 kubelet[2600]: E0912 17:07:19.231408 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:19.232593 containerd[1485]: time="2025-09-12T17:07:19.232549220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ltfx,Uid:22b7ef8c-af36-439d-a1e1-88c199c3b1d9,Namespace:kube-system,Attempt:0,}"
Sep 12 17:07:19.254799 containerd[1485]: time="2025-09-12T17:07:19.254499707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:07:19.254799 containerd[1485]: time="2025-09-12T17:07:19.254565180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:07:19.254799 containerd[1485]: time="2025-09-12T17:07:19.254584337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:07:19.254799 containerd[1485]: time="2025-09-12T17:07:19.254674869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:07:19.282179 systemd[1]: Started cri-containerd-f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe.scope - libcontainer container f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe.
Sep 12 17:07:19.305722 containerd[1485]: time="2025-09-12T17:07:19.305631073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ltfx,Uid:22b7ef8c-af36-439d-a1e1-88c199c3b1d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\""
Sep 12 17:07:19.306412 kubelet[2600]: E0912 17:07:19.306370 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:19.316634 containerd[1485]: time="2025-09-12T17:07:19.316576865Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:07:19.330160 containerd[1485]: time="2025-09-12T17:07:19.330066195Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767\""
Sep 12 17:07:19.330789 containerd[1485]: time="2025-09-12T17:07:19.330714830Z" level=info msg="StartContainer for \"e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767\""
Sep 12 17:07:19.366220 systemd[1]: Started cri-containerd-e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767.scope - libcontainer container e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767.
Sep 12 17:07:19.394820 containerd[1485]: time="2025-09-12T17:07:19.394761024Z" level=info msg="StartContainer for \"e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767\" returns successfully"
Sep 12 17:07:19.408175 systemd[1]: cri-containerd-e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767.scope: Deactivated successfully.
Sep 12 17:07:19.444203 containerd[1485]: time="2025-09-12T17:07:19.444126491Z" level=info msg="shim disconnected" id=e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767 namespace=k8s.io
Sep 12 17:07:19.444203 containerd[1485]: time="2025-09-12T17:07:19.444187728Z" level=warning msg="cleaning up after shim disconnected" id=e0d8b45f2dad6884a0212a8e382fd4dc550f999218371dae796cb6117664e767 namespace=k8s.io
Sep 12 17:07:19.444203 containerd[1485]: time="2025-09-12T17:07:19.444196334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:19.911651 kubelet[2600]: E0912 17:07:19.911608 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:19.913487 containerd[1485]: time="2025-09-12T17:07:19.913431634Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:07:19.927424 containerd[1485]: time="2025-09-12T17:07:19.927367223Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3\""
Sep 12 17:07:19.928639 containerd[1485]: time="2025-09-12T17:07:19.928580301Z" level=info msg="StartContainer for \"b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3\""
Sep 12 17:07:19.962232 systemd[1]: Started cri-containerd-b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3.scope - libcontainer container b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3.
Sep 12 17:07:19.994217 containerd[1485]: time="2025-09-12T17:07:19.994164732Z" level=info msg="StartContainer for \"b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3\" returns successfully"
Sep 12 17:07:20.003209 systemd[1]: cri-containerd-b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3.scope: Deactivated successfully.
Sep 12 17:07:20.030037 containerd[1485]: time="2025-09-12T17:07:20.029935941Z" level=info msg="shim disconnected" id=b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3 namespace=k8s.io
Sep 12 17:07:20.030325 containerd[1485]: time="2025-09-12T17:07:20.030072831Z" level=warning msg="cleaning up after shim disconnected" id=b0aa19e0556838bbd3736b53d690c210ad505d18605d3fdb36a7c58f29a6c9d3 namespace=k8s.io
Sep 12 17:07:20.030325 containerd[1485]: time="2025-09-12T17:07:20.030148425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:20.915604 kubelet[2600]: E0912 17:07:20.915563 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:20.918264 containerd[1485]: time="2025-09-12T17:07:20.918084203Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:07:20.944464 containerd[1485]: time="2025-09-12T17:07:20.944411257Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99\""
Sep 12 17:07:20.946520 containerd[1485]: time="2025-09-12T17:07:20.945351596Z" level=info msg="StartContainer for \"c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99\""
Sep 12 17:07:20.981249 systemd[1]: Started cri-containerd-c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99.scope - libcontainer container c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99.
Sep 12 17:07:21.018946 containerd[1485]: time="2025-09-12T17:07:21.018886533Z" level=info msg="StartContainer for \"c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99\" returns successfully"
Sep 12 17:07:21.026097 systemd[1]: cri-containerd-c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99.scope: Deactivated successfully.
Sep 12 17:07:21.057103 containerd[1485]: time="2025-09-12T17:07:21.056679891Z" level=info msg="shim disconnected" id=c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99 namespace=k8s.io
Sep 12 17:07:21.057103 containerd[1485]: time="2025-09-12T17:07:21.056752299Z" level=warning msg="cleaning up after shim disconnected" id=c0ac8267acc4d56a6fc0989785456641ede9989aea285b9acdb8a4698ccb8a99 namespace=k8s.io
Sep 12 17:07:21.057103 containerd[1485]: time="2025-09-12T17:07:21.056763470Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:21.710788 kubelet[2600]: E0912 17:07:21.710733 2600 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:07:21.919278 kubelet[2600]: E0912 17:07:21.919236 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:21.921049 containerd[1485]: time="2025-09-12T17:07:21.920960973Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:07:21.975859 containerd[1485]: time="2025-09-12T17:07:21.975454649Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936\""
Sep 12 17:07:21.979612 containerd[1485]: time="2025-09-12T17:07:21.979541058Z" level=info msg="StartContainer for \"a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936\""
Sep 12 17:07:22.028156 systemd[1]: Started cri-containerd-a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936.scope - libcontainer container a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936.
Sep 12 17:07:22.054289 systemd[1]: cri-containerd-a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936.scope: Deactivated successfully.
Sep 12 17:07:22.087543 containerd[1485]: time="2025-09-12T17:07:22.087494453Z" level=info msg="StartContainer for \"a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936\" returns successfully"
Sep 12 17:07:22.107967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936-rootfs.mount: Deactivated successfully.
Sep 12 17:07:22.113021 containerd[1485]: time="2025-09-12T17:07:22.112936124Z" level=info msg="shim disconnected" id=a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936 namespace=k8s.io
Sep 12 17:07:22.113021 containerd[1485]: time="2025-09-12T17:07:22.112998773Z" level=warning msg="cleaning up after shim disconnected" id=a32d164c9bf2fa9ae2c8e7c5743366d0c984aee3a46f18b948ca05fc6a122936 namespace=k8s.io
Sep 12 17:07:22.113021 containerd[1485]: time="2025-09-12T17:07:22.113036575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:07:22.922170 kubelet[2600]: E0912 17:07:22.922138 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:22.923726 containerd[1485]: time="2025-09-12T17:07:22.923690166Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:07:22.944344 containerd[1485]: time="2025-09-12T17:07:22.944295926Z" level=info msg="CreateContainer within sandbox \"f3513b9210a097e89eea0de3dfcd8193828149007ebaddce602f134e829351fe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c535dccfb557f8f4e72239ab28c194db9bfe1ed603d5e872ede2fcdc8bab900\""
Sep 12 17:07:22.945953 containerd[1485]: time="2025-09-12T17:07:22.944834399Z" level=info msg="StartContainer for \"3c535dccfb557f8f4e72239ab28c194db9bfe1ed603d5e872ede2fcdc8bab900\""
Sep 12 17:07:22.987194 systemd[1]: Started cri-containerd-3c535dccfb557f8f4e72239ab28c194db9bfe1ed603d5e872ede2fcdc8bab900.scope - libcontainer container 3c535dccfb557f8f4e72239ab28c194db9bfe1ed603d5e872ede2fcdc8bab900.
Sep 12 17:07:23.018618 containerd[1485]: time="2025-09-12T17:07:23.018553430Z" level=info msg="StartContainer for \"3c535dccfb557f8f4e72239ab28c194db9bfe1ed603d5e872ede2fcdc8bab900\" returns successfully"
Sep 12 17:07:23.480066 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 17:07:23.928039 kubelet[2600]: E0912 17:07:23.927965 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:23.989867 kubelet[2600]: I0912 17:07:23.989762 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5ltfx" podStartSLOduration=5.989741033 podStartE2EDuration="5.989741033s" podCreationTimestamp="2025-09-12 17:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:07:23.989603281 +0000 UTC m=+97.421211149" watchObservedRunningTime="2025-09-12 17:07:23.989741033 +0000 UTC m=+97.421348901"
Sep 12 17:07:25.233084 kubelet[2600]: E0912 17:07:25.233037 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:25.656916 kubelet[2600]: E0912 17:07:25.656877 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:26.657660 kubelet[2600]: E0912 17:07:26.657596 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:26.751263 systemd-networkd[1417]: lxc_health: Link UP
Sep 12 17:07:26.752677 systemd-networkd[1417]: lxc_health: Gained carrier
Sep 12 17:07:27.233689 kubelet[2600]: E0912 17:07:27.233616 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:27.894257 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Sep 12 17:07:27.936776 kubelet[2600]: E0912 17:07:27.936734 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:28.938913 kubelet[2600]: E0912 17:07:28.938868 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:31.656891 kubelet[2600]: E0912 17:07:31.656828 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:07:31.766832 kubelet[2600]: E0912 17:07:31.766784 2600 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48518->127.0.0.1:40631: write tcp 127.0.0.1:48518->127.0.0.1:40631: write: broken pipe
Sep 12 17:07:34.087812 sshd[4519]: Connection closed by 10.0.0.1 port 50272
Sep 12 17:07:34.110386 sshd-session[4510]: pam_unix(sshd:session): session closed for user core
Sep 12 17:07:34.115398 systemd[1]: sshd@28-10.0.0.33:22-10.0.0.1:50272.service: Deactivated successfully.
Sep 12 17:07:34.117939 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 17:07:34.118858 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit.
Sep 12 17:07:34.119894 systemd-logind[1468]: Removed session 29.