Sep 12 10:17:50.973333 kernel: Linux version 6.6.105-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 08:42:12 -00 2025 Sep 12 10:17:50.973368 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:17:50.973381 kernel: BIOS-provided physical RAM map: Sep 12 10:17:50.973388 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 10:17:50.973395 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 12 10:17:50.973401 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 10:17:50.973409 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 10:17:50.973416 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 10:17:50.973423 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 12 10:17:50.973429 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 12 10:17:50.973436 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Sep 12 10:17:50.973445 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 12 10:17:50.973455 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 12 10:17:50.973462 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 12 10:17:50.973473 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 12 10:17:50.973480 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 10:17:50.973491 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 12 10:17:50.973498 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 12 10:17:50.973505 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 12 10:17:50.973512 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 12 10:17:50.973520 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 12 10:17:50.973527 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 10:17:50.973534 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 12 10:17:50.973541 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 10:17:50.973548 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 12 10:17:50.973556 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 10:17:50.973563 kernel: NX (Execute Disable) protection: active Sep 12 10:17:50.973573 kernel: APIC: Static calls initialized Sep 12 10:17:50.973580 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 12 10:17:50.973587 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Sep 12 10:17:50.973595 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 12 10:17:50.973602 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Sep 12 10:17:50.973609 kernel: extended physical RAM map: Sep 12 10:17:50.973616 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 10:17:50.973623 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Sep 12 10:17:50.973631 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 12 10:17:50.973638 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Sep 12 10:17:50.973645 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 12 10:17:50.973653 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Sep 12 10:17:50.973663 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Sep 12 10:17:50.973674 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Sep 12 10:17:50.973681 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Sep 12 10:17:50.973688 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Sep 12 10:17:50.973696 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Sep 12 10:17:50.973703 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Sep 12 10:17:50.973716 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Sep 12 10:17:50.973724 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Sep 12 10:17:50.973731 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Sep 12 10:17:50.973739 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Sep 12 10:17:50.973746 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 12 10:17:50.973754 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Sep 12 10:17:50.973761 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Sep 12 10:17:50.973769 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Sep 12 10:17:50.973776 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Sep 12 10:17:50.973786 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Sep 12 10:17:50.973794 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 12 10:17:50.973811 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 12 10:17:50.973818 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 12 10:17:50.973828 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Sep 12 10:17:50.973835 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 12 10:17:50.973843 kernel: efi: EFI v2.7 by EDK II Sep 12 10:17:50.973850 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Sep 12 10:17:50.973859 kernel: random: crng init done Sep 12 10:17:50.973867 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Sep 12 10:17:50.973874 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Sep 12 10:17:50.973884 kernel: secureboot: Secure boot disabled Sep 12 10:17:50.973897 kernel: SMBIOS 2.8 present. 
Sep 12 10:17:50.973907 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Sep 12 10:17:50.973918 kernel: Hypervisor detected: KVM Sep 12 10:17:50.973928 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 10:17:50.973937 kernel: kvm-clock: using sched offset of 4143916593 cycles Sep 12 10:17:50.973947 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 10:17:50.973958 kernel: tsc: Detected 2794.750 MHz processor Sep 12 10:17:50.973968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 10:17:50.973977 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 10:17:50.973987 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Sep 12 10:17:50.974003 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 10:17:50.974014 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 10:17:50.974022 kernel: Using GB pages for direct mapping Sep 12 10:17:50.974030 kernel: ACPI: Early table checksum verification disabled Sep 12 10:17:50.974038 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 12 10:17:50.974046 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 12 10:17:50.974053 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 10:17:50.974061 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 10:17:50.974069 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 12 10:17:50.974095 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 10:17:50.974104 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 10:17:50.974112 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 10:17:50.974119 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 12 10:17:50.974127 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 12 10:17:50.974135 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 12 10:17:50.974142 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 12 10:17:50.974150 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 12 10:17:50.974158 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 12 10:17:50.974169 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 12 10:17:50.974176 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 12 10:17:50.974184 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 12 10:17:50.974191 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 12 10:17:50.974199 kernel: No NUMA configuration found Sep 12 10:17:50.974207 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Sep 12 10:17:50.974217 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Sep 12 10:17:50.974228 kernel: Zone ranges: Sep 12 10:17:50.974239 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 10:17:50.974253 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Sep 12 10:17:50.974262 kernel: Normal empty Sep 12 10:17:50.974273 kernel: Movable zone start for each node Sep 12 10:17:50.974281 kernel: Early memory node ranges Sep 12 10:17:50.974289 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Sep 12 10:17:50.974296 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 12 10:17:50.974304 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 12 10:17:50.974311 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Sep 12 10:17:50.974319 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Sep 12 10:17:50.974330 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Sep 12 10:17:50.974337 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Sep 12 10:17:50.974345 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Sep 12 10:17:50.974353 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Sep 12 10:17:50.974360 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 10:17:50.974370 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 10:17:50.974392 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 12 10:17:50.974407 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 10:17:50.974417 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Sep 12 10:17:50.974427 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Sep 12 10:17:50.974437 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Sep 12 10:17:50.974451 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Sep 12 10:17:50.974466 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Sep 12 10:17:50.974477 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 12 10:17:50.974487 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 10:17:50.974498 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 10:17:50.974506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 12 10:17:50.974517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 10:17:50.974525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 10:17:50.974533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 10:17:50.974541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 10:17:50.974549 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 10:17:50.974557 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 10:17:50.974565 kernel: TSC deadline timer available Sep 12 10:17:50.974573 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 12 10:17:50.974581 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 10:17:50.974591 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 12 10:17:50.974599 kernel: kvm-guest: setup PV sched yield Sep 12 10:17:50.974607 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Sep 12 10:17:50.974615 kernel: Booting paravirtualized kernel on KVM Sep 12 10:17:50.974623 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 10:17:50.974631 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 12 10:17:50.974639 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 12 10:17:50.974647 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 12 10:17:50.974655 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 12 10:17:50.974666 kernel: kvm-guest: PV spinlocks enabled Sep 12 10:17:50.974674 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 10:17:50.974683 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:17:50.974691 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 10:17:50.974699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 10:17:50.974711 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 10:17:50.974719 kernel: Fallback order for Node 0: 0 Sep 12 10:17:50.974727 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Sep 12 10:17:50.974737 kernel: Policy zone: DMA32 Sep 12 10:17:50.974745 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 10:17:50.974754 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 177824K reserved, 0K cma-reserved) Sep 12 10:17:50.974762 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 12 10:17:50.974770 kernel: ftrace: allocating 37946 entries in 149 pages Sep 12 10:17:50.974778 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 10:17:50.974786 kernel: Dynamic Preempt: voluntary Sep 12 10:17:50.974793 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 10:17:50.974811 kernel: rcu: RCU event tracing is enabled. Sep 12 10:17:50.974822 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 12 10:17:50.974830 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 10:17:50.974838 kernel: Rude variant of Tasks RCU enabled. Sep 12 10:17:50.974846 kernel: Tracing variant of Tasks RCU enabled. Sep 12 10:17:50.974855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 10:17:50.974863 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 12 10:17:50.974871 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 12 10:17:50.974879 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 10:17:50.974886 kernel: Console: colour dummy device 80x25 Sep 12 10:17:50.974897 kernel: printk: console [ttyS0] enabled Sep 12 10:17:50.974905 kernel: ACPI: Core revision 20230628 Sep 12 10:17:50.974913 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 12 10:17:50.974921 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 10:17:50.974929 kernel: x2apic enabled Sep 12 10:17:50.974937 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 10:17:50.974948 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 12 10:17:50.974956 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 12 10:17:50.974964 kernel: kvm-guest: setup PV IPIs Sep 12 10:17:50.974975 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 12 10:17:50.974983 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 12 10:17:50.974991 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 12 10:17:50.974999 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 10:17:50.975007 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 12 10:17:50.975015 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 12 10:17:50.975023 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 10:17:50.975030 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 10:17:50.975038 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 10:17:50.975049 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 12 10:17:50.975057 kernel: active return thunk: retbleed_return_thunk Sep 12 10:17:50.975065 kernel: RETBleed: Mitigation: untrained return thunk Sep 12 10:17:50.975073 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 12 10:17:50.975095 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 12 10:17:50.975103 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 12 10:17:50.975112 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 12 10:17:50.975123 kernel: active return thunk: srso_return_thunk Sep 12 10:17:50.975131 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 12 10:17:50.975142 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 10:17:50.975150 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 10:17:50.975161 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 10:17:50.975172 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 10:17:50.975183 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 12 10:17:50.975193 kernel: Freeing SMP alternatives memory: 32K Sep 12 10:17:50.975204 kernel: pid_max: default: 32768 minimum: 301 Sep 12 10:17:50.975214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 10:17:50.975222 kernel: landlock: Up and running. Sep 12 10:17:50.975233 kernel: SELinux: Initializing. Sep 12 10:17:50.975241 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 10:17:50.975250 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 10:17:50.975258 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 12 10:17:50.975266 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 10:17:50.975274 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 10:17:50.975282 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 12 10:17:50.975290 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 12 10:17:50.975300 kernel: ... version: 0 Sep 12 10:17:50.975308 kernel: ... bit width: 48 Sep 12 10:17:50.975316 kernel: ... generic registers: 6 Sep 12 10:17:50.975324 kernel: ... value mask: 0000ffffffffffff Sep 12 10:17:50.975332 kernel: ... max period: 00007fffffffffff Sep 12 10:17:50.975339 kernel: ... fixed-purpose events: 0 Sep 12 10:17:50.975347 kernel: ... 
event mask: 000000000000003f Sep 12 10:17:50.975355 kernel: signal: max sigframe size: 1776 Sep 12 10:17:50.975363 kernel: rcu: Hierarchical SRCU implementation. Sep 12 10:17:50.975371 kernel: rcu: Max phase no-delay instances is 400. Sep 12 10:17:50.975382 kernel: smp: Bringing up secondary CPUs ... Sep 12 10:17:50.975389 kernel: smpboot: x86: Booting SMP configuration: Sep 12 10:17:50.975397 kernel: .... node #0, CPUs: #1 #2 #3 Sep 12 10:17:50.975405 kernel: smp: Brought up 1 node, 4 CPUs Sep 12 10:17:50.975413 kernel: smpboot: Max logical packages: 1 Sep 12 10:17:50.975421 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 12 10:17:50.975429 kernel: devtmpfs: initialized Sep 12 10:17:50.975436 kernel: x86/mm: Memory block size: 128MB Sep 12 10:17:50.975445 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 12 10:17:50.975455 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 12 10:17:50.975463 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Sep 12 10:17:50.975471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 12 10:17:50.975479 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Sep 12 10:17:50.975487 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 12 10:17:50.975495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 10:17:50.975503 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 12 10:17:50.975511 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 10:17:50.975522 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 10:17:50.975530 kernel: audit: initializing netlink subsys (disabled) Sep 12 10:17:50.975538 kernel: audit: type=2000 audit(1757672271.380:1): state=initialized audit_enabled=0 res=1 Sep 12 10:17:50.975546 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 10:17:50.975554 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 10:17:50.975561 kernel: cpuidle: using governor menu Sep 12 10:17:50.975569 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 10:17:50.975577 kernel: dca service started, version 1.12.1 Sep 12 10:17:50.975585 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 12 10:17:50.975596 kernel: PCI: Using configuration type 1 for base access Sep 12 10:17:50.975604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 10:17:50.975612 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 10:17:50.975620 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 10:17:50.975628 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 10:17:50.975636 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 10:17:50.975643 kernel: ACPI: Added _OSI(Module Device) Sep 12 10:17:50.975651 kernel: ACPI: Added _OSI(Processor Device) Sep 12 10:17:50.975659 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 10:17:50.975670 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 10:17:50.975678 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 10:17:50.975685 kernel: ACPI: Interpreter enabled Sep 12 10:17:50.975693 kernel: ACPI: PM: (supports S0 S3 S5) Sep 12 10:17:50.975701 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 10:17:50.975709 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 10:17:50.975717 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 10:17:50.975725 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 12 10:17:50.975732 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 10:17:50.976019 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 12 10:17:50.976186 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 12 10:17:50.976319 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 12 10:17:50.976329 kernel: PCI host bridge to bus 0000:00 Sep 12 10:17:50.976557 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 10:17:50.976727 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 10:17:50.976869 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 10:17:50.976995 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Sep 12 10:17:50.977136 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Sep 12 10:17:50.977257 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Sep 12 10:17:50.977383 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 10:17:50.977588 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 12 10:17:50.977753 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 12 10:17:50.977907 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 12 10:17:50.978077 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 12 10:17:50.978296 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 12 10:17:50.978459 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 12 10:17:50.978649 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 10:17:50.978856 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 12 10:17:50.978997 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 12 10:17:50.979169 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 12 10:17:50.979302 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Sep 12 10:17:50.979453 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 12 10:17:50.979620 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 12 10:17:50.979766 kernel: pci 0000:00:03.0: reg 0x14: [mem 
0xc1042000-0xc1042fff] Sep 12 10:17:50.979912 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Sep 12 10:17:50.980066 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 12 10:17:50.980283 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 12 10:17:50.980433 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 12 10:17:50.980567 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Sep 12 10:17:50.980777 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 12 10:17:50.980947 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 12 10:17:50.981098 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 12 10:17:50.981284 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 12 10:17:50.981430 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 12 10:17:50.981563 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 12 10:17:50.981736 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 12 10:17:50.981883 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 12 10:17:50.981895 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 10:17:50.981904 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 10:17:50.981912 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 10:17:50.981925 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 10:17:50.981933 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 12 10:17:50.981941 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 12 10:17:50.981949 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 12 10:17:50.981956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 12 10:17:50.981964 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 12 10:17:50.981972 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 12 10:17:50.981980 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 12 10:17:50.981988 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 12 10:17:50.981999 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 12 10:17:50.982007 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 12 10:17:50.982015 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 12 10:17:50.982022 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 12 10:17:50.982030 kernel: iommu: Default domain type: Translated Sep 12 10:17:50.982038 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 10:17:50.982046 kernel: efivars: Registered efivars operations Sep 12 10:17:50.982054 kernel: PCI: Using ACPI for IRQ routing Sep 12 10:17:50.982062 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 10:17:50.982073 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 12 10:17:50.982096 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Sep 12 10:17:50.982104 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Sep 12 10:17:50.982111 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Sep 12 10:17:50.982120 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Sep 12 10:17:50.982127 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Sep 12 10:17:50.982135 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Sep 12 10:17:50.982143 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Sep 
12 10:17:50.983929 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 12 10:17:50.984241 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 12 10:17:50.984431 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 10:17:50.984448 kernel: vgaarb: loaded Sep 12 10:17:50.984461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 12 10:17:50.984473 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 12 10:17:50.984485 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 10:17:50.984496 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 10:17:50.984509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 10:17:50.984534 kernel: pnp: PnP ACPI init Sep 12 10:17:50.984773 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Sep 12 10:17:50.984795 kernel: pnp: PnP ACPI: found 6 devices Sep 12 10:17:50.984818 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 10:17:50.984830 kernel: NET: Registered PF_INET protocol family Sep 12 10:17:50.984875 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 10:17:50.984892 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 10:17:50.984904 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 10:17:50.984919 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 10:17:50.984931 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 10:17:50.984943 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 10:17:50.984955 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 10:17:50.984968 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 10:17:50.984979 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 10:17:50.984991 kernel: NET: Registered PF_XDP protocol family Sep 12 10:17:50.985208 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 12 10:17:50.986877 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 12 10:17:50.987072 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 10:17:50.987268 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 10:17:50.987444 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 10:17:50.987624 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Sep 12 10:17:50.987792 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Sep 12 10:17:50.987972 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Sep 12 10:17:50.987991 kernel: PCI: CLS 0 bytes, default 64 Sep 12 10:17:50.988010 kernel: Initialise system trusted keyrings Sep 12 10:17:50.988022 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 10:17:50.988035 kernel: Key type asymmetric registered Sep 12 10:17:50.988047 kernel: Asymmetric key parser 'x509' registered Sep 12 10:17:50.988059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 10:17:50.988072 kernel: io scheduler mq-deadline registered Sep 12 10:17:50.988103 kernel: io scheduler kyber registered Sep 12 10:17:50.988115 kernel: io scheduler bfq registered Sep 12 10:17:50.988127 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 
Sep 12 10:17:50.988140 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 12 10:17:50.988158 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 10:17:50.988174 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 10:17:50.988186 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 10:17:50.988198 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 10:17:50.988210 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 10:17:50.988226 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 10:17:50.988238 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 10:17:50.988439 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 12 10:17:50.988458 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 10:17:50.988626 kernel: rtc_cmos 00:04: registered as rtc0 Sep 12 10:17:50.988813 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T10:17:50 UTC (1757672270) Sep 12 10:17:50.988989 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 12 10:17:50.989007 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 12 10:17:50.989027 kernel: efifb: probing for efifb Sep 12 10:17:50.989039 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 12 10:17:50.989051 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 12 10:17:50.989063 kernel: efifb: scrolling: redraw Sep 12 10:17:50.989075 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 10:17:50.989106 kernel: Console: switching to colour frame buffer device 160x50 Sep 12 10:17:50.989118 kernel: fb0: EFI VGA frame buffer device Sep 12 10:17:50.989130 kernel: pstore: Using crash dump compression: deflate Sep 12 10:17:50.989142 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 10:17:50.989161 kernel: NET: Registered PF_INET6 protocol family Sep 12 10:17:50.989173 kernel: Segment Routing with IPv6 Sep 12 10:17:50.989185 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 10:17:50.989198 kernel: NET: Registered PF_PACKET protocol family Sep 12 10:17:50.989210 kernel: Key type dns_resolver registered Sep 12 10:17:50.989223 kernel: IPI shorthand broadcast: enabled Sep 12 10:17:50.989235 kernel: sched_clock: Marking stable (1200003010, 172915767)->(1514016981, -141098204) Sep 12 10:17:50.989247 kernel: registered taskstats version 1 Sep 12 10:17:50.989260 kernel: Loading compiled-in X.509 certificates Sep 12 10:17:50.989276 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.105-flatcar: 0972efc09ee0bcd53f8cdb5573e11871ce7b16a9' Sep 12 10:17:50.989287 kernel: Key type .fscrypt registered Sep 12 10:17:50.989299 kernel: Key type fscrypt-provisioning registered Sep 12 10:17:50.989311 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 10:17:50.989323 kernel: ima: Allocated hash algorithm: sha1 Sep 12 10:17:50.989335 kernel: ima: No architecture policies found Sep 12 10:17:50.989346 kernel: clk: Disabling unused clocks Sep 12 10:17:50.989358 kernel: Freeing unused kernel image (initmem) memory: 43508K Sep 12 10:17:50.989370 kernel: Write protecting the kernel read-only data: 38912k Sep 12 10:17:50.989386 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 12 10:17:50.989398 kernel: Run /init as init process Sep 12 10:17:50.989410 kernel: with arguments: Sep 12 10:17:50.989421 kernel: /init Sep 12 10:17:50.989433 kernel: with environment: Sep 12 10:17:50.989445 kernel: HOME=/ Sep 12 10:17:50.989457 kernel: TERM=linux Sep 12 10:17:50.989469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 10:17:50.989483 systemd[1]: Successfully made /usr/ read-only. Sep 12 10:17:50.989505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:17:50.989519 systemd[1]: Detected virtualization kvm. Sep 12 10:17:50.989532 systemd[1]: Detected architecture x86-64. Sep 12 10:17:50.989545 systemd[1]: Running in initrd. Sep 12 10:17:50.989557 systemd[1]: No hostname configured, using default hostname. Sep 12 10:17:50.989571 systemd[1]: Hostname set to . Sep 12 10:17:50.989583 systemd[1]: Initializing machine ID from VM UUID. Sep 12 10:17:50.989600 systemd[1]: Queued start job for default target initrd.target. Sep 12 10:17:50.989613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:17:50.989626 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:17:50.989639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 10:17:50.989652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:17:50.989665 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 10:17:50.989678 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 10:17:50.989698 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 10:17:50.989710 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 10:17:50.989724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:17:50.989736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:17:50.989749 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:17:50.989762 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:17:50.989774 systemd[1]: Reached target swap.target - Swaps. Sep 12 10:17:50.989786 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:17:50.989813 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:17:50.989826 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:17:50.989839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 12 10:17:50.989852 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 10:17:50.989865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:17:50.989878 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:17:50.989891 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:17:50.989903 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:17:50.989916 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 10:17:50.989934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 10:17:50.989947 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 10:17:50.989959 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 10:17:50.989971 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:17:50.989984 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:17:50.989997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:17:50.990009 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 10:17:50.990022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:17:50.990040 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 10:17:50.990053 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 10:17:50.990126 systemd-journald[194]: Collecting audit messages is disabled. Sep 12 10:17:50.990165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 10:17:50.990179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:17:50.990192 systemd-journald[194]: Journal started Sep 12 10:17:50.990219 systemd-journald[194]: Runtime Journal (/run/log/journal/1734c7d69df8478697e4ec396415b99e) is 6M, max 48.2M, 42.2M free. Sep 12 10:17:50.978682 systemd-modules-load[195]: Inserted module 'overlay' Sep 12 10:17:50.997648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:17:51.040135 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 10:17:51.042426 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 12 10:17:51.063248 kernel: Bridge firewalling registered Sep 12 10:17:51.066455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:17:51.066511 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:17:51.067573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:17:51.072727 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:17:51.089397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:17:51.099981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:17:51.100490 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:17:51.110326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:17:51.113167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 10:17:51.126422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:17:51.146382 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 10:17:51.161610 dracut-cmdline[233]: dracut-dracut-053 Sep 12 10:17:51.165346 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=87e444606a7368354f582e8f746f078f97e75cf74b35edd9ec39d0d73a54ead2 Sep 12 10:17:51.169922 systemd-resolved[226]: Positive Trust Anchors: Sep 12 10:17:51.169942 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:17:51.169973 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:17:51.172760 systemd-resolved[226]: Defaulting to hostname 'linux'. Sep 12 10:17:51.174450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:17:51.181271 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:17:51.297125 kernel: SCSI subsystem initialized Sep 12 10:17:51.306115 kernel: Loading iSCSI transport class v2.0-870. Sep 12 10:17:51.318120 kernel: iscsi: registered transport (tcp) Sep 12 10:17:51.354126 kernel: iscsi: registered transport (qla4xxx) Sep 12 10:17:51.354221 kernel: QLogic iSCSI HBA Driver Sep 12 10:17:51.411771 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 10:17:51.424471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 10:17:51.453154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 10:17:51.453261 kernel: device-mapper: uevent: version 1.0.3 Sep 12 10:17:51.455026 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 10:17:51.501143 kernel: raid6: avx2x4 gen() 29213 MB/s Sep 12 10:17:51.521143 kernel: raid6: avx2x2 gen() 30149 MB/s Sep 12 10:17:51.542224 kernel: raid6: avx2x1 gen() 25176 MB/s Sep 12 10:17:51.542298 kernel: raid6: using algorithm avx2x2 gen() 30149 MB/s Sep 12 10:17:51.563639 kernel: raid6: .... xor() 19197 MB/s, rmw enabled Sep 12 10:17:51.563758 kernel: raid6: using avx2x2 recovery algorithm Sep 12 10:17:51.586149 kernel: xor: automatically using best checksumming function avx Sep 12 10:17:51.745151 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 10:17:51.763809 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:17:51.777269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:17:51.798238 systemd-udevd[415]: Using default interface naming scheme 'v255'. 
Sep 12 10:17:51.805630 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:17:51.814365 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 10:17:51.833981 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Sep 12 10:17:51.874551 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 10:17:51.882314 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:17:51.968130 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:17:51.974372 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 10:17:51.993032 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 10:17:51.995311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:17:51.999327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:17:52.000822 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:17:52.014131 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 10:17:52.013201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 10:17:52.018049 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 10:17:52.018065 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 10:17:52.034831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 10:17:52.034903 kernel: GPT:9289727 != 19775487 Sep 12 10:17:52.034915 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 10:17:52.034926 kernel: GPT:9289727 != 19775487 Sep 12 10:17:52.034936 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 10:17:52.034946 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:17:52.043328 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:17:52.051119 kernel: libata version 3.00 loaded. Sep 12 10:17:52.066113 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 10:17:52.066180 kernel: AES CTR mode by8 optimization enabled Sep 12 10:17:52.067173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:17:52.067527 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:17:52.077851 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 10:17:52.078465 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 10:17:52.078485 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 12 10:17:52.078698 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 10:17:52.072043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:17:52.082516 kernel: scsi host0: ahci Sep 12 10:17:52.076319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:17:52.084457 kernel: scsi host1: ahci Sep 12 10:17:52.076728 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:17:52.087164 kernel: scsi host2: ahci Sep 12 10:17:52.079335 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:17:52.089518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 10:17:52.094363 kernel: scsi host3: ahci Sep 12 10:17:52.097156 kernel: scsi host4: ahci Sep 12 10:17:52.100029 kernel: scsi host5: ahci Sep 12 10:17:52.100464 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 12 10:17:52.100834 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 12 10:17:52.102490 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 12 10:17:52.102522 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 12 10:17:52.106764 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 12 10:17:52.106823 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 12 10:17:52.111141 kernel: BTRFS: device fsid 2566299d-dd4a-4826-ba43-7397a17991fb devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (478) Sep 12 10:17:52.123129 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (479) Sep 12 10:17:52.125530 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 10:17:52.140806 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 10:17:52.154959 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 10:17:52.156384 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 10:17:52.181615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 10:17:52.196325 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 10:17:52.196451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:17:52.196541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:17:52.198675 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:17:52.202035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:17:52.203859 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:17:52.209527 disk-uuid[559]: Primary Header is updated. Sep 12 10:17:52.209527 disk-uuid[559]: Secondary Entries is updated. Sep 12 10:17:52.209527 disk-uuid[559]: Secondary Header is updated. Sep 12 10:17:52.213427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:17:52.219120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:17:52.222863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:17:52.236308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 10:17:52.252941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 10:17:52.419610 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 10:17:52.419713 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 10:17:52.419730 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 10:17:52.419743 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 10:17:52.421197 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 10:17:52.422126 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 10:17:52.423543 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 10:17:52.423635 kernel: ata3.00: applying bridge limits Sep 12 10:17:52.423668 kernel: ata3.00: configured for UDMA/100 Sep 12 10:17:52.425395 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 10:17:52.473126 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 10:17:52.473436 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 10:17:52.487152 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 10:17:53.255176 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 10:17:53.255450 disk-uuid[562]: The operation has completed successfully. Sep 12 10:17:53.294114 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 10:17:53.294326 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 10:17:53.378308 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 10:17:53.382653 sh[600]: Success Sep 12 10:17:53.398116 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 12 10:17:53.445137 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 10:17:53.473960 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 10:17:53.479969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 10:17:53.498116 kernel: BTRFS info (device dm-0): first mount of filesystem 2566299d-dd4a-4826-ba43-7397a17991fb Sep 12 10:17:53.498199 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:17:53.498212 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 10:17:53.500403 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 10:17:53.500430 kernel: BTRFS info (device dm-0): using free space tree Sep 12 10:17:53.509074 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 10:17:53.509979 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 10:17:53.528353 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 10:17:53.531365 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 10:17:53.549787 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:17:53.549832 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:17:53.549845 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:17:53.554593 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:17:53.560113 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:17:53.614007 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 10:17:53.625321 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 10:17:53.682094 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:17:53.699388 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:17:53.786275 systemd-networkd[778]: lo: Link UP Sep 12 10:17:53.786286 systemd-networkd[778]: lo: Gained carrier Sep 12 10:17:53.788651 systemd-networkd[778]: Enumeration completed Sep 12 10:17:53.788833 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:17:53.789953 ignition[735]: Ignition 2.20.0 Sep 12 10:17:53.789239 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:17:53.789978 ignition[735]: Stage: fetch-offline Sep 12 10:17:53.789246 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:17:53.790166 ignition[735]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:53.790804 systemd-networkd[778]: eth0: Link UP Sep 12 10:17:53.790219 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:53.790809 systemd-networkd[778]: eth0: Gained carrier Sep 12 10:17:53.790427 ignition[735]: parsed url from cmdline: "" Sep 12 10:17:53.790818 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:17:53.790476 ignition[735]: no config URL provided Sep 12 10:17:53.791823 systemd[1]: Reached target network.target - Network. Sep 12 10:17:53.790489 ignition[735]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 10:17:53.790530 ignition[735]: no config at "/usr/lib/ignition/user.ign" Sep 12 10:17:53.790637 ignition[735]: op(1): [started] loading QEMU firmware config module Sep 12 10:17:53.790659 ignition[735]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 10:17:53.807140 ignition[735]: op(1): [finished] loading QEMU firmware config module Sep 12 10:17:53.820164 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 10:17:53.856704 ignition[735]: parsing config with SHA512: 8fbc4d110558f07106651c3c927728a4a43c32331b8bf6005c4e4c0f960743f70a07ce294de7e83b17e96b6e54bb247967b5f6fd9047e9557aa17d5cf445db5c Sep 12 10:17:53.869460 unknown[735]: fetched base config from "system" Sep 12 10:17:53.869487 unknown[735]: fetched user config from "qemu" Sep 12 10:17:53.870254 ignition[735]: fetch-offline: fetch-offline passed Sep 12 10:17:53.870455 ignition[735]: Ignition finished successfully Sep 12 10:17:53.873932 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:17:53.876076 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 10:17:53.886275 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 10:17:53.964584 ignition[791]: Ignition 2.20.0 Sep 12 10:17:53.964598 ignition[791]: Stage: kargs Sep 12 10:17:53.964869 ignition[791]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:53.964883 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:53.965852 ignition[791]: kargs: kargs passed Sep 12 10:17:53.965903 ignition[791]: Ignition finished successfully Sep 12 10:17:53.969566 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Sep 12 10:17:53.978442 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 10:17:53.996811 ignition[799]: Ignition 2.20.0 Sep 12 10:17:53.996825 ignition[799]: Stage: disks Sep 12 10:17:53.996986 ignition[799]: no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:53.996998 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:54.000575 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 10:17:53.997874 ignition[799]: disks: disks passed Sep 12 10:17:54.002332 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 10:17:53.997925 ignition[799]: Ignition finished successfully Sep 12 10:17:54.004148 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 10:17:54.005391 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:17:54.007194 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 10:17:54.008400 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:17:54.018336 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 10:17:54.031483 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 10:17:54.148058 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 10:17:54.161203 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 10:17:54.257128 kernel: EXT4-fs (vda9): mounted filesystem 4caafea7-bbab-4a47-b77b-37af606fc08b r/w with ordered data mode. Quota mode: none. Sep 12 10:17:54.258039 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 10:17:54.260843 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 10:17:54.277234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:17:54.280295 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 10:17:54.283462 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 10:17:54.283535 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 10:17:54.292794 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (818) Sep 12 10:17:54.292834 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:17:54.292850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:17:54.292866 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:17:54.283571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:17:54.295000 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:17:54.297519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 10:17:54.299523 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 10:17:54.309352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 10:17:54.351943 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 10:17:54.358367 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Sep 12 10:17:54.363841 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 10:17:54.368938 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 10:17:54.501015 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 10:17:54.515283 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 10:17:54.519068 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 10:17:54.529193 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 10:17:54.532921 kernel: BTRFS info (device vda6): last unmount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:17:54.588532 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 10:17:54.646290 ignition[932]: INFO : Ignition 2.20.0 Sep 12 10:17:54.646290 ignition[932]: INFO : Stage: mount Sep 12 10:17:54.684611 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:54.684611 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:54.684611 ignition[932]: INFO : mount: mount passed Sep 12 10:17:54.684611 ignition[932]: INFO : Ignition finished successfully Sep 12 10:17:54.649518 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 10:17:54.731419 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 10:17:54.775929 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 10:17:54.814138 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (945) Sep 12 10:17:54.816995 kernel: BTRFS info (device vda6): first mount of filesystem 36a15e30-b48e-4687-be9c-f68c3ae1825b Sep 12 10:17:54.817043 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 10:17:54.817059 kernel: BTRFS info (device vda6): using free space tree Sep 12 10:17:54.822113 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 10:17:54.823905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 10:17:54.912230 ignition[962]: INFO : Ignition 2.20.0 Sep 12 10:17:54.912230 ignition[962]: INFO : Stage: files Sep 12 10:17:54.914421 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:54.914421 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:54.914421 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Sep 12 10:17:54.914421 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 10:17:54.914421 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 10:17:54.921184 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 10:17:54.921184 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 10:17:54.921184 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 10:17:54.921184 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 10:17:54.921184 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 10:17:54.917889 unknown[962]: wrote ssh authorized keys file for user: core Sep 12 10:17:54.947609 systemd-networkd[778]: eth0: Gained IPv6LL Sep 12 10:17:54.967649 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 10:17:55.049822 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 10:17:55.049822 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:17:55.053878 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 10:17:55.133743 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 10:17:55.369953 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 10:17:55.369953 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 10:17:55.381786 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 10:17:55.591029 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 10:17:56.438468 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 10:17:56.438468 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 10:17:56.443299 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 10:17:56.483176 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 10:17:56.489059 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 10:17:56.491177 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 10:17:56.491177 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 10:17:56.491177 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 10:17:56.491177 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:17:56.491177 ignition[962]: INFO : 
files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 10:17:56.491177 ignition[962]: INFO : files: files passed Sep 12 10:17:56.491177 ignition[962]: INFO : Ignition finished successfully Sep 12 10:17:56.507189 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 10:17:56.520544 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 10:17:56.523964 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 10:17:56.529513 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 10:17:56.530586 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 10:17:56.555291 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 10:17:56.561429 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:17:56.561429 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:17:56.565063 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 10:17:56.568779 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 10:17:56.569470 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 10:17:56.579252 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 10:17:56.613148 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 10:17:56.614284 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 10:17:56.617029 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 10:17:56.619280 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 10:17:56.621553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 10:17:56.624199 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 10:17:56.644818 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 10:17:56.674804 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 10:17:56.689688 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:17:56.696351 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:17:56.699046 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 10:17:56.700891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 10:17:56.701916 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 10:17:56.704503 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 10:17:56.706503 systemd[1]: Stopped target basic.target - Basic System. Sep 12 10:17:56.708329 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 10:17:56.710529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 10:17:56.712952 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 10:17:56.715452 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 12 10:17:56.717580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 10:17:56.720187 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 10:17:56.722423 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 10:17:56.724565 systemd[1]: Stopped target swap.target - Swaps. Sep 12 10:17:56.726395 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 10:17:56.727615 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 10:17:56.730254 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:17:56.732577 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:17:56.749406 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 10:17:56.750437 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:17:56.752999 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 10:17:56.754024 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 10:17:56.756297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 10:17:56.757395 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 10:17:56.759889 systemd[1]: Stopped target paths.target - Path Units. Sep 12 10:17:56.761933 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 10:17:56.767167 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 10:17:56.769994 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 10:17:56.771914 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 10:17:56.773896 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 10:17:56.774786 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 10:17:56.776870 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 10:17:56.777970 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 10:17:56.780350 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 10:17:56.781738 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 10:17:56.784602 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 10:17:56.785762 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 10:17:56.798451 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 10:17:56.800520 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 10:17:56.801603 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:17:56.805148 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 10:17:56.806076 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 10:17:56.806234 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:17:56.810521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 10:17:56.810651 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 10:17:56.818743 ignition[1016]: INFO : Ignition 2.20.0 Sep 12 10:17:56.818743 ignition[1016]: INFO : Stage: umount Sep 12 10:17:56.818743 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 10:17:56.818743 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 10:17:56.831522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 10:17:57.018619 ignition[1016]: INFO : umount: umount passed Sep 12 10:17:57.018619 ignition[1016]: INFO : Ignition finished successfully Sep 12 10:17:56.831720 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 10:17:57.014275 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 10:17:57.014446 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 10:17:57.017137 systemd[1]: Stopped target network.target - Network. Sep 12 10:17:57.019999 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 10:17:57.020100 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 10:17:57.020977 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 10:17:57.021047 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 10:17:57.021506 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 10:17:57.021571 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 10:17:57.025642 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 10:17:57.025711 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 10:17:57.027802 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 10:17:57.029787 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 10:17:57.036640 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 10:17:57.036793 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 10:17:57.043093 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 10:17:57.043419 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 10:17:57.043564 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 10:17:57.047530 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 10:17:57.048298 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 10:17:57.048363 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:17:57.056208 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 10:17:57.058541 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 10:17:57.059815 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 10:17:57.095858 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:17:57.095970 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:17:57.099600 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 10:17:57.099688 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 10:17:57.103225 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 10:17:57.103292 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:17:57.107852 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 10:17:57.150194 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 10:17:57.151202 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 10:17:57.152386 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:17:57.159451 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 10:17:57.160712 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:17:57.164056 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 10:17:57.164181 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 10:17:57.168774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 10:17:57.168844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 10:17:57.171995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 10:17:57.172057 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:17:57.173548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 10:17:57.173617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 10:17:57.221445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 10:17:57.221557 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 10:17:57.224760 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 10:17:57.225890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 10:17:57.238451 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 10:17:57.239640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 10:17:57.239747 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:17:57.243153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:17:57.243226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:17:57.248731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 10:17:57.248819 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 10:17:57.249437 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 10:17:57.249602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 10:17:57.250811 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 10:17:57.250951 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 10:17:57.255436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 10:17:57.258030 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 10:17:57.258168 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 10:17:57.265331 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 10:17:57.275667 systemd[1]: Switching root. Sep 12 10:17:57.344948 systemd-journald[194]: Journal stopped Sep 12 10:17:59.294438 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Sep 12 10:17:59.294523 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 10:17:59.294542 kernel: SELinux: policy capability open_perms=1 Sep 12 10:17:59.294554 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 10:17:59.294566 kernel: SELinux: policy capability always_check_network=0 Sep 12 10:17:59.294578 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 10:17:59.294590 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 10:17:59.294619 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 10:17:59.294633 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 10:17:59.294648 kernel: audit: type=1403 audit(1757672278.357:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 10:17:59.294674 systemd[1]: Successfully loaded SELinux policy in 45.398ms. Sep 12 10:17:59.294699 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.925ms. Sep 12 10:17:59.294921 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 10:17:59.294937 systemd[1]: Detected virtualization kvm. Sep 12 10:17:59.294956 systemd[1]: Detected architecture x86-64. Sep 12 10:17:59.294969 systemd[1]: Detected first boot. Sep 12 10:17:59.294981 systemd[1]: Initializing machine ID from VM UUID. Sep 12 10:17:59.294993 zram_generator::config[1062]: No configuration found. Sep 12 10:17:59.295007 kernel: Guest personality initialized and is inactive Sep 12 10:17:59.295023 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 10:17:59.295035 kernel: Initialized host personality Sep 12 10:17:59.295047 kernel: NET: Registered PF_VSOCK protocol family Sep 12 10:17:59.295059 systemd[1]: Populated /etc with preset unit settings. Sep 12 10:17:59.295072 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 10:17:59.295116 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 10:17:59.295136 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 10:17:59.295152 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 10:17:59.295192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 10:17:59.295218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 10:17:59.295240 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 10:17:59.295257 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 10:17:59.295274 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 10:17:59.295290 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 10:17:59.295320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 10:17:59.295335 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 10:17:59.295351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 10:17:59.295373 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 12 10:17:59.295388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 10:17:59.295405 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 10:17:59.295420 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 10:17:59.295436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 10:17:59.295451 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 10:17:59.295468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 10:17:59.295484 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 10:17:59.295505 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 10:17:59.295522 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 10:17:59.295539 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 10:17:59.295555 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 10:17:59.295572 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 10:17:59.295588 systemd[1]: Reached target slices.target - Slice Units. Sep 12 10:17:59.295614 systemd[1]: Reached target swap.target - Swaps. Sep 12 10:17:59.295629 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 10:17:59.295645 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 10:17:59.295666 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 10:17:59.295695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 10:17:59.295709 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 10:17:59.295722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 10:17:59.295734 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 10:17:59.295747 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 10:17:59.295759 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 10:17:59.295776 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 10:17:59.295792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:17:59.295813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 10:17:59.295829 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 10:17:59.295845 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 10:17:59.295862 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 10:17:59.295878 systemd[1]: Reached target machines.target - Containers. Sep 12 10:17:59.295894 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 10:17:59.295910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:17:59.295925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 12 10:17:59.295942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 10:17:59.295955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:17:59.295968 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 10:17:59.295981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:17:59.295993 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 10:17:59.296007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:17:59.296020 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 10:17:59.296033 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 10:17:59.296051 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 10:17:59.296068 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 10:17:59.296118 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 10:17:59.296135 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:17:59.296156 kernel: fuse: init (API version 7.39) Sep 12 10:17:59.296178 kernel: loop: module loaded Sep 12 10:17:59.296196 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 10:17:59.296213 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 10:17:59.296228 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 10:17:59.296283 systemd-journald[1133]: Collecting audit messages is disabled. Sep 12 10:17:59.296321 kernel: ACPI: bus type drm_connector registered Sep 12 10:17:59.296336 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 10:17:59.296348 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 10:17:59.296364 systemd-journald[1133]: Journal started Sep 12 10:17:59.296389 systemd-journald[1133]: Runtime Journal (/run/log/journal/1734c7d69df8478697e4ec396415b99e) is 6M, max 48.2M, 42.2M free. Sep 12 10:17:59.041634 systemd[1]: Queued start job for default target multi-user.target. Sep 12 10:17:59.056869 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 10:17:59.057534 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 10:17:59.302124 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 10:17:59.305631 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 10:17:59.305732 systemd[1]: Stopped verity-setup.service. Sep 12 10:17:59.309114 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:17:59.314008 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 10:17:59.315137 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 10:17:59.317834 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 10:17:59.319315 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 12 10:17:59.320635 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 10:17:59.322035 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 10:17:59.323310 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 10:17:59.324850 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 10:17:59.326508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 10:17:59.328186 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 10:17:59.328452 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 10:17:59.330053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:17:59.330311 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:17:59.332024 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 10:17:59.332307 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 10:17:59.333905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:17:59.334147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:17:59.335880 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 10:17:59.336117 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 10:17:59.337736 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:17:59.337960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:17:59.339681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 10:17:59.341343 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 10:17:59.343038 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 10:17:59.344892 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 10:17:59.365035 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 10:17:59.379325 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 10:17:59.382310 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 10:17:59.383655 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 10:17:59.383693 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 10:17:59.386156 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 10:17:59.389074 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 10:17:59.395286 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 10:17:59.396837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:17:59.401417 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 10:17:59.405753 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 10:17:59.408184 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 10:17:59.416368 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 12 10:17:59.417796 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 10:17:59.420063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:17:59.427246 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 10:17:59.435580 systemd-journald[1133]: Time spent on flushing to /var/log/journal/1734c7d69df8478697e4ec396415b99e is 25.280ms for 1060 entries. Sep 12 10:17:59.435580 systemd-journald[1133]: System Journal (/var/log/journal/1734c7d69df8478697e4ec396415b99e) is 8M, max 195.6M, 187.6M free. Sep 12 10:17:59.515432 systemd-journald[1133]: Received client request to flush runtime journal. Sep 12 10:17:59.515483 kernel: loop0: detected capacity change from 0 to 147912 Sep 12 10:17:59.515513 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 10:17:59.435747 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 10:17:59.443373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 10:17:59.454403 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 10:17:59.455969 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 10:17:59.457600 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 10:17:59.476038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 10:17:59.482599 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 10:17:59.491260 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 10:17:59.499254 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 10:17:59.515560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:17:59.519113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 10:17:59.535108 kernel: loop1: detected capacity change from 0 to 221472 Sep 12 10:17:59.544708 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 10:17:59.564019 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 10:17:59.566525 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 10:17:59.576557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 10:17:59.586146 kernel: loop2: detected capacity change from 0 to 138176 Sep 12 10:17:59.627792 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Sep 12 10:17:59.627818 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Sep 12 10:17:59.639429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 10:17:59.647121 kernel: loop3: detected capacity change from 0 to 147912 Sep 12 10:17:59.674116 kernel: loop4: detected capacity change from 0 to 221472 Sep 12 10:17:59.690118 kernel: loop5: detected capacity change from 0 to 138176 Sep 12 10:17:59.711328 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 10:17:59.712181 (sd-merge)[1207]: Merged extensions into '/usr'. Sep 12 10:17:59.720808 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... 
Sep 12 10:17:59.720834 systemd[1]: Reloading... Sep 12 10:17:59.824117 zram_generator::config[1231]: No configuration found. Sep 12 10:17:59.954059 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 10:17:59.982221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:18:00.055770 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 10:18:00.056446 systemd[1]: Reloading finished in 334 ms. Sep 12 10:18:00.086005 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 10:18:00.090641 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 10:18:00.110871 systemd[1]: Starting ensure-sysext.service... Sep 12 10:18:00.113030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 10:18:00.153517 systemd[1]: Reload requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)... Sep 12 10:18:00.153534 systemd[1]: Reloading... Sep 12 10:18:00.169071 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 10:18:00.169406 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 10:18:00.170456 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 10:18:00.170771 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Sep 12 10:18:00.170858 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Sep 12 10:18:00.175471 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 10:18:00.175660 systemd-tmpfiles[1273]: Skipping /boot Sep 12 10:18:00.195219 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 10:18:00.195393 systemd-tmpfiles[1273]: Skipping /boot Sep 12 10:18:00.244421 zram_generator::config[1302]: No configuration found. Sep 12 10:18:00.603605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:18:00.680065 systemd[1]: Reloading finished in 526 ms. Sep 12 10:18:00.720851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 10:18:00.730318 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:18:00.769503 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 10:18:00.772392 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 10:18:00.778001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 10:18:00.780550 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 10:18:00.785751 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:18:00.785967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:18:00.793414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 12 10:18:00.797151 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:18:00.802188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:18:00.803791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:18:00.803902 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:18:00.803998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:18:00.805914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:18:00.806179 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:18:00.808882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:18:00.809220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:18:00.816310 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:18:00.816533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:18:00.830101 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:18:00.830873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:18:00.836790 augenrules[1371]: No rules Sep 12 10:18:00.838392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:18:00.850228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:18:00.852705 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:18:00.854016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:18:00.854223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:18:00.858343 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 10:18:00.859416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:18:00.861199 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 10:18:00.863007 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:18:00.863291 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:18:00.864899 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 10:18:00.866706 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 10:18:00.877614 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 10:18:00.879477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:18:00.879750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 12 10:18:00.881419 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:18:00.881690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:18:00.883418 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:18:00.883683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:18:00.947639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:18:00.964416 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:18:00.965515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 10:18:00.967014 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 10:18:00.974139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 10:18:00.978492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 10:18:00.981213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 10:18:00.982384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 10:18:00.983124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 10:18:00.985254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 10:18:00.988419 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 10:18:00.989591 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 10:18:00.989745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 10:18:00.992097 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 10:18:00.996861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 10:18:00.997149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 10:18:01.000539 augenrules[1387]: /sbin/augenrules: No change Sep 12 10:18:01.009263 augenrules[1415]: No rules Sep 12 10:18:01.016184 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:18:01.016592 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:18:01.018448 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 10:18:01.018728 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 10:18:01.020679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 10:18:01.021002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 10:18:01.023155 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 10:18:01.023458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 10:18:01.025465 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 10:18:01.035470 systemd[1]: Finished ensure-sysext.service. 
Sep 12 10:18:01.044587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 10:18:01.044684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 10:18:01.051376 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 10:18:01.056629 systemd-udevd[1402]: Using default interface naming scheme 'v255'. Sep 12 10:18:01.078659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 10:18:01.102952 systemd-resolved[1346]: Positive Trust Anchors: Sep 12 10:18:01.102973 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 10:18:01.103016 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 10:18:01.104296 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 10:18:01.111582 systemd-resolved[1346]: Defaulting to hostname 'linux'. Sep 12 10:18:01.122160 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 10:18:01.127198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 10:18:01.139602 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 10:18:01.158568 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 10:18:01.160377 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 10:18:01.166130 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1439) Sep 12 10:18:01.216964 systemd-networkd[1447]: lo: Link UP Sep 12 10:18:01.216977 systemd-networkd[1447]: lo: Gained carrier Sep 12 10:18:01.219340 systemd-networkd[1447]: Enumeration completed Sep 12 10:18:01.221165 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:18:01.221179 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 10:18:01.222350 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:18:01.222391 systemd-networkd[1447]: eth0: Link UP Sep 12 10:18:01.222396 systemd-networkd[1447]: eth0: Gained carrier Sep 12 10:18:01.222410 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 10:18:01.223745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 10:18:01.225559 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 10:18:01.246253 systemd[1]: Reached target network.target - Network. 
Sep 12 10:18:01.250103 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 10:18:01.256249 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 10:18:01.263875 systemd-networkd[1447]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 10:18:01.271144 kernel: ACPI: button: Power Button [PWRF] Sep 12 10:18:01.271901 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. Sep 12 10:18:01.272580 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 10:18:03.313387 systemd-resolved[1346]: Clock change detected. Flushing caches. Sep 12 10:18:03.313489 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 10:18:03.313562 systemd-timesyncd[1427]: Initial clock synchronization to Fri 2025-09-12 10:18:03.313344 UTC. Sep 12 10:18:03.317034 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 10:18:03.354084 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 10:18:03.354465 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 10:18:03.354661 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 10:18:03.355861 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 10:18:03.359698 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 10:18:03.379631 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 10:18:03.396126 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 10:18:03.412156 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 10:18:03.414915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:18:03.456038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 10:18:03.456776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:18:03.461651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 10:18:03.486260 kernel: kvm_amd: TSC scaling supported Sep 12 10:18:03.486348 kernel: kvm_amd: Nested Virtualization enabled Sep 12 10:18:03.486367 kernel: kvm_amd: Nested Paging enabled Sep 12 10:18:03.487310 kernel: kvm_amd: LBR virtualization supported Sep 12 10:18:03.487350 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 10:18:03.488410 kernel: kvm_amd: Virtual GIF supported Sep 12 10:18:03.510096 kernel: EDAC MC: Ver: 3.0.0 Sep 12 10:18:03.531140 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 10:18:03.542463 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 10:18:03.556243 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 10:18:03.564601 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:18:03.598852 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 10:18:03.600574 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 10:18:03.601683 systemd[1]: Reached target sysinit.target - System Initialization. 
Sep 12 10:18:03.602887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 10:18:03.604134 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 10:18:03.617503 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 10:18:03.618685 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 10:18:03.619942 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 10:18:03.621261 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 10:18:03.621295 systemd[1]: Reached target paths.target - Path Units. Sep 12 10:18:03.622242 systemd[1]: Reached target timers.target - Timer Units. Sep 12 10:18:03.624250 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 10:18:03.627250 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 10:18:03.631617 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 10:18:03.633015 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 10:18:03.638889 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 10:18:03.643593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 10:18:03.645183 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 10:18:03.647815 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 10:18:03.649522 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 10:18:03.650736 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 10:18:03.651704 systemd[1]: Reached target basic.target - Basic System. Sep 12 10:18:03.652670 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:18:03.652704 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 10:18:03.653845 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 10:18:03.656171 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 10:18:03.660219 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 10:18:03.662711 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 10:18:03.670130 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 10:18:03.673151 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 10:18:03.673267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 10:18:03.675450 jq[1488]: false Sep 12 10:18:03.677303 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 10:18:03.682205 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 10:18:03.686682 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 10:18:03.693296 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 12 10:18:03.704005 extend-filesystems[1489]: Found loop3 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found loop4 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found loop5 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found sr0 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda1 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda2 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda3 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found usr Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda4 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda6 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda7 Sep 12 10:18:03.705394 extend-filesystems[1489]: Found vda9 Sep 12 10:18:03.705394 extend-filesystems[1489]: Checking size of /dev/vda9 Sep 12 10:18:03.735122 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 10:18:03.704314 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 10:18:03.735304 extend-filesystems[1489]: Resized partition /dev/vda9 Sep 12 10:18:03.708866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 10:18:03.738216 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024) Sep 12 10:18:03.721352 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 10:18:03.740922 jq[1510]: true Sep 12 10:18:03.726560 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 10:18:03.733370 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 10:18:03.738688 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 10:18:03.738985 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 10:18:03.739472 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 10:18:03.739740 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 10:18:03.743407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 10:18:03.743690 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 10:18:03.746142 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1429) Sep 12 10:18:03.752915 dbus-daemon[1487]: [system] SELinux support is enabled Sep 12 10:18:03.754074 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 10:18:03.760802 jq[1513]: true Sep 12 10:18:03.771118 update_engine[1504]: I20250912 10:18:03.769943 1504 main.cc:92] Flatcar Update Engine starting Sep 12 10:18:03.779096 update_engine[1504]: I20250912 10:18:03.779008 1504 update_check_scheduler.cc:74] Next update check in 11m46s Sep 12 10:18:03.783538 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 10:18:03.784002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 10:18:03.784046 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 12 10:18:03.791808 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 10:18:03.791830 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 10:18:03.798715 tar[1512]: linux-amd64/helm Sep 12 10:18:03.805190 systemd[1]: Started update-engine.service - Update Engine. Sep 12 10:18:03.821259 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 10:18:03.925221 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 10:18:03.990616 systemd-logind[1498]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 10:18:03.990655 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 10:18:03.991394 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 10:18:03.991394 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 10:18:03.991394 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 10:18:03.997527 extend-filesystems[1489]: Resized filesystem in /dev/vda9 Sep 12 10:18:03.992829 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 10:18:03.997931 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 10:18:03.993780 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 10:18:03.994088 systemd-logind[1498]: New seat seat0. Sep 12 10:18:03.994718 locksmithd[1541]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 10:18:04.000494 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 10:18:04.010591 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Sep 12 10:18:04.011999 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 10:18:04.015386 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 10:18:04.026964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 10:18:04.041646 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 10:18:04.047968 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 10:18:04.048413 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 10:18:04.061031 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 10:18:04.298231 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 10:18:04.347606 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 10:18:04.372263 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 10:18:04.373805 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 10:18:04.632137 containerd[1518]: time="2025-09-12T10:18:04.631944918Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 12 10:18:04.663291 containerd[1518]: time="2025-09-12T10:18:04.663227724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.665927756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.105-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.665963273Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.665984613Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666307979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666334769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666442571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666460976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666809369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666829377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666846569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668079 containerd[1518]: time="2025-09-12T10:18:04.666876285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668393 containerd[1518]: time="2025-09-12T10:18:04.667024442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668393 containerd[1518]: time="2025-09-12T10:18:04.667404225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668393 containerd[1518]: time="2025-09-12T10:18:04.667637442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 10:18:04.668393 containerd[1518]: time="2025-09-12T10:18:04.667657099Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 10:18:04.668393 containerd[1518]: time="2025-09-12T10:18:04.667834642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 10:18:04.668393 containerd[1518]: time="2025-09-12T10:18:04.667926384Z" level=info msg="metadata content store policy set" policy=shared Sep 12 10:18:04.675338 containerd[1518]: time="2025-09-12T10:18:04.675261848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 10:18:04.675338 containerd[1518]: time="2025-09-12T10:18:04.675324715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 10:18:04.675338 containerd[1518]: time="2025-09-12T10:18:04.675344252Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 10:18:04.675338 containerd[1518]: time="2025-09-12T10:18:04.675362366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 10:18:04.675631 containerd[1518]: time="2025-09-12T10:18:04.675378586Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 10:18:04.675631 containerd[1518]: time="2025-09-12T10:18:04.675586296Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 10:18:04.675882 containerd[1518]: time="2025-09-12T10:18:04.675846915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 10:18:04.676028 containerd[1518]: time="2025-09-12T10:18:04.676002096Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 10:18:04.676028 containerd[1518]: time="2025-09-12T10:18:04.676026221Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 10:18:04.676095 containerd[1518]: time="2025-09-12T10:18:04.676045307Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 10:18:04.676095 containerd[1518]: time="2025-09-12T10:18:04.676080172Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676147 containerd[1518]: time="2025-09-12T10:18:04.676097064Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676147 containerd[1518]: time="2025-09-12T10:18:04.676112182Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676147 containerd[1518]: time="2025-09-12T10:18:04.676128462Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676201 containerd[1518]: time="2025-09-12T10:18:04.676165131Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676201 containerd[1518]: time="2025-09-12T10:18:04.676183876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676201 containerd[1518]: time="2025-09-12T10:18:04.676199295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 10:18:04.676265 containerd[1518]: time="2025-09-12T10:18:04.676212630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 12 10:18:04.676265 containerd[1518]: time="2025-09-12T10:18:04.676235934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676265 containerd[1518]: time="2025-09-12T10:18:04.676251804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676324 containerd[1518]: time="2025-09-12T10:18:04.676266261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676324 containerd[1518]: time="2025-09-12T10:18:04.676280858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676324 containerd[1518]: time="2025-09-12T10:18:04.676295756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676324 containerd[1518]: time="2025-09-12T10:18:04.676319851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676405 containerd[1518]: time="2025-09-12T10:18:04.676333828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676405 containerd[1518]: time="2025-09-12T10:18:04.676348525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676405 containerd[1518]: time="2025-09-12T10:18:04.676389231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676469 containerd[1518]: time="2025-09-12T10:18:04.676409229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676469 containerd[1518]: time="2025-09-12T10:18:04.676432663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676469 containerd[1518]: time="2025-09-12T10:18:04.676449575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676469 containerd[1518]: time="2025-09-12T10:18:04.676463831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676565 containerd[1518]: time="2025-09-12T10:18:04.676480182Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 10:18:04.676565 containerd[1518]: time="2025-09-12T10:18:04.676515298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676565 containerd[1518]: time="2025-09-12T10:18:04.676529424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676565 containerd[1518]: time="2025-09-12T10:18:04.676541858Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 10:18:04.676643 containerd[1518]: time="2025-09-12T10:18:04.676592122Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 10:18:04.676643 containerd[1518]: time="2025-09-12T10:18:04.676612280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 10:18:04.676643 containerd[1518]: time="2025-09-12T10:18:04.676624272Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 10:18:04.676643 containerd[1518]: time="2025-09-12T10:18:04.676637477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 10:18:04.676751 containerd[1518]: time="2025-09-12T10:18:04.676648948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.676751 containerd[1518]: time="2025-09-12T10:18:04.676662945Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 10:18:04.676751 containerd[1518]: time="2025-09-12T10:18:04.676676811Z" level=info msg="NRI interface is disabled by configuration." Sep 12 10:18:04.676751 containerd[1518]: time="2025-09-12T10:18:04.676687861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 10:18:04.677085 containerd[1518]: time="2025-09-12T10:18:04.677015135Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 10:18:04.677359 containerd[1518]: time="2025-09-12T10:18:04.677088462Z" level=info msg="Connect containerd service" Sep 12 10:18:04.677359 containerd[1518]: time="2025-09-12T10:18:04.677170927Z" level=info msg="using legacy CRI server" Sep 12 10:18:04.677359 containerd[1518]: time="2025-09-12T10:18:04.677181046Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 10:18:04.677359 containerd[1518]: time="2025-09-12T10:18:04.677305039Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 10:18:04.679303 containerd[1518]: time="2025-09-12T10:18:04.679261787Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679508660Z" level=info msg="Start subscribing containerd event" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679557121Z" level=info msg="Start recovering state" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679649104Z" level=info msg="Start event monitor" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679662619Z" level=info msg="Start snapshots syncer" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679673339Z" level=info msg="Start cni network conf syncer for default" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679682306Z" level=info msg="Start streaming server" Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679774369Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 10:18:04.679903 containerd[1518]: time="2025-09-12T10:18:04.679848227Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 10:18:04.680069 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 10:18:04.680427 containerd[1518]: time="2025-09-12T10:18:04.680209605Z" level=info msg="containerd successfully booted in 0.049970s" Sep 12 10:18:04.741610 tar[1512]: linux-amd64/LICENSE Sep 12 10:18:04.741610 tar[1512]: linux-amd64/README.md Sep 12 10:18:04.769787 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 10:18:04.923456 systemd-networkd[1447]: eth0: Gained IPv6LL Sep 12 10:18:04.927795 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 10:18:04.929945 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 10:18:04.940457 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 10:18:04.943827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:04.946538 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 10:18:04.975625 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 10:18:04.975964 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 10:18:04.978238 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 12 10:18:04.980996 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 10:18:06.152411 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 10:18:06.168616 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704). Sep 12 10:18:06.258260 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:06.262346 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:06.272694 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 10:18:06.285703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 10:18:06.296383 systemd-logind[1498]: New session 1 of user core. Sep 12 10:18:06.320643 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 10:18:06.340153 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 10:18:06.347167 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 10:18:06.351110 systemd-logind[1498]: New session c1 of user core. Sep 12 10:18:06.500802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:06.502793 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 10:18:06.507826 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:18:06.593242 systemd[1600]: Queued start job for default target default.target. Sep 12 10:18:06.640692 systemd[1600]: Created slice app.slice - User Application Slice. Sep 12 10:18:06.640728 systemd[1600]: Reached target paths.target - Paths. Sep 12 10:18:06.640781 systemd[1600]: Reached target timers.target - Timers. Sep 12 10:18:06.644390 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 10:18:06.666186 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 10:18:06.666399 systemd[1600]: Reached target sockets.target - Sockets. Sep 12 10:18:06.666479 systemd[1600]: Reached target basic.target - Basic System. Sep 12 10:18:06.666538 systemd[1600]: Reached target default.target - Main User Target. Sep 12 10:18:06.666598 systemd[1600]: Startup finished in 300ms. Sep 12 10:18:06.669785 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 10:18:06.681363 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 10:18:06.685147 systemd[1]: Startup finished in 1.353s (kernel) + 7.606s (initrd) + 6.332s (userspace) = 15.293s. Sep 12 10:18:06.812669 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:60710.service - OpenSSH per-connection server daemon (10.0.0.1:60710). Sep 12 10:18:06.872774 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 60710 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:06.873520 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:06.879188 systemd-logind[1498]: New session 2 of user core. Sep 12 10:18:06.880595 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 10:18:06.938783 sshd[1625]: Connection closed by 10.0.0.1 port 60710 Sep 12 10:18:06.939372 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:06.952700 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:60710.service: Deactivated successfully. Sep 12 10:18:06.955227 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 10:18:06.957126 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit. Sep 12 10:18:06.967363 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:60726.service - OpenSSH per-connection server daemon (10.0.0.1:60726). Sep 12 10:18:06.968660 systemd-logind[1498]: Removed session 2. Sep 12 10:18:07.013650 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 60726 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:07.016021 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:07.022851 systemd-logind[1498]: New session 3 of user core. Sep 12 10:18:07.032374 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 10:18:07.084677 sshd[1636]: Connection closed by 10.0.0.1 port 60726 Sep 12 10:18:07.085016 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:07.099421 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:60726.service: Deactivated successfully. Sep 12 10:18:07.101795 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 10:18:07.103490 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit. Sep 12 10:18:07.133501 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:60734.service - OpenSSH per-connection server daemon (10.0.0.1:60734). Sep 12 10:18:07.134762 systemd-logind[1498]: Removed session 3. Sep 12 10:18:07.172318 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 60734 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:07.175109 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:07.180335 systemd-logind[1498]: New session 4 of user core. Sep 12 10:18:07.196205 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 10:18:07.255575 sshd[1644]: Connection closed by 10.0.0.1 port 60734 Sep 12 10:18:07.256106 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:07.266702 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:60734.service: Deactivated successfully. Sep 12 10:18:07.268855 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 10:18:07.270592 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit. Sep 12 10:18:07.280566 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:60744.service - OpenSSH per-connection server daemon (10.0.0.1:60744). Sep 12 10:18:07.282183 systemd-logind[1498]: Removed session 4. Sep 12 10:18:07.319564 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 60744 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:07.321283 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:07.326834 systemd-logind[1498]: New session 5 of user core. Sep 12 10:18:07.340373 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 10:18:07.418413 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 10:18:07.419108 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:18:07.441165 sudo[1653]: pam_unix(sudo:session): session closed for user root Sep 12 10:18:07.443594 sshd[1652]: Connection closed by 10.0.0.1 port 60744 Sep 12 10:18:07.444153 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:07.458579 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:60744.service: Deactivated successfully. Sep 12 10:18:07.460963 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 10:18:07.462862 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit. Sep 12 10:18:07.474366 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:60760.service - OpenSSH per-connection server daemon (10.0.0.1:60760). Sep 12 10:18:07.475765 systemd-logind[1498]: Removed session 5. Sep 12 10:18:07.514800 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 60760 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:07.516985 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:07.522471 systemd-logind[1498]: New session 6 of user core. Sep 12 10:18:07.532194 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 10:18:07.591556 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 10:18:07.591947 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:18:07.596709 sudo[1663]: pam_unix(sudo:session): session closed for user root Sep 12 10:18:07.606280 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 10:18:07.606825 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:18:07.628355 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 10:18:07.701621 augenrules[1686]: No rules Sep 12 10:18:07.703721 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 10:18:07.704157 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 10:18:07.705828 sudo[1662]: pam_unix(sudo:session): session closed for user root Sep 12 10:18:07.707877 sshd[1661]: Connection closed by 10.0.0.1 port 60760 Sep 12 10:18:07.708318 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:07.721015 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:60760.service: Deactivated successfully. Sep 12 10:18:07.723191 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 10:18:07.724947 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit. Sep 12 10:18:07.733535 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:60770.service - OpenSSH per-connection server daemon (10.0.0.1:60770). Sep 12 10:18:07.735152 systemd-logind[1498]: Removed session 6. Sep 12 10:18:07.773879 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 60770 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:18:07.775689 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:18:07.780443 systemd-logind[1498]: New session 7 of user core. Sep 12 10:18:07.797335 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 10:18:07.851402 kubelet[1611]: E0912 10:18:07.851212 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:18:07.854158 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 10:18:07.854578 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 10:18:07.856865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:18:07.857190 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:18:07.857702 systemd[1]: kubelet.service: Consumed 2.127s CPU time, 265.7M memory peak. Sep 12 10:18:08.353323 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 10:18:08.353568 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 10:18:08.930397 dockerd[1718]: time="2025-09-12T10:18:08.930299702Z" level=info msg="Starting up" Sep 12 10:18:10.198578 dockerd[1718]: time="2025-09-12T10:18:10.198479008Z" level=info msg="Loading containers: start." Sep 12 10:18:10.465098 kernel: Initializing XFRM netlink socket Sep 12 10:18:10.580267 systemd-networkd[1447]: docker0: Link UP Sep 12 10:18:10.631982 dockerd[1718]: time="2025-09-12T10:18:10.631907567Z" level=info msg="Loading containers: done." Sep 12 10:18:10.662989 dockerd[1718]: time="2025-09-12T10:18:10.662247173Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 10:18:10.662989 dockerd[1718]: time="2025-09-12T10:18:10.662476914Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 12 10:18:10.662989 dockerd[1718]: time="2025-09-12T10:18:10.662681137Z" level=info msg="Daemon has completed initialization" Sep 12 10:18:10.713683 dockerd[1718]: time="2025-09-12T10:18:10.713588569Z" level=info msg="API listen on /run/docker.sock" Sep 12 10:18:10.713811 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 10:18:11.709668 containerd[1518]: time="2025-09-12T10:18:11.709606011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 10:18:12.305674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135384646.mount: Deactivated successfully. 
Sep 12 10:18:13.631358 containerd[1518]: time="2025-09-12T10:18:13.631220758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:13.631982 containerd[1518]: time="2025-09-12T10:18:13.631911533Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 10:18:13.634256 containerd[1518]: time="2025-09-12T10:18:13.633665793Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:13.638092 containerd[1518]: time="2025-09-12T10:18:13.637980042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:13.639671 containerd[1518]: time="2025-09-12T10:18:13.639600220Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.92993615s" Sep 12 10:18:13.639671 containerd[1518]: time="2025-09-12T10:18:13.639654642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 10:18:13.640752 containerd[1518]: time="2025-09-12T10:18:13.640704831Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 10:18:15.572535 containerd[1518]: time="2025-09-12T10:18:15.572446606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:15.573440 containerd[1518]: time="2025-09-12T10:18:15.573396768Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 10:18:15.574611 containerd[1518]: time="2025-09-12T10:18:15.574553386Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:15.577422 containerd[1518]: time="2025-09-12T10:18:15.577387791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:15.578647 containerd[1518]: time="2025-09-12T10:18:15.578594574Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.93783995s" Sep 12 10:18:15.578647 containerd[1518]: time="2025-09-12T10:18:15.578639458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 
10:18:15.579347 containerd[1518]: time="2025-09-12T10:18:15.579301880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 10:18:17.251227 containerd[1518]: time="2025-09-12T10:18:17.251115159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:17.252968 containerd[1518]: time="2025-09-12T10:18:17.252892271Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 10:18:17.255665 containerd[1518]: time="2025-09-12T10:18:17.255574360Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:17.260417 containerd[1518]: time="2025-09-12T10:18:17.260263492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:17.262035 containerd[1518]: time="2025-09-12T10:18:17.261993366Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.682654036s" Sep 12 10:18:17.262035 containerd[1518]: time="2025-09-12T10:18:17.262033561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 10:18:17.263067 containerd[1518]: time="2025-09-12T10:18:17.262747710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 10:18:18.107592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 10:18:18.118261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:18.377017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:18.383559 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 10:18:18.652477 kubelet[1992]: E0912 10:18:18.652283 1992 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 10:18:18.660687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 10:18:18.660926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 10:18:18.661358 systemd[1]: kubelet.service: Consumed 336ms CPU time, 111.2M memory peak. Sep 12 10:18:19.912444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184847788.mount: Deactivated successfully. 
Sep 12 10:18:20.905963 containerd[1518]: time="2025-09-12T10:18:20.905852411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:20.907947 containerd[1518]: time="2025-09-12T10:18:20.907876305Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 10:18:20.909921 containerd[1518]: time="2025-09-12T10:18:20.909866587Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:20.912597 containerd[1518]: time="2025-09-12T10:18:20.912536072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:20.913455 containerd[1518]: time="2025-09-12T10:18:20.913290978Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 3.650495969s" Sep 12 10:18:20.913455 containerd[1518]: time="2025-09-12T10:18:20.913344669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 10:18:20.914179 containerd[1518]: time="2025-09-12T10:18:20.914104694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 10:18:21.539739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247147566.mount: Deactivated successfully. 
Sep 12 10:18:22.587790 containerd[1518]: time="2025-09-12T10:18:22.587713399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:22.588471 containerd[1518]: time="2025-09-12T10:18:22.588379919Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 10:18:22.589883 containerd[1518]: time="2025-09-12T10:18:22.589841349Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:22.593037 containerd[1518]: time="2025-09-12T10:18:22.593003488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:22.594309 containerd[1518]: time="2025-09-12T10:18:22.594250737Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.680072625s" Sep 12 10:18:22.594309 containerd[1518]: time="2025-09-12T10:18:22.594295190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 10:18:22.594896 containerd[1518]: time="2025-09-12T10:18:22.594872793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 10:18:23.030467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475563924.mount: Deactivated successfully. 
Sep 12 10:18:23.037794 containerd[1518]: time="2025-09-12T10:18:23.037724901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:23.038744 containerd[1518]: time="2025-09-12T10:18:23.038687816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 10:18:23.040025 containerd[1518]: time="2025-09-12T10:18:23.039974799Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:23.043143 containerd[1518]: time="2025-09-12T10:18:23.042959596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:23.043826 containerd[1518]: time="2025-09-12T10:18:23.043761319Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 448.859671ms" Sep 12 10:18:23.043826 containerd[1518]: time="2025-09-12T10:18:23.043807726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 10:18:23.044465 containerd[1518]: time="2025-09-12T10:18:23.044431065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 10:18:23.685482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623728928.mount: Deactivated successfully. Sep 12 10:18:25.620638 containerd[1518]: time="2025-09-12T10:18:25.620550939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:25.622081 containerd[1518]: time="2025-09-12T10:18:25.621937840Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 10:18:25.623084 containerd[1518]: time="2025-09-12T10:18:25.623032302Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:25.627481 containerd[1518]: time="2025-09-12T10:18:25.627399690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:25.629146 containerd[1518]: time="2025-09-12T10:18:25.629098225Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.584633477s" Sep 12 10:18:25.629228 containerd[1518]: time="2025-09-12T10:18:25.629148109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 10:18:28.258839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
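The pulls above (kube-proxy, coredns, pause, etcd) are driven by the kubelet through containerd's CRI plugin, with the `ImageCreate` events emitted by containerd's image store as each pull completes. As a rough illustration of the same operation at the containerd API level, here is a minimal Go sketch using the containerd client to pull one of these images into the `k8s.io` namespace. The socket path and image reference come from the log; the rest is an assumed generic containerd v1.7 client setup, not the exact kubelet/CRI code path.

```go
// Minimal sketch (not the kubelet/CRI code path): pull an image with the
// containerd Go client, assuming a containerd v1.7 daemon on the default socket.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same daemon the CRI plugin uses (path from the log).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the pause image referenced in the log above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes of content)\n", img.Name(), size)
}
```

In practice the closer analogue to what the kubelet calls is the CRI ImageService (or `crictl pull` from the command line); the containerd client above just shows the underlying pull that produces the events logged here.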
Sep 12 10:18:28.259028 systemd[1]: kubelet.service: Consumed 336ms CPU time, 111.2M memory peak. Sep 12 10:18:28.275354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:28.304463 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit session-7.scope)... Sep 12 10:18:28.304488 systemd[1]: Reloading... Sep 12 10:18:28.439097 zram_generator::config[2194]: No configuration found. Sep 12 10:18:28.560340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:18:28.690387 systemd[1]: Reloading finished in 385 ms. Sep 12 10:18:28.754606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:28.760760 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:18:28.764011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:28.767180 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:18:28.767544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:28.767618 systemd[1]: kubelet.service: Consumed 166ms CPU time, 99.3M memory peak. Sep 12 10:18:28.769752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:28.947400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:28.966513 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:18:29.018475 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:18:29.018475 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 10:18:29.018475 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
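The deprecation warnings above say that `--container-runtime-endpoint`, `--pod-infra-container-image`, and `--volume-plugin-dir` should move into the kubelet's config file. As a hedged sketch only (field names taken from the public `kubelet.config.k8s.io/v1beta1` API; the actual config file on this host is not shown in the log), this is roughly how such a file could be generated in Go:

```go
// Sketch: emit a minimal KubeletConfiguration covering settings the deprecated
// flags above would otherwise carry. Field availability is assumed from the
// v1beta1 kubelet config API; verify against the kubelet version in use.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Matches CgroupDriver:"systemd" in the node config dump below.
		CgroupDriver: "systemd",
		// Matches the "Adding static pod path" message later in the log.
		StaticPodPath: "/etc/kubernetes/manifests",
		// containerRuntimeEndpoint (the config-file replacement for the
		// deprecated flag) also belongs here on kubelet >= 1.27.
	}

	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```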
Sep 12 10:18:29.019234 kubelet[2242]: I0912 10:18:29.018596 2242 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:18:29.403486 kubelet[2242]: I0912 10:18:29.403428 2242 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 10:18:29.403486 kubelet[2242]: I0912 10:18:29.403470 2242 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:18:29.403754 kubelet[2242]: I0912 10:18:29.403733 2242 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 10:18:29.442080 kubelet[2242]: E0912 10:18:29.442006 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:29.442999 kubelet[2242]: I0912 10:18:29.442942 2242 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:18:29.452969 kubelet[2242]: E0912 10:18:29.452921 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:18:29.452969 kubelet[2242]: I0912 10:18:29.452964 2242 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:18:29.460102 kubelet[2242]: I0912 10:18:29.460036 2242 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 10:18:29.460293 kubelet[2242]: I0912 10:18:29.460264 2242 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 10:18:29.460532 kubelet[2242]: I0912 10:18:29.460476 2242 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:18:29.460711 kubelet[2242]: I0912 10:18:29.460521 2242 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:18:29.460962 kubelet[2242]: I0912 10:18:29.460725 2242 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:18:29.460962 kubelet[2242]: I0912 10:18:29.460735 2242 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 10:18:29.460962 kubelet[2242]: I0912 10:18:29.460921 2242 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:18:29.463505 kubelet[2242]: I0912 10:18:29.463025 2242 kubelet.go:408] "Attempting to sync node with API server" Sep 12 10:18:29.463505 kubelet[2242]: I0912 10:18:29.463095 2242 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:18:29.463505 kubelet[2242]: I0912 10:18:29.463146 2242 kubelet.go:314] "Adding apiserver pod source" Sep 12 10:18:29.463505 kubelet[2242]: I0912 10:18:29.463181 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:18:29.470720 kubelet[2242]: W0912 10:18:29.470593 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:29.470720 kubelet[2242]: E0912 10:18:29.470701 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:29.470720 kubelet[2242]: W0912 10:18:29.470709 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:29.471037 kubelet[2242]: E0912 10:18:29.470775 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:29.473870 kubelet[2242]: I0912 10:18:29.473777 2242 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:18:29.474964 kubelet[2242]: I0912 10:18:29.474580 2242 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:18:29.475610 kubelet[2242]: W0912 10:18:29.475578 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 10:18:29.477982 kubelet[2242]: I0912 10:18:29.477944 2242 server.go:1274] "Started kubelet" Sep 12 10:18:29.478218 kubelet[2242]: I0912 10:18:29.478029 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:18:29.479107 kubelet[2242]: I0912 10:18:29.478470 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:18:29.479107 kubelet[2242]: I0912 10:18:29.479093 2242 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:18:29.479262 kubelet[2242]: I0912 10:18:29.479233 2242 server.go:449] "Adding debug handlers to kubelet server" Sep 12 10:18:29.480265 kubelet[2242]: I0912 10:18:29.479841 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:18:29.480265 kubelet[2242]: I0912 10:18:29.479976 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:18:29.484948 kubelet[2242]: E0912 10:18:29.483988 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:18:29.484948 kubelet[2242]: I0912 10:18:29.484043 2242 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 10:18:29.484948 kubelet[2242]: I0912 10:18:29.484302 2242 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 10:18:29.484948 kubelet[2242]: I0912 10:18:29.484362 2242 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:18:29.484948 kubelet[2242]: E0912 10:18:29.484399 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms" Sep 12 10:18:29.485520 kubelet[2242]: I0912 10:18:29.485495 2242 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:18:29.485660 kubelet[2242]: 
I0912 10:18:29.485617 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:18:29.486440 kubelet[2242]: W0912 10:18:29.486130 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:29.486440 kubelet[2242]: E0912 10:18:29.486186 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:29.487121 kubelet[2242]: E0912 10:18:29.485681 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186481a725f88ea3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 10:18:29.477904035 +0000 UTC m=+0.503899843,LastTimestamp:2025-09-12 10:18:29.477904035 +0000 UTC m=+0.503899843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 10:18:29.487629 kubelet[2242]: I0912 10:18:29.487554 2242 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:18:29.487670 kubelet[2242]: E0912 10:18:29.487643 2242 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:18:29.514633 kubelet[2242]: I0912 10:18:29.514526 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:18:29.515456 kubelet[2242]: I0912 10:18:29.515385 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 10:18:29.515456 kubelet[2242]: I0912 10:18:29.515447 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 10:18:29.515543 kubelet[2242]: I0912 10:18:29.515496 2242 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:18:29.516519 kubelet[2242]: I0912 10:18:29.516458 2242 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:18:29.516519 kubelet[2242]: I0912 10:18:29.516501 2242 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 10:18:29.517221 kubelet[2242]: I0912 10:18:29.516532 2242 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 10:18:29.517221 kubelet[2242]: E0912 10:18:29.516578 2242 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:18:29.519196 kubelet[2242]: W0912 10:18:29.519135 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:29.519275 kubelet[2242]: E0912 10:18:29.519200 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:29.584254 kubelet[2242]: E0912 10:18:29.584174 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:18:29.617760 kubelet[2242]: E0912 10:18:29.617669 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 10:18:29.685247 kubelet[2242]: E0912 10:18:29.685034 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:18:29.685660 kubelet[2242]: E0912 10:18:29.685565 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Sep 12 10:18:29.786081 kubelet[2242]: E0912 10:18:29.786004 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:18:29.813647 kubelet[2242]: I0912 10:18:29.813551 2242 policy_none.go:49] "None policy: Start" Sep 12 10:18:29.814572 kubelet[2242]: I0912 10:18:29.814526 2242 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 10:18:29.814572 kubelet[2242]: I0912 10:18:29.814580 2242 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:18:29.818283 kubelet[2242]: E0912 10:18:29.818234 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 10:18:29.825038 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 10:18:29.841381 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 10:18:29.846151 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 10:18:29.861732 kubelet[2242]: I0912 10:18:29.861667 2242 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:18:29.862016 kubelet[2242]: I0912 10:18:29.861986 2242 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:18:29.862077 kubelet[2242]: I0912 10:18:29.862010 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:18:29.862607 kubelet[2242]: I0912 10:18:29.862306 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:18:29.863634 kubelet[2242]: E0912 10:18:29.863584 2242 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 10:18:29.964612 kubelet[2242]: I0912 10:18:29.964472 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 10:18:29.964939 kubelet[2242]: E0912 10:18:29.964905 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 12 10:18:30.087014 kubelet[2242]: E0912 10:18:30.086949 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Sep 12 10:18:30.166413 kubelet[2242]: I0912 10:18:30.166379 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 10:18:30.166875 kubelet[2242]: E0912 10:18:30.166819 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 12 10:18:30.231751 systemd[1]: Created slice kubepods-burstable-pod99c429d4a31c15b0b5805c6516e428af.slice - libcontainer container kubepods-burstable-pod99c429d4a31c15b0b5805c6516e428af.slice. Sep 12 10:18:30.254988 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 10:18:30.272988 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
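The kubepods-burstable-pod<uid>.slice units above are created as the kubelet admits the control-plane static pods it read from /etc/kubernetes/manifests, and the VerifyControllerAttachedVolume lines that follow are its volume manager attaching the hostPath volumes those manifests declare (ca-certs, k8s-certs, kubeconfig, flexvolume-dir). As a hedged sketch of what such a manifest looks like expressed in Go API types (image tag and mount paths are assumptions in the usual kubeadm style, not the actual file from this host):

```go
// Sketch: a trimmed static-pod manifest for kube-apiserver using hostPath
// volumes named like those in the reconciler log lines below. Image tag and
// paths are assumptions, not read from this host.
package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			Containers: []corev1.Container{{
				Name:  "kube-apiserver",
				Image: "registry.k8s.io/kube-apiserver:v1.31.13", // assumed tag
				VolumeMounts: []corev1.VolumeMount{
					{Name: "ca-certs", MountPath: "/etc/ssl/certs", ReadOnly: true},
					{Name: "k8s-certs", MountPath: "/etc/kubernetes/pki", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				{Name: "ca-certs", VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/ssl/certs"}}},
				{Name: "k8s-certs", VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/kubernetes/pki"}}},
			},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // roughly what sits under /etc/kubernetes/manifests/
}
```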
Sep 12 10:18:30.288380 kubelet[2242]: I0912 10:18:30.288322 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:30.288380 kubelet[2242]: I0912 10:18:30.288391 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:30.288593 kubelet[2242]: I0912 10:18:30.288428 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 10:18:30.288593 kubelet[2242]: I0912 10:18:30.288460 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99c429d4a31c15b0b5805c6516e428af-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"99c429d4a31c15b0b5805c6516e428af\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:30.288593 kubelet[2242]: I0912 10:18:30.288491 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:30.288593 kubelet[2242]: I0912 10:18:30.288526 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:30.288593 kubelet[2242]: I0912 10:18:30.288568 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:30.288728 kubelet[2242]: I0912 10:18:30.288594 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99c429d4a31c15b0b5805c6516e428af-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"99c429d4a31c15b0b5805c6516e428af\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:30.288728 kubelet[2242]: I0912 10:18:30.288617 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99c429d4a31c15b0b5805c6516e428af-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"99c429d4a31c15b0b5805c6516e428af\") " 
pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:30.553343 kubelet[2242]: E0912 10:18:30.553262 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:30.554454 containerd[1518]: time="2025-09-12T10:18:30.554405675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:99c429d4a31c15b0b5805c6516e428af,Namespace:kube-system,Attempt:0,}" Sep 12 10:18:30.570149 kubelet[2242]: I0912 10:18:30.570089 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 10:18:30.570623 kubelet[2242]: E0912 10:18:30.570577 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 12 10:18:30.571826 kubelet[2242]: E0912 10:18:30.571789 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:30.572521 containerd[1518]: time="2025-09-12T10:18:30.572469926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 10:18:30.575903 kubelet[2242]: E0912 10:18:30.575861 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:30.576530 containerd[1518]: time="2025-09-12T10:18:30.576493660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 10:18:30.630148 kubelet[2242]: W0912 10:18:30.629981 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:30.630148 kubelet[2242]: E0912 10:18:30.630117 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:30.884594 kubelet[2242]: W0912 10:18:30.884389 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:30.884594 kubelet[2242]: E0912 10:18:30.884496 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:30.887925 kubelet[2242]: E0912 10:18:30.887888 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial 
tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Sep 12 10:18:30.945197 kubelet[2242]: W0912 10:18:30.945092 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:30.945197 kubelet[2242]: E0912 10:18:30.945185 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:31.048503 kubelet[2242]: W0912 10:18:31.048301 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 12 10:18:31.048503 kubelet[2242]: E0912 10:18:31.048498 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:31.372419 kubelet[2242]: I0912 10:18:31.372372 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 10:18:31.372948 kubelet[2242]: E0912 10:18:31.372902 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 12 10:18:31.495162 kubelet[2242]: E0912 10:18:31.495099 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 12 10:18:31.895240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020513636.mount: Deactivated successfully. 
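The recurring "Nameserver limits exceeded" errors (dns.go:153) mean this host's /etc/resolv.conf lists more nameservers than the kubelet will propagate into a pod's resolv.conf (the limit is three), so it drops the extras and logs the line it actually applied ("1.1.1.1 1.0.0.1 8.8.8.8"). A small standard-library sketch of that truncation, assuming the conventional resolv.conf format:

```go
// Sketch: read /etc/resolv.conf and keep only the first three nameservers,
// mirroring the truncation behind the "Nameserver limits exceeded" warnings.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // limit the kubelet applies per pod resolv.conf

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("dropping %d extra nameserver(s)\n", len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```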
Sep 12 10:18:31.903514 containerd[1518]: time="2025-09-12T10:18:31.903435716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:18:31.907647 containerd[1518]: time="2025-09-12T10:18:31.907575068Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 10:18:31.908735 containerd[1518]: time="2025-09-12T10:18:31.908682484Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:18:31.910606 containerd[1518]: time="2025-09-12T10:18:31.910566807Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:18:31.911435 containerd[1518]: time="2025-09-12T10:18:31.911379731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:18:31.912662 containerd[1518]: time="2025-09-12T10:18:31.912629354Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:18:31.913470 containerd[1518]: time="2025-09-12T10:18:31.913400430Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 10:18:31.914719 containerd[1518]: time="2025-09-12T10:18:31.914670031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 10:18:31.915598 containerd[1518]: time="2025-09-12T10:18:31.915562434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.338960149s" Sep 12 10:18:31.918233 containerd[1518]: time="2025-09-12T10:18:31.918189870Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.363628503s" Sep 12 10:18:31.920649 containerd[1518]: time="2025-09-12T10:18:31.920599108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.34797854s" Sep 12 10:18:32.264335 containerd[1518]: time="2025-09-12T10:18:32.258866559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:32.264335 containerd[1518]: time="2025-09-12T10:18:32.264307691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:32.264335 containerd[1518]: time="2025-09-12T10:18:32.264321477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:32.264572 containerd[1518]: time="2025-09-12T10:18:32.264418579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:32.268793 containerd[1518]: time="2025-09-12T10:18:32.268654351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:32.268961 containerd[1518]: time="2025-09-12T10:18:32.268770669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:32.269357 containerd[1518]: time="2025-09-12T10:18:32.268945818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:32.269543 containerd[1518]: time="2025-09-12T10:18:32.269503032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:32.275923 containerd[1518]: time="2025-09-12T10:18:32.275022111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:32.275923 containerd[1518]: time="2025-09-12T10:18:32.275183022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:32.275923 containerd[1518]: time="2025-09-12T10:18:32.275226744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:32.275923 containerd[1518]: time="2025-09-12T10:18:32.275378429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:32.422464 systemd[1]: Started cri-containerd-327e10faa2dc47eff375f0ba708715478c9c8305b378d237895688fd539e8ea7.scope - libcontainer container 327e10faa2dc47eff375f0ba708715478c9c8305b378d237895688fd539e8ea7. Sep 12 10:18:32.428168 systemd[1]: Started cri-containerd-f22a36f243d1a1b15e59aed10d6918f4c3b014620819b306280b683f9408ad42.scope - libcontainer container f22a36f243d1a1b15e59aed10d6918f4c3b014620819b306280b683f9408ad42. Sep 12 10:18:32.448306 systemd[1]: Started cri-containerd-de3d2d212de4f9fcc606be1cda711410f4486f3e4bb32604a7d8e6b40229e914.scope - libcontainer container de3d2d212de4f9fcc606be1cda711410f4486f3e4bb32604a7d8e6b40229e914. 
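The cri-containerd-<id>.scope units above are the runc shims for the three pod sandboxes requested at 10:18:30, and the lines that follow show each RunPodSandbox call returning a sandbox id before CreateContainer/StartContainer run inside it. Below is a hedged sketch of that CRI sequence against containerd's runtime service, with request fields trimmed to the minimum; a real kubelet fills in linux config, mounts, ports, and security context, and the image tag here is an assumption.

```go
// Sketch: the RunPodSandbox -> CreateContainer -> StartContainer sequence the
// kubelet drives over CRI. Minimal fields only; assumes containerd's CRI
// service on its default socket.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Namespace: "kube-system",
			Uid:       "fe5e332fba00ba0b5b33a25fe2e8fd7b", // pod UID from the log
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.13"}, // assumed tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", c.ContainerId, "in sandbox", sb.PodSandboxId)
}
```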
Sep 12 10:18:32.489287 kubelet[2242]: E0912 10:18:32.489202 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="3.2s" Sep 12 10:18:32.525902 containerd[1518]: time="2025-09-12T10:18:32.525436578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"327e10faa2dc47eff375f0ba708715478c9c8305b378d237895688fd539e8ea7\"" Sep 12 10:18:32.527444 kubelet[2242]: E0912 10:18:32.527414 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:32.532682 containerd[1518]: time="2025-09-12T10:18:32.532574662Z" level=info msg="CreateContainer within sandbox \"327e10faa2dc47eff375f0ba708715478c9c8305b378d237895688fd539e8ea7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 10:18:32.533183 containerd[1518]: time="2025-09-12T10:18:32.533143639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:99c429d4a31c15b0b5805c6516e428af,Namespace:kube-system,Attempt:0,} returns sandbox id \"f22a36f243d1a1b15e59aed10d6918f4c3b014620819b306280b683f9408ad42\"" Sep 12 10:18:32.534884 kubelet[2242]: E0912 10:18:32.534851 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:32.537729 containerd[1518]: time="2025-09-12T10:18:32.537670396Z" level=info msg="CreateContainer within sandbox \"f22a36f243d1a1b15e59aed10d6918f4c3b014620819b306280b683f9408ad42\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 10:18:32.544500 containerd[1518]: time="2025-09-12T10:18:32.544448545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"de3d2d212de4f9fcc606be1cda711410f4486f3e4bb32604a7d8e6b40229e914\"" Sep 12 10:18:32.545147 kubelet[2242]: E0912 10:18:32.545115 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:32.546955 containerd[1518]: time="2025-09-12T10:18:32.546915431Z" level=info msg="CreateContainer within sandbox \"de3d2d212de4f9fcc606be1cda711410f4486f3e4bb32604a7d8e6b40229e914\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 10:18:32.562618 containerd[1518]: time="2025-09-12T10:18:32.562551969Z" level=info msg="CreateContainer within sandbox \"327e10faa2dc47eff375f0ba708715478c9c8305b378d237895688fd539e8ea7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"641bffdfdac666586a74c83e2c8cd0eb5ab20d98e99032e2b7771eca9d1b938b\"" Sep 12 10:18:32.563285 containerd[1518]: time="2025-09-12T10:18:32.563223949Z" level=info msg="StartContainer for \"641bffdfdac666586a74c83e2c8cd0eb5ab20d98e99032e2b7771eca9d1b938b\"" Sep 12 10:18:32.567453 containerd[1518]: time="2025-09-12T10:18:32.567414206Z" level=info msg="CreateContainer within sandbox \"f22a36f243d1a1b15e59aed10d6918f4c3b014620819b306280b683f9408ad42\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c86491bc566465bdcda89cb07c0ff4ac7bf1d1f62dc267c440e7f937e6976a47\"" Sep 12 10:18:32.568153 containerd[1518]: time="2025-09-12T10:18:32.568084042Z" level=info msg="StartContainer for \"c86491bc566465bdcda89cb07c0ff4ac7bf1d1f62dc267c440e7f937e6976a47\"" Sep 12 10:18:32.577242 containerd[1518]: time="2025-09-12T10:18:32.577185868Z" level=info msg="CreateContainer within sandbox \"de3d2d212de4f9fcc606be1cda711410f4486f3e4bb32604a7d8e6b40229e914\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1057a01608ca1581744b41bdb9bb2300153fccafd5243c651a7212f590d1faef\"" Sep 12 10:18:32.578034 containerd[1518]: time="2025-09-12T10:18:32.578010794Z" level=info msg="StartContainer for \"1057a01608ca1581744b41bdb9bb2300153fccafd5243c651a7212f590d1faef\"" Sep 12 10:18:32.597319 systemd[1]: Started cri-containerd-641bffdfdac666586a74c83e2c8cd0eb5ab20d98e99032e2b7771eca9d1b938b.scope - libcontainer container 641bffdfdac666586a74c83e2c8cd0eb5ab20d98e99032e2b7771eca9d1b938b. Sep 12 10:18:32.601229 systemd[1]: Started cri-containerd-c86491bc566465bdcda89cb07c0ff4ac7bf1d1f62dc267c440e7f937e6976a47.scope - libcontainer container c86491bc566465bdcda89cb07c0ff4ac7bf1d1f62dc267c440e7f937e6976a47. Sep 12 10:18:32.626211 systemd[1]: Started cri-containerd-1057a01608ca1581744b41bdb9bb2300153fccafd5243c651a7212f590d1faef.scope - libcontainer container 1057a01608ca1581744b41bdb9bb2300153fccafd5243c651a7212f590d1faef. Sep 12 10:18:32.671537 containerd[1518]: time="2025-09-12T10:18:32.671490078Z" level=info msg="StartContainer for \"c86491bc566465bdcda89cb07c0ff4ac7bf1d1f62dc267c440e7f937e6976a47\" returns successfully" Sep 12 10:18:32.676637 containerd[1518]: time="2025-09-12T10:18:32.675963205Z" level=info msg="StartContainer for \"641bffdfdac666586a74c83e2c8cd0eb5ab20d98e99032e2b7771eca9d1b938b\" returns successfully" Sep 12 10:18:32.681231 containerd[1518]: time="2025-09-12T10:18:32.681185237Z" level=info msg="StartContainer for \"1057a01608ca1581744b41bdb9bb2300153fccafd5243c651a7212f590d1faef\" returns successfully" Sep 12 10:18:32.975420 kubelet[2242]: I0912 10:18:32.974457 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 10:18:33.533143 kubelet[2242]: E0912 10:18:33.530205 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:33.533143 kubelet[2242]: E0912 10:18:33.533013 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:33.720912 kubelet[2242]: E0912 10:18:33.720852 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:34.570705 kubelet[2242]: E0912 10:18:34.570652 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:34.691010 kubelet[2242]: I0912 10:18:34.690951 2242 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 10:18:34.691010 kubelet[2242]: E0912 10:18:34.690998 2242 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not 
found" Sep 12 10:18:35.567981 kubelet[2242]: I0912 10:18:35.567919 2242 apiserver.go:52] "Watching apiserver" Sep 12 10:18:35.585201 kubelet[2242]: I0912 10:18:35.585147 2242 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 10:18:36.594043 kubelet[2242]: E0912 10:18:36.593983 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:36.919276 systemd[1]: Reload requested from client PID 2524 ('systemctl') (unit session-7.scope)... Sep 12 10:18:36.919298 systemd[1]: Reloading... Sep 12 10:18:37.021093 zram_generator::config[2568]: No configuration found. Sep 12 10:18:37.156016 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 10:18:37.291898 systemd[1]: Reloading finished in 372 ms. Sep 12 10:18:37.322077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:37.341729 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 10:18:37.342194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:37.342272 systemd[1]: kubelet.service: Consumed 1.371s CPU time, 135.1M memory peak. Sep 12 10:18:37.351408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 10:18:37.608210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 10:18:37.613958 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 10:18:37.661663 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:18:37.661663 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 10:18:37.661663 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 10:18:37.662316 kubelet[2613]: I0912 10:18:37.661731 2613 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 10:18:37.824248 kubelet[2613]: I0912 10:18:37.824183 2613 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 10:18:37.824248 kubelet[2613]: I0912 10:18:37.824226 2613 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 10:18:37.824531 kubelet[2613]: I0912 10:18:37.824508 2613 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 10:18:37.825854 kubelet[2613]: I0912 10:18:37.825826 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 10:18:37.827859 kubelet[2613]: I0912 10:18:37.827814 2613 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 10:18:37.830462 kubelet[2613]: E0912 10:18:37.830426 2613 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 10:18:37.830462 kubelet[2613]: I0912 10:18:37.830453 2613 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 10:18:37.836352 kubelet[2613]: I0912 10:18:37.836309 2613 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 10:18:37.836477 kubelet[2613]: I0912 10:18:37.836448 2613 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 10:18:37.836687 kubelet[2613]: I0912 10:18:37.836641 2613 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 10:18:37.836892 kubelet[2613]: I0912 10:18:37.836680 2613 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 10:18:37.836986 kubelet[2613]: I0912 10:18:37.836897 2613 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 10:18:37.836986 kubelet[2613]: I0912 10:18:37.836907 2613 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 10:18:37.836986 kubelet[2613]: I0912 10:18:37.836938 2613 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:18:37.837089 kubelet[2613]: I0912 10:18:37.837069 2613 kubelet.go:408] "Attempting to sync node with API server" Sep 12 10:18:37.837119 kubelet[2613]: I0912 10:18:37.837094 2613 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 10:18:37.837154 kubelet[2613]: I0912 10:18:37.837147 
2613 kubelet.go:314] "Adding apiserver pod source" Sep 12 10:18:37.837188 kubelet[2613]: I0912 10:18:37.837163 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 10:18:37.839035 kubelet[2613]: I0912 10:18:37.839009 2613 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 12 10:18:37.839549 kubelet[2613]: I0912 10:18:37.839523 2613 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 10:18:37.841677 kubelet[2613]: I0912 10:18:37.841327 2613 server.go:1274] "Started kubelet" Sep 12 10:18:37.844472 kubelet[2613]: I0912 10:18:37.844447 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 10:18:37.853834 kubelet[2613]: I0912 10:18:37.853709 2613 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 10:18:37.853948 kubelet[2613]: I0912 10:18:37.853861 2613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 10:18:37.854661 kubelet[2613]: I0912 10:18:37.854621 2613 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 10:18:37.855186 kubelet[2613]: I0912 10:18:37.855164 2613 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 10:18:37.855349 kubelet[2613]: I0912 10:18:37.855316 2613 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 10:18:37.855505 kubelet[2613]: E0912 10:18:37.855476 2613 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 10:18:37.856010 kubelet[2613]: I0912 10:18:37.855909 2613 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 10:18:37.856301 kubelet[2613]: I0912 10:18:37.856120 2613 reconciler.go:26] "Reconciler: start to sync state" Sep 12 10:18:37.859078 kubelet[2613]: E0912 10:18:37.858677 2613 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 10:18:37.859078 kubelet[2613]: I0912 10:18:37.858780 2613 server.go:449] "Adding debug handlers to kubelet server" Sep 12 10:18:37.862275 kubelet[2613]: I0912 10:18:37.862246 2613 factory.go:221] Registration of the containerd container factory successfully Sep 12 10:18:37.862275 kubelet[2613]: I0912 10:18:37.862269 2613 factory.go:221] Registration of the systemd container factory successfully Sep 12 10:18:37.862380 kubelet[2613]: I0912 10:18:37.862358 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 10:18:37.863183 kubelet[2613]: I0912 10:18:37.863144 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 10:18:37.864673 kubelet[2613]: I0912 10:18:37.864646 2613 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 10:18:37.864673 kubelet[2613]: I0912 10:18:37.864666 2613 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 10:18:37.864828 kubelet[2613]: I0912 10:18:37.864686 2613 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 10:18:37.864828 kubelet[2613]: E0912 10:18:37.864732 2613 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 10:18:37.908419 kubelet[2613]: I0912 10:18:37.908377 2613 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 10:18:37.908419 kubelet[2613]: I0912 10:18:37.908411 2613 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 10:18:37.908621 kubelet[2613]: I0912 10:18:37.908454 2613 state_mem.go:36] "Initialized new in-memory state store" Sep 12 10:18:37.908684 kubelet[2613]: I0912 10:18:37.908634 2613 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 10:18:37.908684 kubelet[2613]: I0912 10:18:37.908652 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 10:18:37.908684 kubelet[2613]: I0912 10:18:37.908678 2613 policy_none.go:49] "None policy: Start" Sep 12 10:18:37.909255 kubelet[2613]: I0912 10:18:37.909234 2613 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 10:18:37.909322 kubelet[2613]: I0912 10:18:37.909264 2613 state_mem.go:35] "Initializing new in-memory state store" Sep 12 10:18:37.909425 kubelet[2613]: I0912 10:18:37.909406 2613 state_mem.go:75] "Updated machine memory state" Sep 12 10:18:37.915398 kubelet[2613]: I0912 10:18:37.915352 2613 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 10:18:37.915601 kubelet[2613]: I0912 10:18:37.915538 2613 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 10:18:37.915601 kubelet[2613]: I0912 10:18:37.915552 2613 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 10:18:37.916124 kubelet[2613]: I0912 10:18:37.915743 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 10:18:38.022761 kubelet[2613]: I0912 10:18:38.022703 2613 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 10:18:38.052853 kubelet[2613]: E0912 10:18:38.052523 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:38.054931 kubelet[2613]: I0912 10:18:38.054896 2613 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 10:18:38.055067 kubelet[2613]: I0912 10:18:38.055025 2613 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 10:18:38.059606 kubelet[2613]: I0912 10:18:38.059223 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:38.059606 kubelet[2613]: I0912 10:18:38.059296 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:38.059606 kubelet[2613]: I0912 10:18:38.059374 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:38.059606 kubelet[2613]: I0912 10:18:38.059488 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 10:18:38.059606 kubelet[2613]: I0912 10:18:38.059520 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99c429d4a31c15b0b5805c6516e428af-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"99c429d4a31c15b0b5805c6516e428af\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:38.059892 kubelet[2613]: I0912 10:18:38.059558 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99c429d4a31c15b0b5805c6516e428af-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"99c429d4a31c15b0b5805c6516e428af\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:38.059892 kubelet[2613]: I0912 10:18:38.059581 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99c429d4a31c15b0b5805c6516e428af-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"99c429d4a31c15b0b5805c6516e428af\") " pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:38.059892 kubelet[2613]: I0912 10:18:38.059601 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:38.059892 kubelet[2613]: I0912 10:18:38.059648 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 10:18:38.078706 sudo[2648]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 10:18:38.079113 sudo[2648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 10:18:38.352577 kubelet[2613]: E0912 10:18:38.352506 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:38.353161 kubelet[2613]: E0912 10:18:38.352849 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:38.353161 kubelet[2613]: E0912 10:18:38.353022 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:38.714686 sudo[2648]: pam_unix(sudo:session): session closed for user root Sep 12 10:18:38.838392 kubelet[2613]: I0912 10:18:38.838322 2613 apiserver.go:52] "Watching apiserver" Sep 12 10:18:38.856101 kubelet[2613]: I0912 10:18:38.856062 2613 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 10:18:38.867630 kubelet[2613]: I0912 10:18:38.867548 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.867528149 podStartE2EDuration="1.867528149s" podCreationTimestamp="2025-09-12 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:38.867296426 +0000 UTC m=+1.248068188" watchObservedRunningTime="2025-09-12 10:18:38.867528149 +0000 UTC m=+1.248299911" Sep 12 10:18:38.881199 kubelet[2613]: E0912 10:18:38.881148 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:38.885118 kubelet[2613]: I0912 10:18:38.884865 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.88484065 podStartE2EDuration="2.88484065s" podCreationTimestamp="2025-09-12 10:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:38.874989369 +0000 UTC m=+1.255761121" watchObservedRunningTime="2025-09-12 10:18:38.88484065 +0000 UTC m=+1.265612412" Sep 12 10:18:38.889572 kubelet[2613]: E0912 10:18:38.889025 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 10:18:38.889572 kubelet[2613]: E0912 10:18:38.889265 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:38.890610 kubelet[2613]: E0912 10:18:38.890569 2613 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 10:18:38.890736 kubelet[2613]: E0912 10:18:38.890714 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:38.898030 kubelet[2613]: I0912 10:18:38.897927 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8979029920000001 podStartE2EDuration="1.897902992s" podCreationTimestamp="2025-09-12 10:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:38.88547391 +0000 UTC m=+1.266245673" watchObservedRunningTime="2025-09-12 10:18:38.897902992 +0000 UTC m=+1.278674754" Sep 12 10:18:39.882249 kubelet[2613]: E0912 10:18:39.882208 2613 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:39.882965 kubelet[2613]: E0912 10:18:39.882436 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:40.450947 sudo[1698]: pam_unix(sudo:session): session closed for user root Sep 12 10:18:40.452741 sshd[1697]: Connection closed by 10.0.0.1 port 60770 Sep 12 10:18:40.453631 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Sep 12 10:18:40.459536 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:60770.service: Deactivated successfully. Sep 12 10:18:40.462417 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 10:18:40.462703 systemd[1]: session-7.scope: Consumed 5.530s CPU time, 250.9M memory peak. Sep 12 10:18:40.464768 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. Sep 12 10:18:40.465959 systemd-logind[1498]: Removed session 7. Sep 12 10:18:42.748771 kubelet[2613]: I0912 10:18:42.748718 2613 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 10:18:42.749337 containerd[1518]: time="2025-09-12T10:18:42.749261069Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 10:18:42.749669 kubelet[2613]: I0912 10:18:42.749583 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 10:18:43.254751 kubelet[2613]: E0912 10:18:43.254708 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:43.649876 systemd[1]: Created slice kubepods-besteffort-pod6fe7f9f4_fbb1_47d1_869c_c9ac757e8906.slice - libcontainer container kubepods-besteffort-pod6fe7f9f4_fbb1_47d1_869c_c9ac757e8906.slice. Sep 12 10:18:43.666291 systemd[1]: Created slice kubepods-burstable-poddbb45aa0_f13d_4916_b670_a8a588d62186.slice - libcontainer container kubepods-burstable-poddbb45aa0_f13d_4916_b670_a8a588d62186.slice. 
Sep 12 10:18:43.697579 kubelet[2613]: I0912 10:18:43.697499 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-lib-modules\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701088 kubelet[2613]: I0912 10:18:43.698661 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6fe7f9f4-fbb1-47d1-869c-c9ac757e8906-kube-proxy\") pod \"kube-proxy-hbf6c\" (UID: \"6fe7f9f4-fbb1-47d1-869c-c9ac757e8906\") " pod="kube-system/kube-proxy-hbf6c" Sep 12 10:18:43.701088 kubelet[2613]: I0912 10:18:43.698694 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-etc-cni-netd\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701088 kubelet[2613]: I0912 10:18:43.698780 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-hostproc\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701088 kubelet[2613]: I0912 10:18:43.698823 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cni-path\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701088 kubelet[2613]: I0912 10:18:43.698849 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6299\" (UniqueName: \"kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-kube-api-access-k6299\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701088 kubelet[2613]: I0912 10:18:43.698874 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fe7f9f4-fbb1-47d1-869c-c9ac757e8906-lib-modules\") pod \"kube-proxy-hbf6c\" (UID: \"6fe7f9f4-fbb1-47d1-869c-c9ac757e8906\") " pod="kube-system/kube-proxy-hbf6c" Sep 12 10:18:43.701358 kubelet[2613]: I0912 10:18:43.698899 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-run\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701358 kubelet[2613]: I0912 10:18:43.698920 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbb45aa0-f13d-4916-b670-a8a588d62186-clustermesh-secrets\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701358 kubelet[2613]: I0912 10:18:43.698941 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-hubble-tls\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701358 kubelet[2613]: I0912 10:18:43.698966 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-cgroup\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701358 kubelet[2613]: I0912 10:18:43.698994 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbh5\" (UniqueName: \"kubernetes.io/projected/6fe7f9f4-fbb1-47d1-869c-c9ac757e8906-kube-api-access-vqbh5\") pod \"kube-proxy-hbf6c\" (UID: \"6fe7f9f4-fbb1-47d1-869c-c9ac757e8906\") " pod="kube-system/kube-proxy-hbf6c" Sep 12 10:18:43.701358 kubelet[2613]: I0912 10:18:43.699017 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-xtables-lock\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701692 kubelet[2613]: I0912 10:18:43.699048 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-config-path\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701692 kubelet[2613]: I0912 10:18:43.699092 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-kernel\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701692 kubelet[2613]: I0912 10:18:43.699120 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fe7f9f4-fbb1-47d1-869c-c9ac757e8906-xtables-lock\") pod \"kube-proxy-hbf6c\" (UID: \"6fe7f9f4-fbb1-47d1-869c-c9ac757e8906\") " pod="kube-system/kube-proxy-hbf6c" Sep 12 10:18:43.701692 kubelet[2613]: I0912 10:18:43.699142 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-bpf-maps\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.701692 kubelet[2613]: I0912 10:18:43.699178 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-net\") pod \"cilium-cvdfk\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " pod="kube-system/cilium-cvdfk" Sep 12 10:18:43.840577 systemd[1]: Created slice kubepods-besteffort-podc86fbb4d_7274_4976_aeba_18627e0d63d7.slice - libcontainer container kubepods-besteffort-podc86fbb4d_7274_4976_aeba_18627e0d63d7.slice. 
Sep 12 10:18:43.887922 kubelet[2613]: E0912 10:18:43.887873 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:43.901042 kubelet[2613]: I0912 10:18:43.900831 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c86fbb4d-7274-4976-aeba-18627e0d63d7-cilium-config-path\") pod \"cilium-operator-5d85765b45-s4pjw\" (UID: \"c86fbb4d-7274-4976-aeba-18627e0d63d7\") " pod="kube-system/cilium-operator-5d85765b45-s4pjw" Sep 12 10:18:43.901042 kubelet[2613]: I0912 10:18:43.900990 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvhx\" (UniqueName: \"kubernetes.io/projected/c86fbb4d-7274-4976-aeba-18627e0d63d7-kube-api-access-9kvhx\") pod \"cilium-operator-5d85765b45-s4pjw\" (UID: \"c86fbb4d-7274-4976-aeba-18627e0d63d7\") " pod="kube-system/cilium-operator-5d85765b45-s4pjw" Sep 12 10:18:43.960379 kubelet[2613]: E0912 10:18:43.960316 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:43.961197 containerd[1518]: time="2025-09-12T10:18:43.961103436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hbf6c,Uid:6fe7f9f4-fbb1-47d1-869c-c9ac757e8906,Namespace:kube-system,Attempt:0,}" Sep 12 10:18:43.971246 kubelet[2613]: E0912 10:18:43.971204 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:43.971923 containerd[1518]: time="2025-09-12T10:18:43.971861770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cvdfk,Uid:dbb45aa0-f13d-4916-b670-a8a588d62186,Namespace:kube-system,Attempt:0,}" Sep 12 10:18:44.124849 containerd[1518]: time="2025-09-12T10:18:44.124614753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:44.125011 containerd[1518]: time="2025-09-12T10:18:44.124856342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:44.125011 containerd[1518]: time="2025-09-12T10:18:44.124888904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:44.125129 containerd[1518]: time="2025-09-12T10:18:44.125031105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:44.131161 containerd[1518]: time="2025-09-12T10:18:44.130251877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:44.131161 containerd[1518]: time="2025-09-12T10:18:44.130312412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:44.131161 containerd[1518]: time="2025-09-12T10:18:44.130323593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:44.131161 containerd[1518]: time="2025-09-12T10:18:44.130434023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:44.148257 kubelet[2613]: E0912 10:18:44.146356 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:44.148403 containerd[1518]: time="2025-09-12T10:18:44.147187253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s4pjw,Uid:c86fbb4d-7274-4976-aeba-18627e0d63d7,Namespace:kube-system,Attempt:0,}" Sep 12 10:18:44.160296 systemd[1]: Started cri-containerd-d77ad030eb8ebd12c68db39d8966207a532f8cadcdbdd9c16315b3b3d6bbc2e3.scope - libcontainer container d77ad030eb8ebd12c68db39d8966207a532f8cadcdbdd9c16315b3b3d6bbc2e3. Sep 12 10:18:44.164451 systemd[1]: Started cri-containerd-e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056.scope - libcontainer container e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056. Sep 12 10:18:44.196845 containerd[1518]: time="2025-09-12T10:18:44.196768961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cvdfk,Uid:dbb45aa0-f13d-4916-b670-a8a588d62186,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\"" Sep 12 10:18:44.198084 kubelet[2613]: E0912 10:18:44.197733 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:44.202389 containerd[1518]: time="2025-09-12T10:18:44.202341723Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 10:18:44.206444 containerd[1518]: time="2025-09-12T10:18:44.206410545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hbf6c,Uid:6fe7f9f4-fbb1-47d1-869c-c9ac757e8906,Namespace:kube-system,Attempt:0,} returns sandbox id \"d77ad030eb8ebd12c68db39d8966207a532f8cadcdbdd9c16315b3b3d6bbc2e3\"" Sep 12 10:18:44.207849 kubelet[2613]: E0912 10:18:44.207815 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:44.208376 containerd[1518]: time="2025-09-12T10:18:44.207174387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:18:44.208376 containerd[1518]: time="2025-09-12T10:18:44.207993834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:18:44.208376 containerd[1518]: time="2025-09-12T10:18:44.208013962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:44.208376 containerd[1518]: time="2025-09-12T10:18:44.208255831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:18:44.211617 containerd[1518]: time="2025-09-12T10:18:44.211561384Z" level=info msg="CreateContainer within sandbox \"d77ad030eb8ebd12c68db39d8966207a532f8cadcdbdd9c16315b3b3d6bbc2e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 10:18:44.235045 containerd[1518]: time="2025-09-12T10:18:44.234989148Z" level=info msg="CreateContainer within sandbox \"d77ad030eb8ebd12c68db39d8966207a532f8cadcdbdd9c16315b3b3d6bbc2e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"effec27243f02150c9eec5b1ea6a69200df5339d33f7051e89a16ebc6887ad59\"" Sep 12 10:18:44.235457 containerd[1518]: time="2025-09-12T10:18:44.235390360Z" level=info msg="StartContainer for \"effec27243f02150c9eec5b1ea6a69200df5339d33f7051e89a16ebc6887ad59\"" Sep 12 10:18:44.237450 systemd[1]: Started cri-containerd-2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87.scope - libcontainer container 2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87. Sep 12 10:18:44.274200 systemd[1]: Started cri-containerd-effec27243f02150c9eec5b1ea6a69200df5339d33f7051e89a16ebc6887ad59.scope - libcontainer container effec27243f02150c9eec5b1ea6a69200df5339d33f7051e89a16ebc6887ad59. Sep 12 10:18:44.288102 containerd[1518]: time="2025-09-12T10:18:44.287340520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s4pjw,Uid:c86fbb4d-7274-4976-aeba-18627e0d63d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87\"" Sep 12 10:18:44.289720 kubelet[2613]: E0912 10:18:44.288564 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:44.311926 containerd[1518]: time="2025-09-12T10:18:44.311879266Z" level=info msg="StartContainer for \"effec27243f02150c9eec5b1ea6a69200df5339d33f7051e89a16ebc6887ad59\" returns successfully" Sep 12 10:18:44.896534 kubelet[2613]: E0912 10:18:44.896486 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:44.905180 kubelet[2613]: I0912 10:18:44.905117 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hbf6c" podStartSLOduration=1.905099058 podStartE2EDuration="1.905099058s" podCreationTimestamp="2025-09-12 10:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:18:44.904845867 +0000 UTC m=+7.285617629" watchObservedRunningTime="2025-09-12 10:18:44.905099058 +0000 UTC m=+7.285870820" Sep 12 10:18:44.924582 kubelet[2613]: E0912 10:18:44.924523 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:45.898311 kubelet[2613]: E0912 10:18:45.898271 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:46.337867 kubelet[2613]: E0912 10:18:46.337789 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 12 10:18:46.899937 kubelet[2613]: E0912 10:18:46.899869 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:49.398273 update_engine[1504]: I20250912 10:18:49.397140 1504 update_attempter.cc:509] Updating boot flags... Sep 12 10:18:49.457109 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2998) Sep 12 10:18:49.533890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3001) Sep 12 10:18:49.592604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3001) Sep 12 10:18:50.272532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1008962024.mount: Deactivated successfully. Sep 12 10:18:57.374481 containerd[1518]: time="2025-09-12T10:18:57.374382760Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:57.375374 containerd[1518]: time="2025-09-12T10:18:57.375343242Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 10:18:57.376314 containerd[1518]: time="2025-09-12T10:18:57.376269890Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:57.377966 containerd[1518]: time="2025-09-12T10:18:57.377882020Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.175476507s" Sep 12 10:18:57.378029 containerd[1518]: time="2025-09-12T10:18:57.377966840Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 10:18:57.379260 containerd[1518]: time="2025-09-12T10:18:57.379229512Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 10:18:57.381398 containerd[1518]: time="2025-09-12T10:18:57.381241226Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:18:57.416204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618179044.mount: Deactivated successfully. 
Sep 12 10:18:57.416699 containerd[1518]: time="2025-09-12T10:18:57.416461041Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608\"" Sep 12 10:18:57.417290 containerd[1518]: time="2025-09-12T10:18:57.417202559Z" level=info msg="StartContainer for \"10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608\"" Sep 12 10:18:57.452339 systemd[1]: Started cri-containerd-10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608.scope - libcontainer container 10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608. Sep 12 10:18:57.486130 containerd[1518]: time="2025-09-12T10:18:57.486020471Z" level=info msg="StartContainer for \"10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608\" returns successfully" Sep 12 10:18:57.498315 systemd[1]: cri-containerd-10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608.scope: Deactivated successfully. Sep 12 10:18:57.850007 containerd[1518]: time="2025-09-12T10:18:57.849268610Z" level=info msg="shim disconnected" id=10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608 namespace=k8s.io Sep 12 10:18:57.850007 containerd[1518]: time="2025-09-12T10:18:57.849409857Z" level=warning msg="cleaning up after shim disconnected" id=10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608 namespace=k8s.io Sep 12 10:18:57.850007 containerd[1518]: time="2025-09-12T10:18:57.849434694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:18:57.947200 kubelet[2613]: E0912 10:18:57.947154 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:57.949907 containerd[1518]: time="2025-09-12T10:18:57.949858184Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:18:57.971072 containerd[1518]: time="2025-09-12T10:18:57.970992253Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694\"" Sep 12 10:18:57.972843 containerd[1518]: time="2025-09-12T10:18:57.972799070Z" level=info msg="StartContainer for \"df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694\"" Sep 12 10:18:58.007305 systemd[1]: Started cri-containerd-df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694.scope - libcontainer container df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694. Sep 12 10:18:58.040928 containerd[1518]: time="2025-09-12T10:18:58.040869704Z" level=info msg="StartContainer for \"df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694\" returns successfully" Sep 12 10:18:58.055631 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 10:18:58.055927 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:18:58.056645 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 10:18:58.062481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 12 10:18:58.062716 systemd[1]: cri-containerd-df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694.scope: Deactivated successfully. Sep 12 10:18:58.088257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 10:18:58.089031 containerd[1518]: time="2025-09-12T10:18:58.088958348Z" level=info msg="shim disconnected" id=df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694 namespace=k8s.io Sep 12 10:18:58.089031 containerd[1518]: time="2025-09-12T10:18:58.089018390Z" level=warning msg="cleaning up after shim disconnected" id=df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694 namespace=k8s.io Sep 12 10:18:58.089031 containerd[1518]: time="2025-09-12T10:18:58.089027859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:18:58.412639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608-rootfs.mount: Deactivated successfully. Sep 12 10:18:58.949939 kubelet[2613]: E0912 10:18:58.949866 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:58.952174 containerd[1518]: time="2025-09-12T10:18:58.952116880Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:18:58.967232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136497715.mount: Deactivated successfully. Sep 12 10:18:58.981652 containerd[1518]: time="2025-09-12T10:18:58.981593303Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69\"" Sep 12 10:18:58.983085 containerd[1518]: time="2025-09-12T10:18:58.982219844Z" level=info msg="StartContainer for \"9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69\"" Sep 12 10:18:59.014366 systemd[1]: Started cri-containerd-9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69.scope - libcontainer container 9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69. Sep 12 10:18:59.054461 systemd[1]: cri-containerd-9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69.scope: Deactivated successfully. 
Sep 12 10:18:59.056513 containerd[1518]: time="2025-09-12T10:18:59.056458112Z" level=info msg="StartContainer for \"9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69\" returns successfully" Sep 12 10:18:59.099293 containerd[1518]: time="2025-09-12T10:18:59.099211217Z" level=info msg="shim disconnected" id=9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69 namespace=k8s.io Sep 12 10:18:59.099293 containerd[1518]: time="2025-09-12T10:18:59.099273344Z" level=warning msg="cleaning up after shim disconnected" id=9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69 namespace=k8s.io Sep 12 10:18:59.099293 containerd[1518]: time="2025-09-12T10:18:59.099282742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:18:59.333574 containerd[1518]: time="2025-09-12T10:18:59.333513962Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:59.334266 containerd[1518]: time="2025-09-12T10:18:59.334219142Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 10:18:59.335586 containerd[1518]: time="2025-09-12T10:18:59.335532447Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 10:18:59.337492 containerd[1518]: time="2025-09-12T10:18:59.337452174Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.958183018s" Sep 12 10:18:59.337551 containerd[1518]: time="2025-09-12T10:18:59.337497130Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 10:18:59.339735 containerd[1518]: time="2025-09-12T10:18:59.339678120Z" level=info msg="CreateContainer within sandbox \"2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 10:18:59.366718 containerd[1518]: time="2025-09-12T10:18:59.366642028Z" level=info msg="CreateContainer within sandbox \"2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\"" Sep 12 10:18:59.367378 containerd[1518]: time="2025-09-12T10:18:59.367332509Z" level=info msg="StartContainer for \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\"" Sep 12 10:18:59.403242 systemd[1]: Started cri-containerd-9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662.scope - libcontainer container 9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662. 
Sep 12 10:18:59.541471 containerd[1518]: time="2025-09-12T10:18:59.541404820Z" level=info msg="StartContainer for \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\" returns successfully" Sep 12 10:18:59.958102 kubelet[2613]: E0912 10:18:59.956223 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:59.963260 kubelet[2613]: E0912 10:18:59.962778 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:18:59.965352 containerd[1518]: time="2025-09-12T10:18:59.965202562Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:18:59.983641 containerd[1518]: time="2025-09-12T10:18:59.983449091Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b\"" Sep 12 10:18:59.985139 containerd[1518]: time="2025-09-12T10:18:59.984041928Z" level=info msg="StartContainer for \"16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b\"" Sep 12 10:19:00.053264 systemd[1]: Started cri-containerd-16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b.scope - libcontainer container 16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b. Sep 12 10:19:00.084165 systemd[1]: cri-containerd-16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b.scope: Deactivated successfully. Sep 12 10:19:00.113572 containerd[1518]: time="2025-09-12T10:19:00.113489357Z" level=info msg="StartContainer for \"16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b\" returns successfully" Sep 12 10:19:00.119621 kubelet[2613]: I0912 10:19:00.119516 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-s4pjw" podStartSLOduration=2.070833598 podStartE2EDuration="17.119489581s" podCreationTimestamp="2025-09-12 10:18:43 +0000 UTC" firstStartedPulling="2025-09-12 10:18:44.289735151 +0000 UTC m=+6.670506913" lastFinishedPulling="2025-09-12 10:18:59.338391134 +0000 UTC m=+21.719162896" observedRunningTime="2025-09-12 10:18:59.972712506 +0000 UTC m=+22.353484268" watchObservedRunningTime="2025-09-12 10:19:00.119489581 +0000 UTC m=+22.500261353" Sep 12 10:19:00.148650 containerd[1518]: time="2025-09-12T10:19:00.148582966Z" level=info msg="shim disconnected" id=16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b namespace=k8s.io Sep 12 10:19:00.148650 containerd[1518]: time="2025-09-12T10:19:00.148643129Z" level=warning msg="cleaning up after shim disconnected" id=16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b namespace=k8s.io Sep 12 10:19:00.148650 containerd[1518]: time="2025-09-12T10:19:00.148653518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:19:00.412731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b-rootfs.mount: Deactivated successfully. 
Sep 12 10:19:00.968362 kubelet[2613]: E0912 10:19:00.967443 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:00.968362 kubelet[2613]: E0912 10:19:00.967594 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:00.969518 containerd[1518]: time="2025-09-12T10:19:00.969480357Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:19:00.990279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142386087.mount: Deactivated successfully. Sep 12 10:19:00.990802 containerd[1518]: time="2025-09-12T10:19:00.990755310Z" level=info msg="CreateContainer within sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\"" Sep 12 10:19:00.991229 containerd[1518]: time="2025-09-12T10:19:00.991181963Z" level=info msg="StartContainer for \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\"" Sep 12 10:19:01.035334 systemd[1]: Started cri-containerd-43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb.scope - libcontainer container 43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb. Sep 12 10:19:01.112714 containerd[1518]: time="2025-09-12T10:19:01.112610411Z" level=info msg="StartContainer for \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\" returns successfully" Sep 12 10:19:01.286627 kubelet[2613]: I0912 10:19:01.286588 2613 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 10:19:01.323128 systemd[1]: Created slice kubepods-burstable-pod473bb6b6_74ef_4181_a29b_85ff57196798.slice - libcontainer container kubepods-burstable-pod473bb6b6_74ef_4181_a29b_85ff57196798.slice. Sep 12 10:19:01.330313 kubelet[2613]: I0912 10:19:01.330247 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473bb6b6-74ef-4181-a29b-85ff57196798-config-volume\") pod \"coredns-7c65d6cfc9-4n4b6\" (UID: \"473bb6b6-74ef-4181-a29b-85ff57196798\") " pod="kube-system/coredns-7c65d6cfc9-4n4b6" Sep 12 10:19:01.330313 kubelet[2613]: I0912 10:19:01.330304 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlb88\" (UniqueName: \"kubernetes.io/projected/473bb6b6-74ef-4181-a29b-85ff57196798-kube-api-access-hlb88\") pod \"coredns-7c65d6cfc9-4n4b6\" (UID: \"473bb6b6-74ef-4181-a29b-85ff57196798\") " pod="kube-system/coredns-7c65d6cfc9-4n4b6" Sep 12 10:19:01.333975 systemd[1]: Created slice kubepods-burstable-podc710b523_a349_4124_b906_e056d3f320ef.slice - libcontainer container kubepods-burstable-podc710b523_a349_4124_b906_e056d3f320ef.slice. 
Sep 12 10:19:01.431438 kubelet[2613]: I0912 10:19:01.431372 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g6p\" (UniqueName: \"kubernetes.io/projected/c710b523-a349-4124-b906-e056d3f320ef-kube-api-access-w8g6p\") pod \"coredns-7c65d6cfc9-thxjh\" (UID: \"c710b523-a349-4124-b906-e056d3f320ef\") " pod="kube-system/coredns-7c65d6cfc9-thxjh" Sep 12 10:19:01.431604 kubelet[2613]: I0912 10:19:01.431506 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c710b523-a349-4124-b906-e056d3f320ef-config-volume\") pod \"coredns-7c65d6cfc9-thxjh\" (UID: \"c710b523-a349-4124-b906-e056d3f320ef\") " pod="kube-system/coredns-7c65d6cfc9-thxjh" Sep 12 10:19:01.630784 kubelet[2613]: E0912 10:19:01.630600 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:01.632768 containerd[1518]: time="2025-09-12T10:19:01.632723536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4n4b6,Uid:473bb6b6-74ef-4181-a29b-85ff57196798,Namespace:kube-system,Attempt:0,}" Sep 12 10:19:01.638941 kubelet[2613]: E0912 10:19:01.638895 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:01.640063 containerd[1518]: time="2025-09-12T10:19:01.640009960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-thxjh,Uid:c710b523-a349-4124-b906-e056d3f320ef,Namespace:kube-system,Attempt:0,}" Sep 12 10:19:01.973512 kubelet[2613]: E0912 10:19:01.973034 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:02.058906 kubelet[2613]: I0912 10:19:02.058451 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cvdfk" podStartSLOduration=5.88018765 podStartE2EDuration="19.058431449s" podCreationTimestamp="2025-09-12 10:18:43 +0000 UTC" firstStartedPulling="2025-09-12 10:18:44.200747392 +0000 UTC m=+6.581519155" lastFinishedPulling="2025-09-12 10:18:57.378991192 +0000 UTC m=+19.759762954" observedRunningTime="2025-09-12 10:19:02.05837851 +0000 UTC m=+24.439150292" watchObservedRunningTime="2025-09-12 10:19:02.058431449 +0000 UTC m=+24.439203221" Sep 12 10:19:02.974527 kubelet[2613]: E0912 10:19:02.974488 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:03.428359 systemd-networkd[1447]: cilium_host: Link UP Sep 12 10:19:03.428706 systemd-networkd[1447]: cilium_net: Link UP Sep 12 10:19:03.429001 systemd-networkd[1447]: cilium_net: Gained carrier Sep 12 10:19:03.429500 systemd-networkd[1447]: cilium_host: Gained carrier Sep 12 10:19:03.553732 systemd-networkd[1447]: cilium_vxlan: Link UP Sep 12 10:19:03.553948 systemd-networkd[1447]: cilium_vxlan: Gained carrier Sep 12 10:19:03.611302 systemd-networkd[1447]: cilium_net: Gained IPv6LL Sep 12 10:19:03.793273 kernel: NET: Registered PF_ALG protocol family Sep 12 10:19:03.867437 systemd-networkd[1447]: cilium_host: Gained IPv6LL Sep 12 10:19:03.975894 kubelet[2613]: E0912 10:19:03.975850 2613 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:04.551273 systemd-networkd[1447]: lxc_health: Link UP Sep 12 10:19:04.554002 systemd-networkd[1447]: lxc_health: Gained carrier Sep 12 10:19:04.707079 kernel: eth0: renamed from tmpbd101 Sep 12 10:19:04.712751 systemd-networkd[1447]: lxcc23311b0d355: Link UP Sep 12 10:19:04.713408 systemd-networkd[1447]: lxcc23311b0d355: Gained carrier Sep 12 10:19:04.730087 kernel: eth0: renamed from tmp84f89 Sep 12 10:19:04.737837 systemd-networkd[1447]: lxc48edbfe63bd5: Link UP Sep 12 10:19:04.738351 systemd-networkd[1447]: lxc48edbfe63bd5: Gained carrier Sep 12 10:19:04.827230 systemd-networkd[1447]: cilium_vxlan: Gained IPv6LL Sep 12 10:19:04.981000 kubelet[2613]: E0912 10:19:04.980577 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:05.395820 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:46514.service - OpenSSH per-connection server daemon (10.0.0.1:46514). Sep 12 10:19:05.458258 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 46514 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:05.461020 sshd-session[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:05.475300 systemd-logind[1498]: New session 8 of user core. Sep 12 10:19:05.485342 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 10:19:05.715350 sshd[3837]: Connection closed by 10.0.0.1 port 46514 Sep 12 10:19:05.716723 sshd-session[3835]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:05.722089 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:46514.service: Deactivated successfully. Sep 12 10:19:05.725413 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 10:19:05.726171 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit. Sep 12 10:19:05.727445 systemd-logind[1498]: Removed session 8. Sep 12 10:19:05.981173 systemd-networkd[1447]: lxc_health: Gained IPv6LL Sep 12 10:19:05.984926 kubelet[2613]: E0912 10:19:05.984880 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:06.363358 systemd-networkd[1447]: lxcc23311b0d355: Gained IPv6LL Sep 12 10:19:06.747488 systemd-networkd[1447]: lxc48edbfe63bd5: Gained IPv6LL Sep 12 10:19:06.986698 kubelet[2613]: E0912 10:19:06.986622 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:09.213499 containerd[1518]: time="2025-09-12T10:19:09.212497555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:19:09.213499 containerd[1518]: time="2025-09-12T10:19:09.212586873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:19:09.213499 containerd[1518]: time="2025-09-12T10:19:09.212601270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:09.213499 containerd[1518]: time="2025-09-12T10:19:09.212709954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:09.227255 containerd[1518]: time="2025-09-12T10:19:09.226385465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:19:09.227255 containerd[1518]: time="2025-09-12T10:19:09.226460446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:19:09.227255 containerd[1518]: time="2025-09-12T10:19:09.226480634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:09.227255 containerd[1518]: time="2025-09-12T10:19:09.226624505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:19:09.247261 systemd[1]: Started cri-containerd-bd1010bc7e2a45b259775348cfc7a5acdd3f517306a35abbd19614fd85b2c72e.scope - libcontainer container bd1010bc7e2a45b259775348cfc7a5acdd3f517306a35abbd19614fd85b2c72e. Sep 12 10:19:09.253218 systemd[1]: Started cri-containerd-84f89917439eddd646ced0bf203e49a4b9ecd907d984c1492927b7e9264ff2a4.scope - libcontainer container 84f89917439eddd646ced0bf203e49a4b9ecd907d984c1492927b7e9264ff2a4. Sep 12 10:19:09.264272 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 10:19:09.268146 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 10:19:09.298045 containerd[1518]: time="2025-09-12T10:19:09.297982964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-thxjh,Uid:c710b523-a349-4124-b906-e056d3f320ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd1010bc7e2a45b259775348cfc7a5acdd3f517306a35abbd19614fd85b2c72e\"" Sep 12 10:19:09.304097 containerd[1518]: time="2025-09-12T10:19:09.304067513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4n4b6,Uid:473bb6b6-74ef-4181-a29b-85ff57196798,Namespace:kube-system,Attempt:0,} returns sandbox id \"84f89917439eddd646ced0bf203e49a4b9ecd907d984c1492927b7e9264ff2a4\"" Sep 12 10:19:09.305733 kubelet[2613]: E0912 10:19:09.305683 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:09.307099 kubelet[2613]: E0912 10:19:09.307081 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:09.308756 containerd[1518]: time="2025-09-12T10:19:09.308723648Z" level=info msg="CreateContainer within sandbox \"bd1010bc7e2a45b259775348cfc7a5acdd3f517306a35abbd19614fd85b2c72e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:19:09.310420 containerd[1518]: time="2025-09-12T10:19:09.310397033Z" level=info msg="CreateContainer within sandbox \"84f89917439eddd646ced0bf203e49a4b9ecd907d984c1492927b7e9264ff2a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 10:19:09.518158 containerd[1518]: time="2025-09-12T10:19:09.518102907Z" level=info 
msg="CreateContainer within sandbox \"84f89917439eddd646ced0bf203e49a4b9ecd907d984c1492927b7e9264ff2a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5effa45e49f6056d15d3b8f10869001fdcc9c4c1cb56a5c3a258e1a6028b6fc7\"" Sep 12 10:19:09.519085 containerd[1518]: time="2025-09-12T10:19:09.518694719Z" level=info msg="StartContainer for \"5effa45e49f6056d15d3b8f10869001fdcc9c4c1cb56a5c3a258e1a6028b6fc7\"" Sep 12 10:19:09.547235 systemd[1]: Started cri-containerd-5effa45e49f6056d15d3b8f10869001fdcc9c4c1cb56a5c3a258e1a6028b6fc7.scope - libcontainer container 5effa45e49f6056d15d3b8f10869001fdcc9c4c1cb56a5c3a258e1a6028b6fc7. Sep 12 10:19:09.558599 containerd[1518]: time="2025-09-12T10:19:09.558527047Z" level=info msg="CreateContainer within sandbox \"bd1010bc7e2a45b259775348cfc7a5acdd3f517306a35abbd19614fd85b2c72e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74d512f02268eb7d7994703e3a9f529b39f546afc2e0a9f7a7854454013209a3\"" Sep 12 10:19:09.560113 containerd[1518]: time="2025-09-12T10:19:09.559219098Z" level=info msg="StartContainer for \"74d512f02268eb7d7994703e3a9f529b39f546afc2e0a9f7a7854454013209a3\"" Sep 12 10:19:09.598855 systemd[1]: Started cri-containerd-74d512f02268eb7d7994703e3a9f529b39f546afc2e0a9f7a7854454013209a3.scope - libcontainer container 74d512f02268eb7d7994703e3a9f529b39f546afc2e0a9f7a7854454013209a3. Sep 12 10:19:09.726666 containerd[1518]: time="2025-09-12T10:19:09.726577755Z" level=info msg="StartContainer for \"5effa45e49f6056d15d3b8f10869001fdcc9c4c1cb56a5c3a258e1a6028b6fc7\" returns successfully" Sep 12 10:19:09.726666 containerd[1518]: time="2025-09-12T10:19:09.726626436Z" level=info msg="StartContainer for \"74d512f02268eb7d7994703e3a9f529b39f546afc2e0a9f7a7854454013209a3\" returns successfully" Sep 12 10:19:09.993343 kubelet[2613]: E0912 10:19:09.993218 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:09.996225 kubelet[2613]: E0912 10:19:09.995372 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:10.222575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926915066.mount: Deactivated successfully. 
Sep 12 10:19:10.256825 kubelet[2613]: I0912 10:19:10.256513 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-thxjh" podStartSLOduration=27.256483176 podStartE2EDuration="27.256483176s" podCreationTimestamp="2025-09-12 10:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:19:10.255842111 +0000 UTC m=+32.636613873" watchObservedRunningTime="2025-09-12 10:19:10.256483176 +0000 UTC m=+32.637254938" Sep 12 10:19:10.274198 kubelet[2613]: I0912 10:19:10.274133 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4n4b6" podStartSLOduration=27.274111452 podStartE2EDuration="27.274111452s" podCreationTimestamp="2025-09-12 10:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:19:10.274033787 +0000 UTC m=+32.654805549" watchObservedRunningTime="2025-09-12 10:19:10.274111452 +0000 UTC m=+32.654883215" Sep 12 10:19:10.732958 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452). Sep 12 10:19:10.787905 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:10.789761 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:10.794937 systemd-logind[1498]: New session 9 of user core. Sep 12 10:19:10.804268 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 10:19:10.969925 sshd[4030]: Connection closed by 10.0.0.1 port 53452 Sep 12 10:19:10.970347 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:10.974484 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:53452.service: Deactivated successfully. Sep 12 10:19:10.976975 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 10:19:10.977706 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. Sep 12 10:19:10.978700 systemd-logind[1498]: Removed session 9. Sep 12 10:19:10.999553 kubelet[2613]: E0912 10:19:10.997562 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:10.999553 kubelet[2613]: E0912 10:19:10.997607 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:11.999177 kubelet[2613]: E0912 10:19:11.999135 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:11.999474 kubelet[2613]: E0912 10:19:11.999373 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:15.983700 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:53468.service - OpenSSH per-connection server daemon (10.0.0.1:53468). 
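The pod_startup_latency_tracker entries above report podStartSLOduration=27.256483176s for coredns-7c65d6cfc9-thxjh. That figure is simply the gap between podCreationTimestamp and the observed running time, both of which appear verbatim in the entry; the sketch below recomputes it with the standard library (the layout string is Go's default time format, which matches those fields).

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed in the kubelet entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-09-12 10:18:43 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-09-12 10:19:10.256483176 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 27.256483176s, matching podStartSLOduration in the log.
	fmt.Println("podStartSLOduration =", running.Sub(created))
}
```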
Sep 12 10:19:16.026926 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 53468 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:16.028656 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:16.033568 systemd-logind[1498]: New session 10 of user core. Sep 12 10:19:16.047407 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 10:19:16.176854 sshd[4057]: Connection closed by 10.0.0.1 port 53468 Sep 12 10:19:16.177285 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:16.181547 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:53468.service: Deactivated successfully. Sep 12 10:19:16.184313 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 10:19:16.185102 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. Sep 12 10:19:16.186105 systemd-logind[1498]: Removed session 10. Sep 12 10:19:21.197183 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:56014.service - OpenSSH per-connection server daemon (10.0.0.1:56014). Sep 12 10:19:21.254222 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 56014 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:21.256271 sshd-session[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:21.261502 systemd-logind[1498]: New session 11 of user core. Sep 12 10:19:21.267267 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 10:19:21.393998 sshd[4073]: Connection closed by 10.0.0.1 port 56014 Sep 12 10:19:21.394452 sshd-session[4071]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:21.412865 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:56014.service: Deactivated successfully. Sep 12 10:19:21.415359 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 10:19:21.418205 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. Sep 12 10:19:21.427860 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:56016.service - OpenSSH per-connection server daemon (10.0.0.1:56016). Sep 12 10:19:21.429580 systemd-logind[1498]: Removed session 11. Sep 12 10:19:21.468781 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 56016 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:21.470574 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:21.475363 systemd-logind[1498]: New session 12 of user core. Sep 12 10:19:21.485217 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 10:19:21.813920 sshd[4090]: Connection closed by 10.0.0.1 port 56016 Sep 12 10:19:21.816194 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:21.823635 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:56016.service: Deactivated successfully. Sep 12 10:19:21.826667 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 10:19:21.829933 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. Sep 12 10:19:21.840274 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:56020.service - OpenSSH per-connection server daemon (10.0.0.1:56020). Sep 12 10:19:21.842310 systemd-logind[1498]: Removed session 12. 
Sep 12 10:19:21.885926 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 56020 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:21.887619 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:21.892442 systemd-logind[1498]: New session 13 of user core. Sep 12 10:19:21.904180 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 10:19:22.019816 sshd[4104]: Connection closed by 10.0.0.1 port 56020 Sep 12 10:19:22.020255 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:22.024760 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:56020.service: Deactivated successfully. Sep 12 10:19:22.027152 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 10:19:22.027853 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit. Sep 12 10:19:22.028805 systemd-logind[1498]: Removed session 13. Sep 12 10:19:27.048351 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:56022.service - OpenSSH per-connection server daemon (10.0.0.1:56022). Sep 12 10:19:27.094938 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 56022 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:27.096877 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:27.102822 systemd-logind[1498]: New session 14 of user core. Sep 12 10:19:27.112290 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 10:19:27.237772 sshd[4120]: Connection closed by 10.0.0.1 port 56022 Sep 12 10:19:27.238259 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:27.244318 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:56022.service: Deactivated successfully. Sep 12 10:19:27.247006 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 10:19:27.247970 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit. Sep 12 10:19:27.249260 systemd-logind[1498]: Removed session 14. Sep 12 10:19:32.264492 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:59820.service - OpenSSH per-connection server daemon (10.0.0.1:59820). Sep 12 10:19:32.395835 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 59820 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:32.398116 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:32.405259 systemd-logind[1498]: New session 15 of user core. Sep 12 10:19:32.413344 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 10:19:32.544547 sshd[4136]: Connection closed by 10.0.0.1 port 59820 Sep 12 10:19:32.545315 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:32.550663 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:59820.service: Deactivated successfully. Sep 12 10:19:32.553693 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 10:19:32.555280 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. Sep 12 10:19:32.556669 systemd-logind[1498]: Removed session 15. Sep 12 10:19:37.557412 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:59834.service - OpenSSH per-connection server daemon (10.0.0.1:59834). 
Sep 12 10:19:37.599395 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 59834 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:37.600921 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:37.605103 systemd-logind[1498]: New session 16 of user core. Sep 12 10:19:37.617190 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 10:19:37.730190 sshd[4151]: Connection closed by 10.0.0.1 port 59834 Sep 12 10:19:37.730620 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:37.743294 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:59834.service: Deactivated successfully. Sep 12 10:19:37.745779 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 10:19:37.748178 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. Sep 12 10:19:37.757710 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:59836.service - OpenSSH per-connection server daemon (10.0.0.1:59836). Sep 12 10:19:37.759151 systemd-logind[1498]: Removed session 16. Sep 12 10:19:37.793856 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 59836 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:37.795220 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:37.799698 systemd-logind[1498]: New session 17 of user core. Sep 12 10:19:37.812254 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 10:19:38.051990 sshd[4166]: Connection closed by 10.0.0.1 port 59836 Sep 12 10:19:38.052513 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:38.066262 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:59836.service: Deactivated successfully. Sep 12 10:19:38.068514 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 10:19:38.070385 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. Sep 12 10:19:38.079455 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:59842.service - OpenSSH per-connection server daemon (10.0.0.1:59842). Sep 12 10:19:38.080594 systemd-logind[1498]: Removed session 17. Sep 12 10:19:38.122737 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 59842 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:38.124508 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:38.129355 systemd-logind[1498]: New session 18 of user core. Sep 12 10:19:38.139231 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 10:19:39.216958 sshd[4181]: Connection closed by 10.0.0.1 port 59842 Sep 12 10:19:39.219116 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:39.228722 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:59842.service: Deactivated successfully. Sep 12 10:19:39.233009 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 10:19:39.234913 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. Sep 12 10:19:39.245424 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:59848.service - OpenSSH per-connection server daemon (10.0.0.1:59848). Sep 12 10:19:39.246929 systemd-logind[1498]: Removed session 18. 
Sep 12 10:19:39.285008 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 59848 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:39.287207 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:39.291762 systemd-logind[1498]: New session 19 of user core. Sep 12 10:19:39.301200 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 10:19:39.562481 sshd[4207]: Connection closed by 10.0.0.1 port 59848 Sep 12 10:19:39.562920 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:39.575879 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:59848.service: Deactivated successfully. Sep 12 10:19:39.578018 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 10:19:39.580003 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. Sep 12 10:19:39.594414 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:59862.service - OpenSSH per-connection server daemon (10.0.0.1:59862). Sep 12 10:19:39.595514 systemd-logind[1498]: Removed session 19. Sep 12 10:19:39.633123 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 59862 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:39.634851 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:39.639530 systemd-logind[1498]: New session 20 of user core. Sep 12 10:19:39.649264 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 10:19:39.768283 sshd[4221]: Connection closed by 10.0.0.1 port 59862 Sep 12 10:19:39.768687 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:39.773406 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:59862.service: Deactivated successfully. Sep 12 10:19:39.775848 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 10:19:39.776607 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit. Sep 12 10:19:39.777544 systemd-logind[1498]: Removed session 20. Sep 12 10:19:44.785401 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:37876.service - OpenSSH per-connection server daemon (10.0.0.1:37876). Sep 12 10:19:44.826927 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 37876 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:44.828537 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:44.832939 systemd-logind[1498]: New session 21 of user core. Sep 12 10:19:44.843202 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 10:19:44.952705 sshd[4239]: Connection closed by 10.0.0.1 port 37876 Sep 12 10:19:44.953133 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:44.957662 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:37876.service: Deactivated successfully. Sep 12 10:19:44.960589 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 10:19:44.961457 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit. Sep 12 10:19:44.962568 systemd-logind[1498]: Removed session 21. Sep 12 10:19:49.980163 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:60514.service - OpenSSH per-connection server daemon (10.0.0.1:60514). 
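Sessions 9 through 21 above all follow the same pattern: systemd-logind logs "New session N of user core" and, once the connection closes, "Removed session N". A small, self-contained sketch for pairing those messages and measuring session lifetimes is below; the two sample lines are the session-21 entries from this log, the regexp is only illustrative (not a general journald parser), and the year is re-attached by hand because journal console lines omit it.

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Matches the timestamp, the New/Removed verb, and the session number.
var entryRe = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)`)

func main() {
	lines := []string{
		"Sep 12 10:19:44.832939 systemd-logind[1498]: New session 21 of user core.",
		"Sep 12 10:19:44.962568 systemd-logind[1498]: Removed session 21.",
	}

	started := map[string]time.Time{}
	for _, line := range lines {
		m := entryRe.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		// Console journal lines carry no year; 2025 is assumed from the boot banner.
		ts, err := time.Parse("2006 Jan 2 15:04:05.999999", "2025 "+m[1])
		if err != nil {
			panic(err)
		}
		switch m[2] {
		case "New":
			started[m[3]] = ts
		case "Removed":
			fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(started[m[3]]))
		}
	}
}
```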
Sep 12 10:19:50.020652 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 60514 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:50.022113 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:50.026222 systemd-logind[1498]: New session 22 of user core. Sep 12 10:19:50.036188 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 10:19:50.141349 sshd[4257]: Connection closed by 10.0.0.1 port 60514 Sep 12 10:19:50.141709 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:50.146099 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:60514.service: Deactivated successfully. Sep 12 10:19:50.148412 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 10:19:50.149116 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit. Sep 12 10:19:50.149940 systemd-logind[1498]: Removed session 22. Sep 12 10:19:55.154927 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:60526.service - OpenSSH per-connection server daemon (10.0.0.1:60526). Sep 12 10:19:55.195876 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 60526 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:19:55.197771 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:19:55.201820 systemd-logind[1498]: New session 23 of user core. Sep 12 10:19:55.208200 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 10:19:55.316962 sshd[4272]: Connection closed by 10.0.0.1 port 60526 Sep 12 10:19:55.317390 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Sep 12 10:19:55.321986 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:60526.service: Deactivated successfully. Sep 12 10:19:55.324527 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 10:19:55.325273 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit. Sep 12 10:19:55.326351 systemd-logind[1498]: Removed session 23. Sep 12 10:19:59.865839 kubelet[2613]: E0912 10:19:59.865788 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:19:59.865839 kubelet[2613]: E0912 10:19:59.865793 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:00.332922 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:46164.service - OpenSSH per-connection server daemon (10.0.0.1:46164). Sep 12 10:20:00.374484 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 46164 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:20:00.376104 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:20:00.380454 systemd-logind[1498]: New session 24 of user core. Sep 12 10:20:00.387193 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 10:20:00.496787 sshd[4288]: Connection closed by 10.0.0.1 port 46164 Sep 12 10:20:00.497313 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Sep 12 10:20:00.510164 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:46164.service: Deactivated successfully. Sep 12 10:20:00.512411 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 10:20:00.514163 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit. 
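The recurring "Nameserver limits exceeded" errors above come from kubelet capping a pod's resolv.conf at three nameservers; the message shows the applied line after the extras were dropped ("1.1.1.1 1.0.0.1 8.8.8.8"). The sketch below only illustrates that truncation rule; it is not kubelet's code, and the fourth address is a made-up placeholder since the omitted server never appears in the log.

```go
package main

import "fmt"

// kubelet's per-pod limit on resolv.conf nameserver entries.
const maxDNSNameservers = 3

// applyNameserverLimit keeps the first three servers and reports the rest as omitted.
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxDNSNameservers {
		return servers, nil
	}
	return servers[:maxDNSNameservers], servers[maxDNSNameservers:]
}

func main() {
	// "192.0.2.53" is a documentation-range placeholder for the omitted entry.
	applied, omitted := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"})
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```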
Sep 12 10:20:00.524442 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:46174.service - OpenSSH per-connection server daemon (10.0.0.1:46174). Sep 12 10:20:00.525546 systemd-logind[1498]: Removed session 24. Sep 12 10:20:00.562641 sshd[4300]: Accepted publickey for core from 10.0.0.1 port 46174 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:20:00.563927 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:20:00.568419 systemd-logind[1498]: New session 25 of user core. Sep 12 10:20:00.577201 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 10:20:01.941072 containerd[1518]: time="2025-09-12T10:20:01.941003765Z" level=info msg="StopContainer for \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\" with timeout 30 (s)" Sep 12 10:20:01.942305 containerd[1518]: time="2025-09-12T10:20:01.942156021Z" level=info msg="Stop container \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\" with signal terminated" Sep 12 10:20:01.960591 systemd[1]: cri-containerd-9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662.scope: Deactivated successfully. Sep 12 10:20:01.985899 containerd[1518]: time="2025-09-12T10:20:01.983634210Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 10:20:01.985694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662-rootfs.mount: Deactivated successfully. Sep 12 10:20:01.988623 containerd[1518]: time="2025-09-12T10:20:01.988556615Z" level=info msg="StopContainer for \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\" with timeout 2 (s)" Sep 12 10:20:01.989104 containerd[1518]: time="2025-09-12T10:20:01.988942069Z" level=info msg="Stop container \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\" with signal terminated" Sep 12 10:20:01.997763 systemd-networkd[1447]: lxc_health: Link DOWN Sep 12 10:20:01.997773 systemd-networkd[1447]: lxc_health: Lost carrier Sep 12 10:20:02.000652 containerd[1518]: time="2025-09-12T10:20:02.000566148Z" level=info msg="shim disconnected" id=9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662 namespace=k8s.io Sep 12 10:20:02.000652 containerd[1518]: time="2025-09-12T10:20:02.000645148Z" level=warning msg="cleaning up after shim disconnected" id=9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662 namespace=k8s.io Sep 12 10:20:02.000652 containerd[1518]: time="2025-09-12T10:20:02.000653965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:02.018348 systemd[1]: cri-containerd-43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb.scope: Deactivated successfully. Sep 12 10:20:02.018751 systemd[1]: cri-containerd-43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb.scope: Consumed 8.590s CPU time, 124.1M memory peak, 752K read from disk, 13.3M written to disk. 
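The StopContainer entries above show the graceful-stop pattern: the container is signalled ("Stop container ... with signal terminated") and given a timeout, 30 s for the operator container and 2 s for the cilium agent, before a hard kill would follow. The sketch below demonstrates that pattern against a locally started process using only the standard library; it is not containerd's implementation, just the SIGTERM-then-escalate shape the log describes.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGracePeriod sends SIGTERM, waits up to the grace period, and
// escalates to SIGKILL if the process is still running afterwards.
func stopWithGracePeriod(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate, mirroring a timed-out StopContainer
		return <-done
	}
}

func main() {
	// "sleep" exits on SIGTERM, so this demo takes the graceful path.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println("stop result:", stopWithGracePeriod(cmd, 2*time.Second))
}
```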
Sep 12 10:20:02.024151 containerd[1518]: time="2025-09-12T10:20:02.024022815Z" level=info msg="StopContainer for \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\" returns successfully" Sep 12 10:20:02.029501 containerd[1518]: time="2025-09-12T10:20:02.029461778Z" level=info msg="StopPodSandbox for \"2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87\"" Sep 12 10:20:02.041422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb-rootfs.mount: Deactivated successfully. Sep 12 10:20:02.048971 containerd[1518]: time="2025-09-12T10:20:02.048886132Z" level=info msg="shim disconnected" id=43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb namespace=k8s.io Sep 12 10:20:02.048971 containerd[1518]: time="2025-09-12T10:20:02.048953360Z" level=warning msg="cleaning up after shim disconnected" id=43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb namespace=k8s.io Sep 12 10:20:02.048971 containerd[1518]: time="2025-09-12T10:20:02.048962918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:02.050349 containerd[1518]: time="2025-09-12T10:20:02.029510712Z" level=info msg="Container to stop \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:20:02.052837 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87-shm.mount: Deactivated successfully. Sep 12 10:20:02.060563 systemd[1]: cri-containerd-2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87.scope: Deactivated successfully. Sep 12 10:20:02.069547 containerd[1518]: time="2025-09-12T10:20:02.069477689Z" level=info msg="StopContainer for \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\" returns successfully" Sep 12 10:20:02.070774 containerd[1518]: time="2025-09-12T10:20:02.070717742Z" level=info msg="StopPodSandbox for \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\"" Sep 12 10:20:02.070868 containerd[1518]: time="2025-09-12T10:20:02.070780321Z" level=info msg="Container to stop \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:20:02.070868 containerd[1518]: time="2025-09-12T10:20:02.070820127Z" level=info msg="Container to stop \"16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:20:02.070868 containerd[1518]: time="2025-09-12T10:20:02.070834194Z" level=info msg="Container to stop \"10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:20:02.070868 containerd[1518]: time="2025-09-12T10:20:02.070844434Z" level=info msg="Container to stop \"df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:20:02.070868 containerd[1518]: time="2025-09-12T10:20:02.070854843Z" level=info msg="Container to stop \"9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 10:20:02.073064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056-shm.mount: 
Deactivated successfully. Sep 12 10:20:02.078114 systemd[1]: cri-containerd-e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056.scope: Deactivated successfully. Sep 12 10:20:02.101961 containerd[1518]: time="2025-09-12T10:20:02.101861186Z" level=info msg="shim disconnected" id=2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87 namespace=k8s.io Sep 12 10:20:02.101961 containerd[1518]: time="2025-09-12T10:20:02.101931680Z" level=warning msg="cleaning up after shim disconnected" id=2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87 namespace=k8s.io Sep 12 10:20:02.101961 containerd[1518]: time="2025-09-12T10:20:02.101944184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:02.108010 containerd[1518]: time="2025-09-12T10:20:02.107940920Z" level=info msg="shim disconnected" id=e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056 namespace=k8s.io Sep 12 10:20:02.108010 containerd[1518]: time="2025-09-12T10:20:02.108007407Z" level=warning msg="cleaning up after shim disconnected" id=e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056 namespace=k8s.io Sep 12 10:20:02.108267 containerd[1518]: time="2025-09-12T10:20:02.108016365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:02.128261 containerd[1518]: time="2025-09-12T10:20:02.128203400Z" level=info msg="TearDown network for sandbox \"2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87\" successfully" Sep 12 10:20:02.128832 containerd[1518]: time="2025-09-12T10:20:02.128436405Z" level=info msg="StopPodSandbox for \"2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87\" returns successfully" Sep 12 10:20:02.131903 containerd[1518]: time="2025-09-12T10:20:02.131850320Z" level=info msg="TearDown network for sandbox \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" successfully" Sep 12 10:20:02.131903 containerd[1518]: time="2025-09-12T10:20:02.131887471Z" level=info msg="StopPodSandbox for \"e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056\" returns successfully" Sep 12 10:20:02.190675 kubelet[2613]: I0912 10:20:02.190617 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cni-path\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.190675 kubelet[2613]: I0912 10:20:02.190661 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-cgroup\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.190675 kubelet[2613]: I0912 10:20:02.190690 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c86fbb4d-7274-4976-aeba-18627e0d63d7-cilium-config-path\") pod \"c86fbb4d-7274-4976-aeba-18627e0d63d7\" (UID: \"c86fbb4d-7274-4976-aeba-18627e0d63d7\") " Sep 12 10:20:02.191491 kubelet[2613]: I0912 10:20:02.190712 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-config-path\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191491 kubelet[2613]: 
I0912 10:20:02.190733 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kvhx\" (UniqueName: \"kubernetes.io/projected/c86fbb4d-7274-4976-aeba-18627e0d63d7-kube-api-access-9kvhx\") pod \"c86fbb4d-7274-4976-aeba-18627e0d63d7\" (UID: \"c86fbb4d-7274-4976-aeba-18627e0d63d7\") " Sep 12 10:20:02.191491 kubelet[2613]: I0912 10:20:02.190752 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-lib-modules\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191491 kubelet[2613]: I0912 10:20:02.190770 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.191491 kubelet[2613]: I0912 10:20:02.190769 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cni-path" (OuterVolumeSpecName: "cni-path") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.191649 kubelet[2613]: I0912 10:20:02.190774 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-etc-cni-netd\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191649 kubelet[2613]: I0912 10:20:02.190844 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.191649 kubelet[2613]: I0912 10:20:02.190876 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-hostproc\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191649 kubelet[2613]: I0912 10:20:02.190929 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-xtables-lock\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191649 kubelet[2613]: I0912 10:20:02.190953 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6299\" (UniqueName: \"kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-kube-api-access-k6299\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191649 kubelet[2613]: I0912 10:20:02.190971 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbb45aa0-f13d-4916-b670-a8a588d62186-clustermesh-secrets\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191000 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-kernel\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191016 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-net\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191030 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-hubble-tls\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191048 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-run\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191108 2613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-bpf-maps\") pod \"dbb45aa0-f13d-4916-b670-a8a588d62186\" (UID: \"dbb45aa0-f13d-4916-b670-a8a588d62186\") " Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191178 2613 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.191811 kubelet[2613]: I0912 10:20:02.191189 2613 
reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.191988 kubelet[2613]: I0912 10:20:02.191200 2613 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.191988 kubelet[2613]: I0912 10:20:02.191252 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.191988 kubelet[2613]: I0912 10:20:02.191274 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-hostproc" (OuterVolumeSpecName: "hostproc") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.191988 kubelet[2613]: I0912 10:20:02.191290 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.191988 kubelet[2613]: I0912 10:20:02.191380 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.194632 kubelet[2613]: I0912 10:20:02.194588 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 10:20:02.194704 kubelet[2613]: I0912 10:20:02.194681 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.194744 kubelet[2613]: I0912 10:20:02.194718 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.194744 kubelet[2613]: I0912 10:20:02.194737 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 10:20:02.195833 kubelet[2613]: I0912 10:20:02.195772 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb45aa0-f13d-4916-b670-a8a588d62186-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 10:20:02.196264 kubelet[2613]: I0912 10:20:02.196213 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86fbb4d-7274-4976-aeba-18627e0d63d7-kube-api-access-9kvhx" (OuterVolumeSpecName: "kube-api-access-9kvhx") pod "c86fbb4d-7274-4976-aeba-18627e0d63d7" (UID: "c86fbb4d-7274-4976-aeba-18627e0d63d7"). InnerVolumeSpecName "kube-api-access-9kvhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 10:20:02.197543 kubelet[2613]: I0912 10:20:02.197494 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-kube-api-access-k6299" (OuterVolumeSpecName: "kube-api-access-k6299") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "kube-api-access-k6299". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 10:20:02.198272 kubelet[2613]: I0912 10:20:02.198240 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86fbb4d-7274-4976-aeba-18627e0d63d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c86fbb4d-7274-4976-aeba-18627e0d63d7" (UID: "c86fbb4d-7274-4976-aeba-18627e0d63d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 10:20:02.198559 kubelet[2613]: I0912 10:20:02.198533 2613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dbb45aa0-f13d-4916-b670-a8a588d62186" (UID: "dbb45aa0-f13d-4916-b670-a8a588d62186"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.291987 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292031 2613 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kvhx\" (UniqueName: \"kubernetes.io/projected/c86fbb4d-7274-4976-aeba-18627e0d63d7-kube-api-access-9kvhx\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292043 2613 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292069 2613 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292080 2613 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292089 2613 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6299\" (UniqueName: \"kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-kube-api-access-k6299\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292099 2613 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbb45aa0-f13d-4916-b670-a8a588d62186-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292092 kubelet[2613]: I0912 10:20:02.292108 2613 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292647 kubelet[2613]: I0912 10:20:02.292117 2613 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292647 kubelet[2613]: I0912 10:20:02.292125 2613 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbb45aa0-f13d-4916-b670-a8a588d62186-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292647 kubelet[2613]: I0912 10:20:02.292134 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292647 kubelet[2613]: I0912 10:20:02.292142 2613 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbb45aa0-f13d-4916-b670-a8a588d62186-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.292647 kubelet[2613]: I0912 10:20:02.292157 2613 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c86fbb4d-7274-4976-aeba-18627e0d63d7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 10:20:02.941280 kubelet[2613]: E0912 10:20:02.941205 2613 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:20:02.950531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fa8773944038fc8c15cb412e2d3d7095eb7a6f7d9bf92f1b126087b15440e87-rootfs.mount: Deactivated successfully. Sep 12 10:20:02.950680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2fff3ba012f593ca447c7912a954a540ff481a2b13488ea08299db171803056-rootfs.mount: Deactivated successfully. Sep 12 10:20:02.950768 systemd[1]: var-lib-kubelet-pods-c86fbb4d\x2d7274\x2d4976\x2daeba\x2d18627e0d63d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9kvhx.mount: Deactivated successfully. Sep 12 10:20:02.950854 systemd[1]: var-lib-kubelet-pods-dbb45aa0\x2df13d\x2d4916\x2db670\x2da8a588d62186-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6299.mount: Deactivated successfully. Sep 12 10:20:02.950942 systemd[1]: var-lib-kubelet-pods-dbb45aa0\x2df13d\x2d4916\x2db670\x2da8a588d62186-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 10:20:02.951025 systemd[1]: var-lib-kubelet-pods-dbb45aa0\x2df13d\x2d4916\x2db670\x2da8a588d62186-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 10:20:03.118806 kubelet[2613]: I0912 10:20:03.118747 2613 scope.go:117] "RemoveContainer" containerID="43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb" Sep 12 10:20:03.126520 systemd[1]: Removed slice kubepods-burstable-poddbb45aa0_f13d_4916_b670_a8a588d62186.slice - libcontainer container kubepods-burstable-poddbb45aa0_f13d_4916_b670_a8a588d62186.slice. Sep 12 10:20:03.126775 systemd[1]: kubepods-burstable-poddbb45aa0_f13d_4916_b670_a8a588d62186.slice: Consumed 8.711s CPU time, 124.4M memory peak, 780K read from disk, 13.3M written to disk. Sep 12 10:20:03.128033 containerd[1518]: time="2025-09-12T10:20:03.127974746Z" level=info msg="RemoveContainer for \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\"" Sep 12 10:20:03.128189 systemd[1]: Removed slice kubepods-besteffort-podc86fbb4d_7274_4976_aeba_18627e0d63d7.slice - libcontainer container kubepods-besteffort-podc86fbb4d_7274_4976_aeba_18627e0d63d7.slice. 
Sep 12 10:20:03.132400 containerd[1518]: time="2025-09-12T10:20:03.132323699Z" level=info msg="RemoveContainer for \"43bf5dfc29842761c625d1f113788cab560c1fa1ad264cd99c629203c92f49cb\" returns successfully" Sep 12 10:20:03.132658 kubelet[2613]: I0912 10:20:03.132638 2613 scope.go:117] "RemoveContainer" containerID="16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b" Sep 12 10:20:03.133886 containerd[1518]: time="2025-09-12T10:20:03.133839758Z" level=info msg="RemoveContainer for \"16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b\"" Sep 12 10:20:03.137962 containerd[1518]: time="2025-09-12T10:20:03.137918045Z" level=info msg="RemoveContainer for \"16d13f464858eea3a8068eec6478f6490aef1de28880ca39b9093371e5ec4a7b\" returns successfully" Sep 12 10:20:03.138144 kubelet[2613]: I0912 10:20:03.138124 2613 scope.go:117] "RemoveContainer" containerID="9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69" Sep 12 10:20:03.140839 containerd[1518]: time="2025-09-12T10:20:03.140566280Z" level=info msg="RemoveContainer for \"9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69\"" Sep 12 10:20:03.145891 containerd[1518]: time="2025-09-12T10:20:03.145836979Z" level=info msg="RemoveContainer for \"9a5c68b7020ad056278362f7f05c74523878833b0e1972bbc60e3b012c431d69\" returns successfully" Sep 12 10:20:03.147091 kubelet[2613]: I0912 10:20:03.147044 2613 scope.go:117] "RemoveContainer" containerID="df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694" Sep 12 10:20:03.149291 containerd[1518]: time="2025-09-12T10:20:03.149244961Z" level=info msg="RemoveContainer for \"df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694\"" Sep 12 10:20:03.152925 containerd[1518]: time="2025-09-12T10:20:03.152881727Z" level=info msg="RemoveContainer for \"df005338e9e366e47cb2b75d1f2ea2dda6458c905644fe7694b1ea95e5616694\" returns successfully" Sep 12 10:20:03.153195 kubelet[2613]: I0912 10:20:03.153106 2613 scope.go:117] "RemoveContainer" containerID="10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608" Sep 12 10:20:03.154154 containerd[1518]: time="2025-09-12T10:20:03.154123913Z" level=info msg="RemoveContainer for \"10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608\"" Sep 12 10:20:03.157732 containerd[1518]: time="2025-09-12T10:20:03.157706468Z" level=info msg="RemoveContainer for \"10b8e5473a30acbff520befe0594a0d70648616d3cdf0974f9bf9ceec70b1608\" returns successfully" Sep 12 10:20:03.157959 kubelet[2613]: I0912 10:20:03.157876 2613 scope.go:117] "RemoveContainer" containerID="9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662" Sep 12 10:20:03.159209 containerd[1518]: time="2025-09-12T10:20:03.159164735Z" level=info msg="RemoveContainer for \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\"" Sep 12 10:20:03.166890 containerd[1518]: time="2025-09-12T10:20:03.166849172Z" level=info msg="RemoveContainer for \"9e5850b4395e0fbd1290e3fe7a99b1c63e24c9082978edc14cdf0130d5c3c662\" returns successfully" Sep 12 10:20:03.868661 kubelet[2613]: I0912 10:20:03.868595 2613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c86fbb4d-7274-4976-aeba-18627e0d63d7" path="/var/lib/kubelet/pods/c86fbb4d-7274-4976-aeba-18627e0d63d7/volumes" Sep 12 10:20:03.869314 kubelet[2613]: I0912 10:20:03.869286 2613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" path="/var/lib/kubelet/pods/dbb45aa0-f13d-4916-b670-a8a588d62186/volumes" Sep 12 
10:20:03.884740 sshd[4303]: Connection closed by 10.0.0.1 port 46174 Sep 12 10:20:03.886440 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Sep 12 10:20:03.896429 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:46174.service: Deactivated successfully. Sep 12 10:20:03.898818 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 10:20:03.899688 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit. Sep 12 10:20:03.910661 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:46184.service - OpenSSH per-connection server daemon (10.0.0.1:46184). Sep 12 10:20:03.912134 systemd-logind[1498]: Removed session 25. Sep 12 10:20:03.953076 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 46184 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:20:03.955116 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:20:03.960442 systemd-logind[1498]: New session 26 of user core. Sep 12 10:20:03.966219 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 10:20:04.614776 sshd[4462]: Connection closed by 10.0.0.1 port 46184 Sep 12 10:20:04.615566 sshd-session[4459]: pam_unix(sshd:session): session closed for user core Sep 12 10:20:04.634790 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:46192.service - OpenSSH per-connection server daemon (10.0.0.1:46192). Sep 12 10:20:04.636315 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:46184.service: Deactivated successfully. Sep 12 10:20:04.640773 kubelet[2613]: E0912 10:20:04.637932 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" containerName="apply-sysctl-overwrites" Sep 12 10:20:04.640773 kubelet[2613]: E0912 10:20:04.637967 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c86fbb4d-7274-4976-aeba-18627e0d63d7" containerName="cilium-operator" Sep 12 10:20:04.640773 kubelet[2613]: E0912 10:20:04.637979 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" containerName="clean-cilium-state" Sep 12 10:20:04.640773 kubelet[2613]: E0912 10:20:04.637987 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" containerName="cilium-agent" Sep 12 10:20:04.640773 kubelet[2613]: E0912 10:20:04.637996 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" containerName="mount-cgroup" Sep 12 10:20:04.640773 kubelet[2613]: E0912 10:20:04.638003 2613 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" containerName="mount-bpf-fs" Sep 12 10:20:04.640773 kubelet[2613]: I0912 10:20:04.638047 2613 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86fbb4d-7274-4976-aeba-18627e0d63d7" containerName="cilium-operator" Sep 12 10:20:04.640773 kubelet[2613]: I0912 10:20:04.638078 2613 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb45aa0-f13d-4916-b670-a8a588d62186" containerName="cilium-agent" Sep 12 10:20:04.642895 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 10:20:04.644817 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit. Sep 12 10:20:04.651664 systemd-logind[1498]: Removed session 26. 
Sep 12 10:20:04.659626 systemd[1]: Created slice kubepods-burstable-pod02200840_8188_4f5a_abc1_b4fdf43d081a.slice - libcontainer container kubepods-burstable-pod02200840_8188_4f5a_abc1_b4fdf43d081a.slice. Sep 12 10:20:04.676537 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 46192 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:20:04.678407 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:20:04.684925 systemd-logind[1498]: New session 27 of user core. Sep 12 10:20:04.688210 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 10:20:04.709183 kubelet[2613]: I0912 10:20:04.709139 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-etc-cni-netd\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709183 kubelet[2613]: I0912 10:20:04.709181 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02200840-8188-4f5a-abc1-b4fdf43d081a-cilium-ipsec-secrets\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709316 kubelet[2613]: I0912 10:20:04.709199 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dmf\" (UniqueName: \"kubernetes.io/projected/02200840-8188-4f5a-abc1-b4fdf43d081a-kube-api-access-s2dmf\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709316 kubelet[2613]: I0912 10:20:04.709220 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-xtables-lock\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709316 kubelet[2613]: I0912 10:20:04.709245 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02200840-8188-4f5a-abc1-b4fdf43d081a-clustermesh-secrets\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709316 kubelet[2613]: I0912 10:20:04.709262 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-host-proc-sys-net\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709406 kubelet[2613]: I0912 10:20:04.709276 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02200840-8188-4f5a-abc1-b4fdf43d081a-hubble-tls\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709491 kubelet[2613]: I0912 10:20:04.709438 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-cilium-run\") pod 
\"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709611 kubelet[2613]: I0912 10:20:04.709577 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-bpf-maps\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709672 kubelet[2613]: I0912 10:20:04.709620 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-cni-path\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709672 kubelet[2613]: I0912 10:20:04.709648 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-hostproc\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709672 kubelet[2613]: I0912 10:20:04.709663 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-cilium-cgroup\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709744 kubelet[2613]: I0912 10:20:04.709680 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02200840-8188-4f5a-abc1-b4fdf43d081a-cilium-config-path\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709744 kubelet[2613]: I0912 10:20:04.709704 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-lib-modules\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.709744 kubelet[2613]: I0912 10:20:04.709722 2613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02200840-8188-4f5a-abc1-b4fdf43d081a-host-proc-sys-kernel\") pod \"cilium-bbsv5\" (UID: \"02200840-8188-4f5a-abc1-b4fdf43d081a\") " pod="kube-system/cilium-bbsv5" Sep 12 10:20:04.740603 sshd[4476]: Connection closed by 10.0.0.1 port 46192 Sep 12 10:20:04.740911 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Sep 12 10:20:04.755276 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:46192.service: Deactivated successfully. Sep 12 10:20:04.757704 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 10:20:04.759625 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit. Sep 12 10:20:04.773398 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:46194.service - OpenSSH per-connection server daemon (10.0.0.1:46194). Sep 12 10:20:04.774404 systemd-logind[1498]: Removed session 27. 
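The reconciler_common.go entries above enumerate the volumes attached for the new cilium-bbsv5 pod: host paths (bpf-maps, cilium-run, cni-path, hostproc, cilium-cgroup, lib-modules, xtables-lock, etc-cni-netd, host-proc-sys-net, host-proc-sys-kernel), secrets (clustermesh-secrets, cilium-ipsec-secrets), a config map (cilium-config-path), and projected volumes (hubble-tls, kube-api-access-s2dmf). A short sketch of how the kubernetes.io/host-path, /secret and /configmap UniqueName prefixes in those lines map onto PodSpec volume sources; the host path and object names below are assumptions, not the actual Cilium manifest:

    // Sketch only: volume sources matching the UniqueName prefixes above.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        volumes := []corev1.Volume{
            // kubernetes.io/host-path -> HostPathVolumeSource (path is assumed)
            {Name: "bpf-maps", VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"}}},
            // kubernetes.io/configmap -> ConfigMapVolumeSource (name is assumed)
            {Name: "cilium-config-path", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}}}},
            // kubernetes.io/secret -> SecretVolumeSource (name is assumed)
            {Name: "clustermesh-secrets", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}}},
        }
        for _, v := range volumes {
            fmt.Printf("volume %q -> %+v\n", v.Name, v.VolumeSource)
        }
    }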
Sep 12 10:20:04.811573 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 46194 ssh2: RSA SHA256:a1Njz4TYaVO8dDak9HFmfQ3eRNCKIjkrhaaXM0zSnF8 Sep 12 10:20:04.814095 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 10:20:04.830313 systemd-logind[1498]: New session 28 of user core. Sep 12 10:20:04.842207 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 10:20:04.963575 kubelet[2613]: E0912 10:20:04.963424 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:04.964113 containerd[1518]: time="2025-09-12T10:20:04.964072819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbsv5,Uid:02200840-8188-4f5a-abc1-b4fdf43d081a,Namespace:kube-system,Attempt:0,}" Sep 12 10:20:04.988316 containerd[1518]: time="2025-09-12T10:20:04.988168077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 10:20:04.988316 containerd[1518]: time="2025-09-12T10:20:04.988258389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 10:20:04.988316 containerd[1518]: time="2025-09-12T10:20:04.988271674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:20:04.988519 containerd[1518]: time="2025-09-12T10:20:04.988384259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 10:20:05.021214 systemd[1]: Started cri-containerd-c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196.scope - libcontainer container c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196. Sep 12 10:20:05.048856 containerd[1518]: time="2025-09-12T10:20:05.048803221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbsv5,Uid:02200840-8188-4f5a-abc1-b4fdf43d081a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\"" Sep 12 10:20:05.049618 kubelet[2613]: E0912 10:20:05.049570 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:05.052156 containerd[1518]: time="2025-09-12T10:20:05.052099456Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 10:20:05.065893 containerd[1518]: time="2025-09-12T10:20:05.065846259Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f\"" Sep 12 10:20:05.066452 containerd[1518]: time="2025-09-12T10:20:05.066420031Z" level=info msg="StartContainer for \"4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f\"" Sep 12 10:20:05.098197 systemd[1]: Started cri-containerd-4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f.scope - libcontainer container 4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f. 
Sep 12 10:20:05.127413 containerd[1518]: time="2025-09-12T10:20:05.127335571Z" level=info msg="StartContainer for \"4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f\" returns successfully" Sep 12 10:20:05.130363 kubelet[2613]: E0912 10:20:05.130333 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:05.139863 systemd[1]: cri-containerd-4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f.scope: Deactivated successfully. Sep 12 10:20:05.175674 containerd[1518]: time="2025-09-12T10:20:05.175596244Z" level=info msg="shim disconnected" id=4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f namespace=k8s.io Sep 12 10:20:05.175674 containerd[1518]: time="2025-09-12T10:20:05.175659295Z" level=warning msg="cleaning up after shim disconnected" id=4baf9d04ea32ef854a02ece1e91ebf70db5ea8cef48c706bbb618184ff99527f namespace=k8s.io Sep 12 10:20:05.175674 containerd[1518]: time="2025-09-12T10:20:05.175668161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:06.133829 kubelet[2613]: E0912 10:20:06.133783 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:06.136172 containerd[1518]: time="2025-09-12T10:20:06.135827045Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 10:20:06.165026 containerd[1518]: time="2025-09-12T10:20:06.164974501Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca\"" Sep 12 10:20:06.169787 containerd[1518]: time="2025-09-12T10:20:06.169749736Z" level=info msg="StartContainer for \"c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca\"" Sep 12 10:20:06.206189 systemd[1]: Started cri-containerd-c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca.scope - libcontainer container c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca. Sep 12 10:20:06.232938 containerd[1518]: time="2025-09-12T10:20:06.232892210Z" level=info msg="StartContainer for \"c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca\" returns successfully" Sep 12 10:20:06.241407 systemd[1]: cri-containerd-c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca.scope: Deactivated successfully. Sep 12 10:20:06.266576 containerd[1518]: time="2025-09-12T10:20:06.266508539Z" level=info msg="shim disconnected" id=c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca namespace=k8s.io Sep 12 10:20:06.266576 containerd[1518]: time="2025-09-12T10:20:06.266564625Z" level=warning msg="cleaning up after shim disconnected" id=c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca namespace=k8s.io Sep 12 10:20:06.266576 containerd[1518]: time="2025-09-12T10:20:06.266573613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:06.818319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9c6865125b0a9a92ca5e9a7a7ad80cc0cbb25d8ca6bf4eb247d54930aff3aca-rootfs.mount: Deactivated successfully. 
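The containerd entries above trace the CRI flow for the new pod: RunPodSandbox returns sandbox id c100288e2..., the mount-cgroup init container is created inside that sandbox and started, and once it exits the transient cri-containerd scope is deactivated and the shim disconnects. A minimal sketch of that RunPodSandbox -> CreateContainer -> StartContainer sequence against the CRI runtime service; it is a schematic of the calls the kubelet drives, with the request contents trimmed to the fields visible in the log and the gRPC wiring omitted:

    // Schematic of the CRI call sequence seen above; not kubelet code.
    package main

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func runFirstInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
        // RunPodSandbox corresponds to the "RunPodSandbox for &PodSandboxMetadata{...}" lines.
        sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-bbsv5",
                    Uid:       "02200840-8188-4f5a-abc1-b4fdf43d081a",
                    Namespace: "kube-system",
                },
            },
        })
        if err != nil {
            return err
        }
        // CreateContainer/StartContainer correspond to the mount-cgroup lines above.
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandbox.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
            },
        })
        if err != nil {
            return err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
        return err
    }

    func main() {} // connecting to a real CRI socket is left out of this sketch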
Sep 12 10:20:07.138939 kubelet[2613]: E0912 10:20:07.138689 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:07.140836 containerd[1518]: time="2025-09-12T10:20:07.140774086Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 10:20:07.167653 containerd[1518]: time="2025-09-12T10:20:07.167586410Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476\"" Sep 12 10:20:07.168255 containerd[1518]: time="2025-09-12T10:20:07.168203384Z" level=info msg="StartContainer for \"a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476\"" Sep 12 10:20:07.208220 systemd[1]: Started cri-containerd-a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476.scope - libcontainer container a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476. Sep 12 10:20:07.249576 containerd[1518]: time="2025-09-12T10:20:07.249530692Z" level=info msg="StartContainer for \"a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476\" returns successfully" Sep 12 10:20:07.254501 systemd[1]: cri-containerd-a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476.scope: Deactivated successfully. Sep 12 10:20:07.282560 containerd[1518]: time="2025-09-12T10:20:07.282490278Z" level=info msg="shim disconnected" id=a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476 namespace=k8s.io Sep 12 10:20:07.282560 containerd[1518]: time="2025-09-12T10:20:07.282555633Z" level=warning msg="cleaning up after shim disconnected" id=a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476 namespace=k8s.io Sep 12 10:20:07.282560 containerd[1518]: time="2025-09-12T10:20:07.282564830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:07.818157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6fa19cf5126f5d3baff304d5cf61b2e51a9cf46b303ddda39d4c40581c7c476-rootfs.mount: Deactivated successfully. 
Sep 12 10:20:07.942277 kubelet[2613]: E0912 10:20:07.942191 2613 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 10:20:08.143450 kubelet[2613]: E0912 10:20:08.143296 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:08.146132 containerd[1518]: time="2025-09-12T10:20:08.146037714Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 10:20:08.164941 containerd[1518]: time="2025-09-12T10:20:08.164891079Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8\"" Sep 12 10:20:08.165677 containerd[1518]: time="2025-09-12T10:20:08.165519645Z" level=info msg="StartContainer for \"175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8\"" Sep 12 10:20:08.207281 systemd[1]: Started cri-containerd-175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8.scope - libcontainer container 175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8. Sep 12 10:20:08.238175 systemd[1]: cri-containerd-175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8.scope: Deactivated successfully. Sep 12 10:20:08.240850 containerd[1518]: time="2025-09-12T10:20:08.240814810Z" level=info msg="StartContainer for \"175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8\" returns successfully" Sep 12 10:20:08.269262 containerd[1518]: time="2025-09-12T10:20:08.269188376Z" level=info msg="shim disconnected" id=175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8 namespace=k8s.io Sep 12 10:20:08.269262 containerd[1518]: time="2025-09-12T10:20:08.269254703Z" level=warning msg="cleaning up after shim disconnected" id=175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8 namespace=k8s.io Sep 12 10:20:08.269262 containerd[1518]: time="2025-09-12T10:20:08.269264802Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 10:20:08.817345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-175ec352ede5c79106b8ee1bebd713bb4580e98814f5a9f519d1d5f9c531d8f8-rootfs.mount: Deactivated successfully. 
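The "Container runtime network not ready ... cni plugin not initialized" error above persists while the init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) run, and it surfaces on the node object as a Ready=False condition with reason KubeletNotReady, as logged a little later at 10:20:09. A small client-go sketch for inspecting that condition from outside the node; the kubeconfig path is an assumption, and this is an observer's helper rather than anything the kubelet itself runs:

    // Sketch: read the node Ready condition that the log lines above describe.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the cluster at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == "Ready" {
                // While the CNI plugin is uninitialized this reports
                // Ready=False with reason KubeletNotReady, matching the log.
                fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }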
Sep 12 10:20:09.150041 kubelet[2613]: E0912 10:20:09.149338 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:09.152891 containerd[1518]: time="2025-09-12T10:20:09.151654118Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 10:20:09.175999 containerd[1518]: time="2025-09-12T10:20:09.175935178Z" level=info msg="CreateContainer within sandbox \"c100288e2bdbad2f7ef0d2a558059f314e052bbd294a750b31b58f96c248c196\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2a4bda42e4b1a2fe0fa2c96b062992b458279985454ba0546d8045c749e4088\"" Sep 12 10:20:09.176706 containerd[1518]: time="2025-09-12T10:20:09.176669834Z" level=info msg="StartContainer for \"b2a4bda42e4b1a2fe0fa2c96b062992b458279985454ba0546d8045c749e4088\"" Sep 12 10:20:09.221574 systemd[1]: Started cri-containerd-b2a4bda42e4b1a2fe0fa2c96b062992b458279985454ba0546d8045c749e4088.scope - libcontainer container b2a4bda42e4b1a2fe0fa2c96b062992b458279985454ba0546d8045c749e4088. Sep 12 10:20:09.268612 containerd[1518]: time="2025-09-12T10:20:09.268531160Z" level=info msg="StartContainer for \"b2a4bda42e4b1a2fe0fa2c96b062992b458279985454ba0546d8045c749e4088\" returns successfully" Sep 12 10:20:09.480140 kubelet[2613]: I0912 10:20:09.479928 2613 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T10:20:09Z","lastTransitionTime":"2025-09-12T10:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 10:20:09.823100 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 12 10:20:09.865757 kubelet[2613]: E0912 10:20:09.865661 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:10.154145 kubelet[2613]: E0912 10:20:10.154005 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:10.169047 kubelet[2613]: I0912 10:20:10.168424 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bbsv5" podStartSLOduration=6.168395828 podStartE2EDuration="6.168395828s" podCreationTimestamp="2025-09-12 10:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 10:20:10.168192201 +0000 UTC m=+92.548963963" watchObservedRunningTime="2025-09-12 10:20:10.168395828 +0000 UTC m=+92.549167590" Sep 12 10:20:11.155721 kubelet[2613]: E0912 10:20:11.155686 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:12.866067 kubelet[2613]: E0912 10:20:12.866014 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:12.932001 systemd-networkd[1447]: lxc_health: Link UP Sep 
12 10:20:12.937470 systemd-networkd[1447]: lxc_health: Gained carrier Sep 12 10:20:12.968085 kubelet[2613]: E0912 10:20:12.967919 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:13.161790 kubelet[2613]: E0912 10:20:13.161634 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:13.212401 systemd[1]: run-containerd-runc-k8s.io-b2a4bda42e4b1a2fe0fa2c96b062992b458279985454ba0546d8045c749e4088-runc.IhIXkd.mount: Deactivated successfully. Sep 12 10:20:14.163954 kubelet[2613]: E0912 10:20:14.163917 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:14.715338 systemd-networkd[1447]: lxc_health: Gained IPv6LL Sep 12 10:20:14.866000 kubelet[2613]: E0912 10:20:14.865973 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 10:20:19.609247 sshd[4490]: Connection closed by 10.0.0.1 port 46194 Sep 12 10:20:19.609792 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Sep 12 10:20:19.614519 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:46194.service: Deactivated successfully. Sep 12 10:20:19.616969 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 10:20:19.617884 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit. Sep 12 10:20:19.618840 systemd-logind[1498]: Removed session 28.
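The pod_startup_latency_tracker line at 10:20:10 reports podStartSLOduration=6.168395828s for cilium-bbsv5 with zero-valued image-pull timestamps; numerically that figure equals watchObservedRunningTime (10:20:10.168395828) minus podCreationTimestamp (10:20:04). Whether the tracker computes it exactly that way is an assumption; the subtraction itself checks out:

    // Quick arithmetic check of the podStartSLOduration figure in the log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-09-12T10:20:04Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-09-12T10:20:10.168395828Z")
        fmt.Println(observed.Sub(created)) // prints 6.168395828s, matching the logged value
    }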