Apr 30 12:39:49.988708 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025
Apr 30 12:39:49.988747 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:39:49.988759 kernel: BIOS-provided physical RAM map:
Apr 30 12:39:49.988766 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 12:39:49.988772 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 12:39:49.988787 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 12:39:49.988795 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 12:39:49.988802 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 12:39:49.988808 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 30 12:39:49.988815 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 30 12:39:49.988822 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 30 12:39:49.988831 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 30 12:39:49.988840 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 30 12:39:49.988847 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 30 12:39:49.988858 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 30 12:39:49.988866 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 12:39:49.988876 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Apr 30 12:39:49.988883 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Apr 30 12:39:49.988890 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Apr 30 12:39:49.988897 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Apr 30 12:39:49.988904 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 30 12:39:49.988911 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 12:39:49.988918 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 12:39:49.988925 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 12:39:49.988932 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 30 12:39:49.988939 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 12:39:49.988946 kernel: NX (Execute Disable) protection: active
Apr 30 12:39:49.988955 kernel: APIC: Static calls initialized
Apr 30 12:39:49.988963 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Apr 30 12:39:49.988971 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Apr 30 12:39:49.988980 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Apr 30 12:39:49.988989 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Apr 30 12:39:49.988998 kernel: extended physical RAM map:
Apr 30 12:39:49.989007 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 30 12:39:49.989014 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 30 12:39:49.989021 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 30 12:39:49.989029 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 30 12:39:49.989036 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 30 12:39:49.989043 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 30 12:39:49.989053 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 30 12:39:49.989064 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Apr 30 12:39:49.989071 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Apr 30 12:39:49.989078 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Apr 30 12:39:49.989086 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Apr 30 12:39:49.989093 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Apr 30 12:39:49.989106 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 30 12:39:49.989113 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 30 12:39:49.989120 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 30 12:39:49.989128 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 30 12:39:49.989135 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 30 12:39:49.989142 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Apr 30 12:39:49.989160 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Apr 30 12:39:49.989183 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Apr 30 12:39:49.989192 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Apr 30 12:39:49.989203 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 30 12:39:49.989210 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 30 12:39:49.989218 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 30 12:39:49.989225 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 30 12:39:49.989235 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 30 12:39:49.989251 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 30 12:39:49.989260 kernel: efi: EFI v2.7 by EDK II
Apr 30 12:39:49.989270 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Apr 30 12:39:49.989279 kernel: random: crng init done
Apr 30 12:39:49.989286 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 30 12:39:49.989294 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 30 12:39:49.989304 kernel: secureboot: Secure boot disabled
Apr 30 12:39:49.989316 kernel: SMBIOS 2.8 present.
Apr 30 12:39:49.989326 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 30 12:39:49.989333 kernel: Hypervisor detected: KVM
Apr 30 12:39:49.989342 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 30 12:39:49.989350 kernel: kvm-clock: using sched offset of 4135392344 cycles
Apr 30 12:39:49.989359 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 30 12:39:49.989366 kernel: tsc: Detected 2794.748 MHz processor
Apr 30 12:39:49.989374 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 30 12:39:49.989382 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 30 12:39:49.989389 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Apr 30 12:39:49.989400 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 30 12:39:49.989407 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 30 12:39:49.989415 kernel: Using GB pages for direct mapping
Apr 30 12:39:49.989423 kernel: ACPI: Early table checksum verification disabled
Apr 30 12:39:49.989430 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 30 12:39:49.989438 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 12:39:49.989446 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:49.989453 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:49.989461 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 30 12:39:49.989471 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:49.989479 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:49.989486 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:49.989494 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 12:39:49.989501 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 30 12:39:49.989509 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 30 12:39:49.989516 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Apr 30 12:39:49.989524 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 30 12:39:49.989536 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 30 12:39:49.989546 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 30 12:39:49.989556 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 30 12:39:49.989566 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 30 12:39:49.989591 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 30 12:39:49.989617 kernel: No NUMA configuration found
Apr 30 12:39:49.989626 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 30 12:39:49.989634 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Apr 30 12:39:49.989641 kernel: Zone ranges:
Apr 30 12:39:49.989649 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 30 12:39:49.989660 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 30 12:39:49.989668 kernel: Normal empty
Apr 30 12:39:49.989679 kernel: Movable zone start for each node
Apr 30 12:39:49.989686 kernel: Early memory node ranges
Apr 30 12:39:49.989694 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 30 12:39:49.989702 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 30 12:39:49.989709 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 30 12:39:49.989716 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 30 12:39:49.989724 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 30 12:39:49.989734 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 30 12:39:49.989741 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Apr 30 12:39:49.989749 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Apr 30 12:39:49.989756 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 30 12:39:49.989764 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 12:39:49.989771 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 30 12:39:49.989795 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 30 12:39:49.989805 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 30 12:39:49.989812 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 30 12:39:49.989820 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 30 12:39:49.989828 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 30 12:39:49.989839 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 30 12:39:49.989849 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 30 12:39:49.989857 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 30 12:39:49.989864 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 30 12:39:49.989873 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 30 12:39:49.989880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 30 12:39:49.989891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 30 12:39:49.989898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 30 12:39:49.989906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 30 12:39:49.989914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 30 12:39:49.989922 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 30 12:39:49.989929 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 30 12:39:49.989937 kernel: TSC deadline timer available
Apr 30 12:39:49.989945 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 30 12:39:49.989953 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 30 12:39:49.989963 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 30 12:39:49.989971 kernel: kvm-guest: setup PV sched yield
Apr 30 12:39:49.989979 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 30 12:39:49.989986 kernel: Booting paravirtualized kernel on KVM
Apr 30 12:39:49.989995 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 30 12:39:49.990002 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 30 12:39:49.990010 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Apr 30 12:39:49.990018 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Apr 30 12:39:49.990026 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 30 12:39:49.990036 kernel: kvm-guest: PV spinlocks enabled
Apr 30 12:39:49.990044 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 30 12:39:49.990053 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe
Apr 30 12:39:49.990062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 12:39:49.990069 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 12:39:49.990080 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 12:39:49.990088 kernel: Fallback order for Node 0: 0
Apr 30 12:39:49.990096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Apr 30 12:39:49.990106 kernel: Policy zone: DMA32
Apr 30 12:39:49.990114 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 12:39:49.990122 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 177824K reserved, 0K cma-reserved)
Apr 30 12:39:49.990130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 12:39:49.990137 kernel: ftrace: allocating 37918 entries in 149 pages
Apr 30 12:39:49.990145 kernel: ftrace: allocated 149 pages with 4 groups
Apr 30 12:39:49.990153 kernel: Dynamic Preempt: voluntary
Apr 30 12:39:49.990161 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 12:39:49.990169 kernel: rcu: RCU event tracing is enabled.
Apr 30 12:39:49.990180 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 12:39:49.990188 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 12:39:49.990195 kernel: Rude variant of Tasks RCU enabled.
Apr 30 12:39:49.990203 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 12:39:49.990211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 12:39:49.990219 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 12:39:49.990227 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 30 12:39:49.990235 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 12:39:49.990242 kernel: Console: colour dummy device 80x25
Apr 30 12:39:49.990253 kernel: printk: console [ttyS0] enabled
Apr 30 12:39:49.990260 kernel: ACPI: Core revision 20230628
Apr 30 12:39:49.990268 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 30 12:39:49.990276 kernel: APIC: Switch to symmetric I/O mode setup
Apr 30 12:39:49.990284 kernel: x2apic enabled
Apr 30 12:39:49.990292 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 30 12:39:49.990302 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 30 12:39:49.990310 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 30 12:39:49.990318 kernel: kvm-guest: setup PV IPIs
Apr 30 12:39:49.990331 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 30 12:39:49.990342 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 30 12:39:49.990353 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Apr 30 12:39:49.990364 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 30 12:39:49.990375 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 30 12:39:49.990384 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 30 12:39:49.990393 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 30 12:39:49.990401 kernel: Spectre V2 : Mitigation: Retpolines
Apr 30 12:39:49.990409 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 30 12:39:49.990419 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 30 12:39:49.990427 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Apr 30 12:39:49.990435 kernel: RETBleed: Mitigation: untrained return thunk
Apr 30 12:39:49.990443 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 30 12:39:49.990451 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 30 12:39:49.990459 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 30 12:39:49.990467 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 30 12:39:49.990478 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 30 12:39:49.990489 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 30 12:39:49.990497 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 30 12:39:49.990505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 30 12:39:49.990513 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 30 12:39:49.990520 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Apr 30 12:39:49.990528 kernel: Freeing SMP alternatives memory: 32K
Apr 30 12:39:49.990536 kernel: pid_max: default: 32768 minimum: 301
Apr 30 12:39:49.990544 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 12:39:49.990552 kernel: landlock: Up and running.
Apr 30 12:39:49.990562 kernel: SELinux: Initializing.
Apr 30 12:39:49.990570 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:39:49.990602 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:39:49.990613 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Apr 30 12:39:49.990622 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 12:39:49.990630 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 12:39:49.990638 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 12:39:49.990646 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 30 12:39:49.990654 kernel: ... version:                0
Apr 30 12:39:49.990665 kernel: ... bit width:              48
Apr 30 12:39:49.990673 kernel: ... generic registers:      6
Apr 30 12:39:49.990682 kernel: ... value mask:             0000ffffffffffff
Apr 30 12:39:49.990690 kernel: ... max period:             00007fffffffffff
Apr 30 12:39:49.990698 kernel: ... fixed-purpose events:   0
Apr 30 12:39:49.990707 kernel: ... event mask:             000000000000003f
Apr 30 12:39:49.990717 kernel: signal: max sigframe size: 1776
Apr 30 12:39:49.990727 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 12:39:49.990738 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 12:39:49.990752 kernel: smp: Bringing up secondary CPUs ...
Apr 30 12:39:49.990763 kernel: smpboot: x86: Booting SMP configuration:
Apr 30 12:39:49.990774 kernel: .... node #0, CPUs: #1 #2 #3
Apr 30 12:39:49.990794 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 12:39:49.990805 kernel: smpboot: Max logical packages: 1
Apr 30 12:39:49.990816 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Apr 30 12:39:49.990827 kernel: devtmpfs: initialized
Apr 30 12:39:49.990838 kernel: x86/mm: Memory block size: 128MB
Apr 30 12:39:49.990849 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 30 12:39:49.990859 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 30 12:39:49.990875 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 30 12:39:49.990886 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 30 12:39:49.990897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Apr 30 12:39:49.990907 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 30 12:39:49.990918 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 12:39:49.990928 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 12:39:49.990939 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 12:39:49.990949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 12:39:49.990965 kernel: audit: initializing netlink subsys (disabled)
Apr 30 12:39:49.990976 kernel: audit: type=2000 audit(1746016788.756:1): state=initialized audit_enabled=0 res=1
Apr 30 12:39:49.990986 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 12:39:49.990994 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 30 12:39:49.991002 kernel: cpuidle: using governor menu
Apr 30 12:39:49.991010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 12:39:49.991017 kernel: dca service started, version 1.12.1
Apr 30 12:39:49.991025 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 30 12:39:49.991033 kernel: PCI: Using configuration type 1 for base access
Apr 30 12:39:49.991044 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 30 12:39:49.991052 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 12:39:49.991060 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 12:39:49.991068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 12:39:49.991075 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 12:39:49.991083 kernel: ACPI: Added _OSI(Module Device)
Apr 30 12:39:49.991091 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 12:39:49.991099 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 12:39:49.991106 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 12:39:49.991118 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 12:39:49.991129 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 30 12:39:49.991139 kernel: ACPI: Interpreter enabled
Apr 30 12:39:49.991149 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 30 12:39:49.991159 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 30 12:39:49.991170 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 30 12:39:49.991178 kernel: PCI: Using E820 reservations for host bridge windows
Apr 30 12:39:49.991186 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 30 12:39:49.991193 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 12:39:49.991430 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 12:39:49.991605 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 30 12:39:49.991752 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 30 12:39:49.991763 kernel: PCI host bridge to bus 0000:00
Apr 30 12:39:49.991922 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 30 12:39:49.992045 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 30 12:39:49.992182 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 30 12:39:49.992329 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 30 12:39:49.992460 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 30 12:39:49.992612 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 30 12:39:49.992744 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 12:39:49.992936 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 30 12:39:49.993118 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 30 12:39:49.993268 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Apr 30 12:39:49.993406 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Apr 30 12:39:49.993552 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 30 12:39:49.993724 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 30 12:39:49.993867 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 30 12:39:49.994021 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 12:39:49.994166 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Apr 30 12:39:49.994314 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Apr 30 12:39:49.994446 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 30 12:39:49.994642 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 30 12:39:49.994798 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Apr 30 12:39:49.994931 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Apr 30 12:39:49.995060 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 30 12:39:49.995214 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 30 12:39:49.995359 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Apr 30 12:39:49.995501 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Apr 30 12:39:49.995661 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 30 12:39:49.995816 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Apr 30 12:39:49.995971 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 30 12:39:49.996123 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 30 12:39:49.996286 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 30 12:39:49.996440 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Apr 30 12:39:49.996651 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Apr 30 12:39:49.996823 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 30 12:39:49.996975 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Apr 30 12:39:49.996987 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 30 12:39:49.996995 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 30 12:39:49.997008 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 30 12:39:49.997016 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 30 12:39:49.997024 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 30 12:39:49.997032 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 30 12:39:49.997040 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 30 12:39:49.997048 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 30 12:39:49.997055 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 30 12:39:49.997063 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 30 12:39:49.997071 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 30 12:39:49.997082 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 30 12:39:49.997089 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 30 12:39:49.997097 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 30 12:39:49.997105 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 30 12:39:49.997113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 30 12:39:49.997121 kernel: iommu: Default domain type: Translated
Apr 30 12:39:49.997129 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 30 12:39:49.997136 kernel: efivars: Registered efivars operations
Apr 30 12:39:49.997144 kernel: PCI: Using ACPI for IRQ routing
Apr 30 12:39:49.997152 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 30 12:39:49.997162 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 30 12:39:49.997170 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 30 12:39:49.997178 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Apr 30 12:39:49.997185 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Apr 30 12:39:49.997193 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 30 12:39:49.997201 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 30 12:39:49.997208 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Apr 30 12:39:49.997216 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 30 12:39:49.997359 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 30 12:39:49.997527 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 30 12:39:49.997694 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 30 12:39:49.997707 kernel: vgaarb: loaded
Apr 30 12:39:49.997715 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 30 12:39:49.997723 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 30 12:39:49.997731 kernel: clocksource: Switched to clocksource kvm-clock
Apr 30 12:39:49.997739 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 12:39:49.997747 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 12:39:49.997760 kernel: pnp: PnP ACPI init
Apr 30 12:39:49.997956 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 30 12:39:49.997969 kernel: pnp: PnP ACPI: found 6 devices
Apr 30 12:39:49.997977 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 30 12:39:49.997985 kernel: NET: Registered PF_INET protocol family
Apr 30 12:39:49.998012 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 12:39:49.998025 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 12:39:49.998033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 12:39:49.998044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 12:39:49.998052 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 12:39:49.998060 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 12:39:49.998068 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:39:49.998077 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:39:49.998085 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 12:39:49.998093 kernel: NET: Registered PF_XDP protocol family
Apr 30 12:39:49.998227 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Apr 30 12:39:49.998367 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Apr 30 12:39:49.998490 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 30 12:39:49.998663 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 30 12:39:49.998795 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 30 12:39:49.998915 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 30 12:39:49.999032 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 30 12:39:49.999149 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 30 12:39:49.999160 kernel: PCI: CLS 0 bytes, default 64
Apr 30 12:39:49.999174 kernel: Initialise system trusted keyrings
Apr 30 12:39:49.999182 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 12:39:49.999190 kernel: Key type asymmetric registered
Apr 30 12:39:49.999198 kernel: Asymmetric key parser 'x509' registered
Apr 30 12:39:49.999207 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 30 12:39:49.999215 kernel: io scheduler mq-deadline registered
Apr 30 12:39:49.999223 kernel: io scheduler kyber registered
Apr 30 12:39:49.999231 kernel: io scheduler bfq registered
Apr 30 12:39:49.999239 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 30 12:39:49.999251 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 30 12:39:49.999260 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 30 12:39:49.999271 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 30 12:39:49.999279 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 12:39:49.999287 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 30 12:39:49.999295 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 30 12:39:49.999308 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 30 12:39:49.999320 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 30 12:39:49.999488 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 30 12:39:49.999505 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 30 12:39:49.999675 kernel: rtc_cmos 00:04: registered as rtc0
Apr 30 12:39:49.999845 kernel: rtc_cmos 00:04: setting system clock to 2025-04-30T12:39:49 UTC (1746016789)
Apr 30 12:39:50.000006 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 30 12:39:50.000022 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 30 12:39:50.000039 kernel: efifb: probing for efifb
Apr 30 12:39:50.000050 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 30 12:39:50.000061 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 30 12:39:50.000072 kernel: efifb: scrolling: redraw
Apr 30 12:39:50.000083 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 30 12:39:50.000093 kernel: Console: switching to colour frame buffer device 160x50
Apr 30 12:39:50.000104 kernel: fb0: EFI VGA frame buffer device
Apr 30 12:39:50.000116 kernel: pstore: Using crash dump compression: deflate
Apr 30 12:39:50.000127 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 30 12:39:50.000142 kernel: NET: Registered PF_INET6 protocol family
Apr 30 12:39:50.000153 kernel: Segment Routing with IPv6
Apr 30 12:39:50.000164 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 12:39:50.000175 kernel: NET: Registered PF_PACKET protocol family
Apr 30 12:39:50.000186 kernel: Key type dns_resolver registered
Apr 30 12:39:50.000197 kernel: IPI shorthand broadcast: enabled
Apr 30 12:39:50.000209 kernel: sched_clock: Marking stable (1354002719, 170512974)->(1549351220, -24835527)
Apr 30 12:39:50.000219 kernel: registered taskstats version 1
Apr 30 12:39:50.000231 kernel: Loading compiled-in X.509 certificates
Apr 30 12:39:50.000247 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f'
Apr 30 12:39:50.000258 kernel: Key type .fscrypt registered
Apr 30 12:39:50.000269 kernel: Key type fscrypt-provisioning registered
Apr 30 12:39:50.000280 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 12:39:50.000291 kernel: ima: Allocated hash algorithm: sha1 Apr 30 12:39:50.000301 kernel: ima: No architecture policies found Apr 30 12:39:50.000312 kernel: clk: Disabling unused clocks Apr 30 12:39:50.000323 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 12:39:50.000338 kernel: Write protecting the kernel read-only data: 38912k Apr 30 12:39:50.000349 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 12:39:50.000357 kernel: Run /init as init process Apr 30 12:39:50.000366 kernel: with arguments: Apr 30 12:39:50.000374 kernel: /init Apr 30 12:39:50.000382 kernel: with environment: Apr 30 12:39:50.000390 kernel: HOME=/ Apr 30 12:39:50.000398 kernel: TERM=linux Apr 30 12:39:50.000406 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 12:39:50.000419 systemd[1]: Successfully made /usr/ read-only. Apr 30 12:39:50.000434 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 12:39:50.000443 systemd[1]: Detected virtualization kvm. Apr 30 12:39:50.000452 systemd[1]: Detected architecture x86-64. Apr 30 12:39:50.000460 systemd[1]: Running in initrd. Apr 30 12:39:50.000469 systemd[1]: No hostname configured, using default hostname. Apr 30 12:39:50.000478 systemd[1]: Hostname set to . Apr 30 12:39:50.000486 systemd[1]: Initializing machine ID from VM UUID. Apr 30 12:39:50.000498 systemd[1]: Queued start job for default target initrd.target. Apr 30 12:39:50.000506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 12:39:50.000517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 12:39:50.000529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 12:39:50.000541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 12:39:50.000554 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 12:39:50.000568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 12:39:50.000606 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 12:39:50.000619 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 12:39:50.000631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 12:39:50.000644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:39:50.000656 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:39:50.000668 systemd[1]: Reached target slices.target - Slice Units. Apr 30 12:39:50.000680 systemd[1]: Reached target swap.target - Swaps. Apr 30 12:39:50.000691 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:39:50.000707 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 12:39:50.000719 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 12:39:50.000732 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 12:39:50.000744 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 12:39:50.000756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 12:39:50.000768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 12:39:50.000788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 30 12:39:50.000801 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:39:50.000813 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 12:39:50.000829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 12:39:50.000841 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 12:39:50.000852 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 12:39:50.000863 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 12:39:50.000875 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 12:39:50.000886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:50.000897 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 12:39:50.000908 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 12:39:50.000925 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 12:39:50.000937 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:39:50.000986 systemd-journald[193]: Collecting audit messages is disabled. Apr 30 12:39:50.001019 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:39:50.001031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:50.001043 systemd-journald[193]: Journal started Apr 30 12:39:50.001076 systemd-journald[193]: Runtime Journal (/run/log/journal/4cf0e4c33c2144b486c570b20f42e237) is 6M, max 48.2M, 42.2M free. Apr 30 12:39:49.987089 systemd-modules-load[194]: Inserted module 'overlay' Apr 30 12:39:50.012200 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:39:50.018889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 30 12:39:50.018960 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 12:39:50.020602 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 12:39:50.023149 systemd-modules-load[194]: Inserted module 'br_netfilter' Apr 30 12:39:50.024287 kernel: Bridge firewalling registered Apr 30 12:39:50.025877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:39:50.028639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:39:50.030349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:39:50.031107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:39:50.042319 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:39:50.044013 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:39:50.046442 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:39:50.069893 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 12:39:50.073018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 12:39:50.086660 dracut-cmdline[228]: dracut-dracut-053 Apr 30 12:39:50.090553 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 12:39:50.113476 systemd-resolved[230]: Positive Trust Anchors: Apr 30 12:39:50.113500 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:39:50.113532 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:39:50.116753 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 30 12:39:50.118177 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:39:50.123637 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:39:50.194638 kernel: SCSI subsystem initialized Apr 30 12:39:50.204599 kernel: Loading iSCSI transport class v2.0-870. Apr 30 12:39:50.214599 kernel: iscsi: registered transport (tcp) Apr 30 12:39:50.241623 kernel: iscsi: registered transport (qla4xxx) Apr 30 12:39:50.241720 kernel: QLogic iSCSI HBA Driver Apr 30 12:39:50.301719 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 12:39:50.318908 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 12:39:50.343105 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 12:39:50.343157 kernel: device-mapper: uevent: version 1.0.3 Apr 30 12:39:50.344165 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 12:39:50.386632 kernel: raid6: avx2x4 gen() 26657 MB/s Apr 30 12:39:50.403601 kernel: raid6: avx2x2 gen() 27335 MB/s Apr 30 12:39:50.420746 kernel: raid6: avx2x1 gen() 25514 MB/s Apr 30 12:39:50.420833 kernel: raid6: using algorithm avx2x2 gen() 27335 MB/s Apr 30 12:39:50.438714 kernel: raid6: .... xor() 19570 MB/s, rmw enabled Apr 30 12:39:50.438785 kernel: raid6: using avx2x2 recovery algorithm Apr 30 12:39:50.462615 kernel: xor: automatically using best checksumming function avx Apr 30 12:39:50.613616 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 12:39:50.627058 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 12:39:50.637819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:39:50.657451 systemd-udevd[414]: Using default interface naming scheme 'v255'. Apr 30 12:39:50.663823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:39:50.669195 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 12:39:50.687983 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 30 12:39:50.721893 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 12:39:50.733808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 12:39:50.814693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:39:50.823257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 12:39:50.840237 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 12:39:50.843620 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 30 12:39:50.846453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 12:39:50.848935 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 12:39:50.859810 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 12:39:50.872617 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 30 12:39:50.906929 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 12:39:50.907100 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 12:39:50.907113 kernel: libata version 3.00 loaded. Apr 30 12:39:50.907124 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 12:39:50.907135 kernel: GPT:9289727 != 19775487 Apr 30 12:39:50.907146 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 12:39:50.907156 kernel: GPT:9289727 != 19775487 Apr 30 12:39:50.907171 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 12:39:50.907184 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:50.907195 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 12:39:50.925772 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 12:39:50.925792 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 30 12:39:50.925804 kernel: AES CTR mode by8 optimization enabled Apr 30 12:39:50.925814 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 12:39:50.925991 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 12:39:50.926144 kernel: scsi host0: ahci Apr 30 12:39:50.926333 kernel: scsi host1: ahci Apr 30 12:39:50.926501 kernel: scsi host2: ahci Apr 30 12:39:50.926697 kernel: scsi host3: ahci Apr 30 12:39:50.926869 kernel: scsi host4: ahci Apr 30 12:39:50.927024 kernel: scsi host5: ahci Apr 30 12:39:50.927200 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Apr 30 12:39:50.927213 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Apr 30 12:39:50.927229 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Apr 30 12:39:50.927240 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Apr 30 12:39:50.927251 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Apr 30 12:39:50.927262 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Apr 30 12:39:50.873420 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 12:39:50.934286 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (468) Apr 30 12:39:50.934317 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (475) Apr 30 12:39:50.912746 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 12:39:50.912983 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 12:39:50.916566 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:39:50.928726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 30 12:39:50.930871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:50.933202 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:50.945924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:39:50.962600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:39:50.982316 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 12:39:50.993890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 12:39:51.003316 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 12:39:51.005922 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 12:39:51.024060 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 12:39:51.035837 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 12:39:51.039158 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 12:39:51.046064 disk-uuid[556]: Primary Header is updated. Apr 30 12:39:51.046064 disk-uuid[556]: Secondary Entries is updated. Apr 30 12:39:51.046064 disk-uuid[556]: Secondary Header is updated. Apr 30 12:39:51.048595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:51.071490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 12:39:51.237619 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:51.237705 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:51.238593 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 30 12:39:51.239594 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:51.239608 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 30 12:39:51.240640 kernel: ata3.00: applying bridge limits Apr 30 12:39:51.241595 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:51.241621 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 12:39:51.242600 kernel: ata3.00: configured for UDMA/100 Apr 30 12:39:51.243602 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 30 12:39:51.291166 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 30 12:39:51.303454 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 30 12:39:51.303477 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 30 12:39:52.060603 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 12:39:52.060793 disk-uuid[558]: The operation has completed successfully. Apr 30 12:39:52.094427 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 12:39:52.094587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 12:39:52.162867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 12:39:52.166739 sh[593]: Success Apr 30 12:39:52.180608 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 30 12:39:52.224642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 12:39:52.234518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 12:39:52.236950 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 12:39:52.254048 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 12:39:52.254085 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:52.254101 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 12:39:52.255090 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 12:39:52.255916 kernel: BTRFS info (device dm-0): using free space tree Apr 30 12:39:52.261376 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 12:39:52.264048 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 12:39:52.277774 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 12:39:52.280912 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 12:39:52.298926 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:52.298980 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:52.298996 kernel: BTRFS info (device vda6): using free space tree Apr 30 12:39:52.302624 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 12:39:52.307601 kernel: BTRFS info (device vda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:52.313812 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 12:39:52.321748 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 12:39:52.468152 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 12:39:52.483740 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 30 12:39:52.485267 ignition[681]: Ignition 2.20.0 Apr 30 12:39:52.485274 ignition[681]: Stage: fetch-offline Apr 30 12:39:52.485313 ignition[681]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:52.485325 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:52.485457 ignition[681]: parsed url from cmdline: "" Apr 30 12:39:52.485462 ignition[681]: no config URL provided Apr 30 12:39:52.485468 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 12:39:52.485478 ignition[681]: no config at "/usr/lib/ignition/user.ign" Apr 30 12:39:52.485506 ignition[681]: op(1): [started] loading QEMU firmware config module Apr 30 12:39:52.485512 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 12:39:52.494921 ignition[681]: op(1): [finished] loading QEMU firmware config module Apr 30 12:39:52.526179 systemd-networkd[778]: lo: Link UP Apr 30 12:39:52.526192 systemd-networkd[778]: lo: Gained carrier Apr 30 12:39:52.528435 systemd-networkd[778]: Enumeration completed Apr 30 12:39:52.528570 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:39:52.528813 systemd[1]: Reached target network.target - Network. Apr 30 12:39:52.529306 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:39:52.529312 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:39:52.531551 systemd-networkd[778]: eth0: Link UP Apr 30 12:39:52.531556 systemd-networkd[778]: eth0: Gained carrier Apr 30 12:39:52.531565 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 30 12:39:52.553356 ignition[681]: parsing config with SHA512: 7f3c126cd4a003f3fbd24840f86b7e284ee5ea1b75555e5b0f991d828132277a6553bb3026dc08e26bc687ebf5c92bcba7e8d58efbf36f26029a1b9d71c164c4 Apr 30 12:39:52.557656 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:39:52.558584 unknown[681]: fetched base config from "system" Apr 30 12:39:52.558592 unknown[681]: fetched user config from "qemu" Apr 30 12:39:52.559239 ignition[681]: fetch-offline: fetch-offline passed Apr 30 12:39:52.559325 ignition[681]: Ignition finished successfully Apr 30 12:39:52.564866 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 12:39:52.565165 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 12:39:52.575828 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 12:39:52.597295 ignition[787]: Ignition 2.20.0 Apr 30 12:39:52.597307 ignition[787]: Stage: kargs Apr 30 12:39:52.597458 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:52.597471 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:52.601275 ignition[787]: kargs: kargs passed Apr 30 12:39:52.601330 ignition[787]: Ignition finished successfully Apr 30 12:39:52.605964 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 12:39:52.618706 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 12:39:52.640600 ignition[796]: Ignition 2.20.0 Apr 30 12:39:52.640611 ignition[796]: Stage: disks Apr 30 12:39:52.640781 ignition[796]: no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:52.640793 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:52.641662 ignition[796]: disks: disks passed Apr 30 12:39:52.644075 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Apr 30 12:39:52.641717 ignition[796]: Ignition finished successfully Apr 30 12:39:52.645418 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 12:39:52.646954 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 12:39:52.649108 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:39:52.650131 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:39:52.651944 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:39:52.663860 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 12:39:52.676800 systemd-resolved[230]: Detected conflict on linux IN A 10.0.0.14 Apr 30 12:39:52.676817 systemd-resolved[230]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Apr 30 12:39:52.679750 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 12:39:52.686894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 12:39:52.701679 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 12:39:52.796725 kernel: EXT4-fs (vda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 12:39:52.797461 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 12:39:52.799744 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 12:39:52.819667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:39:52.822302 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 12:39:52.824750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 12:39:52.824807 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 30 12:39:52.824831 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 12:39:52.831603 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Apr 30 12:39:52.833925 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:52.833953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:52.833968 kernel: BTRFS info (device vda6): using free space tree Apr 30 12:39:52.835843 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 12:39:52.838615 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 12:39:52.839619 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 12:39:52.852724 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 12:39:52.886209 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 12:39:52.890321 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Apr 30 12:39:52.894336 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 12:39:52.898441 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 12:39:52.988751 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 12:39:52.997650 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 12:39:52.998463 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 12:39:53.008601 kernel: BTRFS info (device vda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:53.026904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 30 12:39:53.037708 ignition[928]: INFO : Ignition 2.20.0 Apr 30 12:39:53.037708 ignition[928]: INFO : Stage: mount Apr 30 12:39:53.039415 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 12:39:53.039415 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 12:39:53.039415 ignition[928]: INFO : mount: mount passed Apr 30 12:39:53.039415 ignition[928]: INFO : Ignition finished successfully Apr 30 12:39:53.041008 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 12:39:53.052653 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 12:39:53.253498 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 12:39:53.262897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 12:39:53.271595 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Apr 30 12:39:53.271625 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 12:39:53.271644 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 12:39:53.273083 kernel: BTRFS info (device vda6): using free space tree Apr 30 12:39:53.276588 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 12:39:53.277610 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 12:39:53.312792 ignition[958]: INFO : Ignition 2.20.0
Apr 30 12:39:53.312792 ignition[958]: INFO : Stage: files
Apr 30 12:39:53.314761 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:39:53.314761 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 12:39:53.314761 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 12:39:53.314761 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 12:39:53.314761 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 12:39:53.321359 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 12:39:53.321359 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 12:39:53.321359 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 12:39:53.321359 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 12:39:53.321359 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Apr 30 12:39:53.318058 unknown[958]: wrote ssh authorized keys file for user: core
Apr 30 12:39:53.487494 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 12:39:53.747624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Apr 30 12:39:53.747624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:39:53.751673 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 30 12:39:54.045832 systemd-networkd[778]: eth0: Gained IPv6LL
Apr 30 12:39:54.091084 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 12:39:54.192593 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:39:54.195012 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Apr 30 12:39:54.738458 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 12:39:56.035270 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Apr 30 12:39:56.035270 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 12:39:56.039463 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 12:39:56.059831 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 12:39:56.063887 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 12:39:56.065498 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 12:39:56.065498 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 12:39:56.065498 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 12:39:56.065498 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:39:56.065498 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:39:56.065498 ignition[958]: INFO : files: files passed
Apr 30 12:39:56.065498 ignition[958]: INFO : Ignition finished successfully
Apr 30 12:39:56.067395 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 12:39:56.082718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 12:39:56.085424 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 12:39:56.087476 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 12:39:56.087632 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 12:39:56.095341 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 12:39:56.098038 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:39:56.098038 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:39:56.101324 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:39:56.104547 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:39:56.107339 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 12:39:56.120729 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 12:39:56.146159 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 12:39:56.146297 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 12:39:56.148718 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 12:39:56.150885 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 12:39:56.151009 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 12:39:56.159729 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 12:39:56.177678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:39:56.189776 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 12:39:56.200216 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:39:56.201566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:39:56.204053 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 12:39:56.206336 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 12:39:56.206451 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:39:56.208875 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 12:39:56.210892 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 12:39:56.213167 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 12:39:56.215551 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:39:56.216749 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 12:39:56.217106 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 12:39:56.217444 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:39:56.217975 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 12:39:56.218303 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 12:39:56.218655 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 12:39:56.218956 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 12:39:56.219073 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:39:56.219833 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:39:56.220160 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:39:56.262234 ignition[1013]: INFO : Ignition 2.20.0
Apr 30 12:39:56.262234 ignition[1013]: INFO : Stage: umount
Apr 30 12:39:56.262234 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:39:56.262234 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 12:39:56.262234 ignition[1013]: INFO : umount: umount passed
Apr 30 12:39:56.262234 ignition[1013]: INFO : Ignition finished successfully
Apr 30 12:39:56.220453 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 12:39:56.220549 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:39:56.220971 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 12:39:56.221082 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:39:56.221810 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 12:39:56.221926 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:39:56.222380 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 12:39:56.222648 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 12:39:56.222746 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:39:56.223170 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 12:39:56.223495 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:39:56.223840 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:39:56.223933 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:39:56.224358 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:39:56.224442 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:39:56.224899 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 12:39:56.225009 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:39:56.225440 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 12:39:56.225543 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 12:39:56.242771 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 12:39:56.244832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 12:39:56.246299 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 12:39:56.246425 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:39:56.248801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 12:39:56.249002 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:39:56.255422 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 12:39:56.255542 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 12:39:56.261024 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 12:39:56.261186 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 12:39:56.263092 systemd[1]: Stopped target network.target - Network.
Apr 30 12:39:56.264096 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 12:39:56.264180 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 12:39:56.265993 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 12:39:56.266053 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 12:39:56.268072 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:39:56.268131 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:39:56.270334 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:39:56.270391 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:39:56.272353 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:39:56.274221 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:39:56.277927 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:39:56.282848 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:39:56.283019 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:39:56.287601 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:39:56.287933 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:39:56.288080 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:39:56.292049 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:39:56.292890 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:39:56.292952 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:39:56.303790 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:39:56.305889 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:39:56.305983 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:39:56.308509 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:39:56.308594 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:39:56.311037 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:39:56.311093 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:39:56.313561 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:39:56.313709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:39:56.316355 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:39:56.320727 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:39:56.320805 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:39:56.328682 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:39:56.328892 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:39:56.331142 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:39:56.331266 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:39:56.333862 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:39:56.333930 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:39:56.336114 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:39:56.336156 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:39:56.338192 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:39:56.338245 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:39:56.340547 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:39:56.340622 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:39:56.342145 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:39:56.342199 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:39:56.354702 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:39:56.356286 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:39:56.356343 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:39:56.358634 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 12:39:56.358688 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:39:56.359678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 12:39:56.359725 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:39:56.359999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:39:56.360042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:39:56.363784 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 12:39:56.363853 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:39:56.364248 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:39:56.364357 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:39:56.525658 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:39:56.525805 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:39:56.527881 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:39:56.529713 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:39:56.529774 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:39:56.540844 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:39:56.548616 systemd[1]: Switching root.
Apr 30 12:39:56.584445 systemd-journald[193]: Journal stopped
Apr 30 12:39:58.285352 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:39:58.285438 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:39:58.285456 kernel: SELinux: policy capability open_perms=1
Apr 30 12:39:58.285472 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:39:58.285487 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:39:58.285510 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:39:58.285526 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:39:58.285546 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:39:58.285561 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:39:58.285600 kernel: audit: type=1403 audit(1746016797.384:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:39:58.285631 systemd[1]: Successfully loaded SELinux policy in 43.674ms.
Apr 30 12:39:58.285657 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.795ms.
Apr 30 12:39:58.285681 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:39:58.285698 systemd[1]: Detected virtualization kvm.
Apr 30 12:39:58.285715 systemd[1]: Detected architecture x86-64.
Apr 30 12:39:58.285731 systemd[1]: Detected first boot.
Apr 30 12:39:58.285752 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:39:58.285770 zram_generator::config[1060]: No configuration found.
Apr 30 12:39:58.285788 kernel: Guest personality initialized and is inactive
Apr 30 12:39:58.285804 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Apr 30 12:39:58.285820 kernel: Initialized host personality
Apr 30 12:39:58.285836 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:39:58.285852 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:39:58.285870 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:39:58.285890 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:39:58.285907 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:39:58.285924 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:39:58.285942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:39:58.285959 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:39:58.285975 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:39:58.285992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:39:58.286016 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:39:58.286034 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:39:58.286056 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:39:58.286072 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:39:58.286090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:39:58.286107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:39:58.286124 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:39:58.286143 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:39:58.286160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:39:58.286178 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:39:58.286198 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 12:39:58.286215 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:39:58.286232 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:39:58.286249 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:39:58.286266 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:39:58.286283 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:39:58.286301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:39:58.286317 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:39:58.286334 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:39:58.286354 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:39:58.286371 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:39:58.286388 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:39:58.286406 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:39:58.286423 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:39:58.286439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:39:58.286456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:39:58.286473 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:39:58.286490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:39:58.286511 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:39:58.286528 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:39:58.286545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:39:58.286562 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:39:58.286600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:39:58.286617 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:39:58.286634 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:39:58.286651 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:39:58.286672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:39:58.286689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:39:58.286706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:39:58.286722 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:39:58.286739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:39:58.286757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:39:58.286774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:39:58.286791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:39:58.286808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:39:58.286828 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:39:58.286851 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:39:58.286867 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:39:58.286884 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:39:58.286903 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:39:58.286921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:39:58.286939 kernel: fuse: init (API version 7.39)
Apr 30 12:39:58.286955 kernel: loop: module loaded
Apr 30 12:39:58.286978 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:39:58.286996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:39:58.287013 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:39:58.287030 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:39:58.287047 kernel: ACPI: bus type drm_connector registered
Apr 30 12:39:58.287066 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:39:58.287083 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:39:58.287099 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:39:58.287117 systemd[1]: Stopped verity-setup.service.
Apr 30 12:39:58.287162 systemd-journald[1131]: Collecting audit messages is disabled.
Apr 30 12:39:58.287192 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 30 12:39:58.287209 systemd-journald[1131]: Journal started
Apr 30 12:39:58.287244 systemd-journald[1131]: Runtime Journal (/run/log/journal/4cf0e4c33c2144b486c570b20f42e237) is 6M, max 48.2M, 42.2M free.
Apr 30 12:39:58.037748 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:39:58.052128 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 12:39:58.052737 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:39:58.292607 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:39:58.294645 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:39:58.296082 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:39:58.297345 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:39:58.298484 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:39:58.299744 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:39:58.301011 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:39:58.302338 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:39:58.304112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:39:58.305751 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:39:58.305980 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:39:58.307595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:39:58.307824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:39:58.309497 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:39:58.309753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:39:58.311295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:39:58.311518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:39:58.313188 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 12:39:58.313413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 12:39:58.314881 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:58.315103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:58.316727 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 12:39:58.318219 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 12:39:58.319960 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 12:39:58.321607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 12:39:58.338369 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 12:39:58.347645 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 12:39:58.350141 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 12:39:58.351407 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 12:39:58.351504 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 12:39:58.353690 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 12:39:58.356129 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 12:39:58.358494 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 30 12:39:58.359763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:39:58.362204 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 12:39:58.366188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 12:39:58.368004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:39:58.370700 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 12:39:58.374689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:39:58.377206 systemd-journald[1131]: Time spent on flushing to /var/log/journal/4cf0e4c33c2144b486c570b20f42e237 is 27.241ms for 1056 entries. Apr 30 12:39:58.377206 systemd-journald[1131]: System Journal (/var/log/journal/4cf0e4c33c2144b486c570b20f42e237) is 8M, max 195.6M, 187.6M free. Apr 30 12:39:58.422427 systemd-journald[1131]: Received client request to flush runtime journal. Apr 30 12:39:58.422478 kernel: loop0: detected capacity change from 0 to 147912 Apr 30 12:39:58.379274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:39:58.384182 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 12:39:58.391093 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 12:39:58.397388 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 12:39:58.400870 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 12:39:58.402490 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Apr 30 12:39:58.404229 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 12:39:58.419458 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 12:39:58.425739 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 12:39:58.428764 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 12:39:58.440123 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Apr 30 12:39:58.440147 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Apr 30 12:39:58.441900 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 12:39:58.449863 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 12:39:58.451900 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:39:58.452941 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 12:39:58.457933 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 12:39:58.469594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 12:39:58.470918 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 12:39:58.479063 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 12:39:58.491617 kernel: loop1: detected capacity change from 0 to 138176 Apr 30 12:39:58.496401 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 12:39:58.505994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 12:39:58.524286 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Apr 30 12:39:58.524317 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. 
Apr 30 12:39:58.534683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 12:39:58.545712 kernel: loop2: detected capacity change from 0 to 218376 Apr 30 12:39:58.583610 kernel: loop3: detected capacity change from 0 to 147912 Apr 30 12:39:58.598647 kernel: loop4: detected capacity change from 0 to 138176 Apr 30 12:39:58.619162 kernel: loop5: detected capacity change from 0 to 218376 Apr 30 12:39:58.628487 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 12:39:58.629270 (sd-merge)[1208]: Merged extensions into '/usr'. Apr 30 12:39:58.634343 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 12:39:58.634358 systemd[1]: Reloading... Apr 30 12:39:58.700600 zram_generator::config[1232]: No configuration found. Apr 30 12:39:58.791338 ldconfig[1175]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 12:39:58.857380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:39:58.934747 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 12:39:58.935625 systemd[1]: Reloading finished in 300 ms. Apr 30 12:39:58.958130 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 12:39:58.960127 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 12:39:58.983621 systemd[1]: Starting ensure-sysext.service... Apr 30 12:39:58.985823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 12:39:59.037568 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Apr 30 12:39:59.037980 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 12:39:59.039223 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 12:39:59.039514 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Apr 30 12:39:59.039625 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Apr 30 12:39:59.044234 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:39:59.044248 systemd-tmpfiles[1274]: Skipping /boot Apr 30 12:39:59.045639 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Apr 30 12:39:59.045661 systemd[1]: Reloading... Apr 30 12:39:59.060939 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 12:39:59.060957 systemd-tmpfiles[1274]: Skipping /boot Apr 30 12:39:59.105702 zram_generator::config[1303]: No configuration found. Apr 30 12:39:59.223135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:39:59.290933 systemd[1]: Reloading finished in 244 ms. Apr 30 12:39:59.306094 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 12:39:59.328117 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 12:39:59.349056 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:39:59.352523 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 12:39:59.355852 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 12:39:59.359737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 30 12:39:59.372973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 12:39:59.376318 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 12:39:59.381824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:59.382039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:39:59.386324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:39:59.395037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:39:59.400971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:39:59.403795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:39:59.403961 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:39:59.408316 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 12:39:59.409380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:59.411751 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 12:39:59.414690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:39:59.415454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:39:59.417171 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Apr 30 12:39:59.418088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 30 12:39:59.418408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:39:59.421003 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:59.421353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:59.439633 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:59.439938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:39:59.440807 augenrules[1376]: No rules Apr 30 12:39:59.447968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:39:59.452999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:39:59.463742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:39:59.466999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 12:39:59.467151 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:39:59.472829 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 12:39:59.474649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:59.476242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 12:39:59.480159 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 12:39:59.482036 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:39:59.482526 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Apr 30 12:39:59.488468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 12:39:59.490840 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 12:39:59.493362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:39:59.493618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:39:59.495493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:39:59.497639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:39:59.499813 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:59.500043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:59.516137 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 12:39:59.533325 systemd[1]: Finished ensure-sysext.service. Apr 30 12:39:59.543444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:59.546607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1406) Apr 30 12:39:59.552912 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 12:39:59.554136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 12:39:59.556856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 12:39:59.564857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 12:39:59.570554 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 12:39:59.575819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 12:39:59.577064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 30 12:39:59.577115 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 12:39:59.580773 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 12:39:59.588995 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 12:39:59.590215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 12:39:59.590271 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 12:39:59.591244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 12:39:59.591622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 12:39:59.593381 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 12:39:59.593837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 12:39:59.596249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 12:39:59.596563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 12:39:59.601024 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 12:39:59.601337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 12:39:59.607921 augenrules[1419]: /sbin/augenrules: No change Apr 30 12:39:59.620274 systemd-resolved[1347]: Positive Trust Anchors: Apr 30 12:39:59.620297 systemd-resolved[1347]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:39:59.623996 augenrules[1447]: No rules Apr 30 12:39:59.620329 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:39:59.625394 systemd-resolved[1347]: Defaulting to hostname 'linux'. Apr 30 12:39:59.628882 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 12:39:59.629258 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 12:39:59.630988 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:39:59.641362 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 12:39:59.649018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:39:59.650638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 12:39:59.650750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 12:39:59.663035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 12:39:59.666669 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 30 12:39:59.672408 kernel: ACPI: button: Power Button [PWRF] Apr 30 12:39:59.673821 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Apr 30 12:39:59.700608 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 30 12:39:59.706550 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 30 12:39:59.745159 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 12:39:59.745435 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 12:39:59.746221 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 12:39:59.717351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 12:39:59.721067 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 12:39:59.722723 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 12:39:59.743227 systemd-networkd[1430]: lo: Link UP Apr 30 12:39:59.743234 systemd-networkd[1430]: lo: Gained carrier Apr 30 12:39:59.747763 systemd-networkd[1430]: Enumeration completed Apr 30 12:39:59.748112 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 12:39:59.749796 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:39:59.749808 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 12:39:59.751188 systemd-networkd[1430]: eth0: Link UP Apr 30 12:39:59.751208 systemd-networkd[1430]: eth0: Gained carrier Apr 30 12:39:59.751226 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 12:39:59.755316 systemd[1]: Reached target network.target - Network. Apr 30 12:39:59.764656 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Apr 30 12:39:59.765701 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 12:39:59.769491 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. Apr 30 12:40:00.188045 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 12:40:00.188112 systemd-timesyncd[1431]: Initial clock synchronization to Wed 2025-04-30 12:40:00.187853 UTC. Apr 30 12:40:00.188155 systemd-resolved[1347]: Clock change detected. Flushing caches. Apr 30 12:40:00.193410 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 12:40:00.195423 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:40:00.205751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 12:40:00.239030 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:40:00.292706 kernel: kvm_amd: TSC scaling supported Apr 30 12:40:00.292806 kernel: kvm_amd: Nested Virtualization enabled Apr 30 12:40:00.292820 kernel: kvm_amd: Nested Paging enabled Apr 30 12:40:00.293908 kernel: kvm_amd: LBR virtualization supported Apr 30 12:40:00.293960 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Apr 30 12:40:00.294596 kernel: kvm_amd: Virtual GIF supported Apr 30 12:40:00.320420 kernel: EDAC MC: Ver: 3.0.0 Apr 30 12:40:00.331320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 12:40:00.354186 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 12:40:00.366649 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 12:40:00.376740 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:40:00.411256 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Apr 30 12:40:00.413012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 12:40:00.414314 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:40:00.415691 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:40:00.417169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:40:00.418868 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 12:40:00.420298 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 12:40:00.421796 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 12:40:00.423139 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:40:00.423179 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:40:00.424122 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:40:00.426256 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:40:00.429717 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:40:00.434307 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:40:00.435825 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:40:00.437142 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:40:00.441813 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:40:00.443489 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:40:00.446261 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Apr 30 12:40:00.448142 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:40:00.449547 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:40:00.450665 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:40:00.451816 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:40:00.451866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:40:00.458535 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 12:40:00.461350 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:40:00.463785 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:40:00.464194 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 12:40:00.468813 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:40:00.470050 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:40:00.473294 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:40:00.475733 jq[1486]: false Apr 30 12:40:00.477441 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:40:00.481477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:40:00.483876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:40:00.491592 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:40:00.494638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 30 12:40:00.495466 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 12:40:00.498048 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:40:00.501874 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 12:40:00.505492 extend-filesystems[1487]: Found loop3 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found loop4 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found loop5 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found sr0 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda1 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda2 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda3 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found usr Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda4 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda6 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda7 Apr 30 12:40:00.505492 extend-filesystems[1487]: Found vda9 Apr 30 12:40:00.505492 extend-filesystems[1487]: Checking size of /dev/vda9 Apr 30 12:40:00.505459 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 12:40:00.523279 dbus-daemon[1485]: [system] SELinux support is enabled Apr 30 12:40:00.511863 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:40:00.538862 update_engine[1495]: I20250430 12:40:00.532855 1495 main.cc:92] Flatcar Update Engine starting Apr 30 12:40:00.538862 update_engine[1495]: I20250430 12:40:00.534984 1495 update_check_scheduler.cc:74] Next update check in 3m1s Apr 30 12:40:00.512238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 30 12:40:00.539314 jq[1497]: true Apr 30 12:40:00.515063 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:40:00.515460 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 12:40:00.523595 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:40:00.529920 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:40:00.531260 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 12:40:00.543437 jq[1511]: true Apr 30 12:40:00.559714 extend-filesystems[1487]: Resized partition /dev/vda9 Apr 30 12:40:00.569657 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:40:00.569740 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 12:40:00.571570 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 12:40:00.571623 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 12:40:00.572488 tar[1504]: linux-amd64/LICENSE Apr 30 12:40:00.573456 tar[1504]: linux-amd64/helm Apr 30 12:40:00.575236 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:40:00.576652 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024) Apr 30 12:40:00.582278 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:40:00.591431 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 30 12:40:00.599853 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 30 12:40:00.680289 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 30 12:40:00.882463 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1386) Apr 30 12:40:00.883545 systemd-logind[1493]: Watching system buttons on /dev/input/event1 (Power Button) Apr 30 12:40:00.883880 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 12:40:00.896399 systemd-logind[1493]: New seat seat0. Apr 30 12:40:00.900733 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 12:40:00.900733 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 12:40:00.900733 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 30 12:40:00.900124 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 12:40:00.908274 extend-filesystems[1487]: Resized filesystem in /dev/vda9 Apr 30 12:40:00.900506 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 12:40:00.903521 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 12:40:00.956380 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:40:00.960461 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 12:40:00.970884 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 30 12:40:01.059760 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:40:01.127808 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 12:40:01.201526 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 12:40:01.225948 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 12:40:01.233788 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 30 12:40:01.234102 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 12:40:01.245452 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 12:40:01.279201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 12:40:01.307991 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 12:40:01.312401 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 12:40:01.313971 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 12:40:01.561374 containerd[1515]: time="2025-04-30T12:40:01.561182310Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 12:40:01.592677 containerd[1515]: time="2025-04-30T12:40:01.592588629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.619265 containerd[1515]: time="2025-04-30T12:40:01.619177863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:40:01.619265 containerd[1515]: time="2025-04-30T12:40:01.619252964Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 12:40:01.619265 containerd[1515]: time="2025-04-30T12:40:01.619274735Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 12:40:01.619787 containerd[1515]: time="2025-04-30T12:40:01.619751299Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 12:40:01.619849 containerd[1515]: time="2025-04-30T12:40:01.619780113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.619973 containerd[1515]: time="2025-04-30T12:40:01.619930655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:40:01.620011 containerd[1515]: time="2025-04-30T12:40:01.619969798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.620474 containerd[1515]: time="2025-04-30T12:40:01.620411647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:40:01.620515 containerd[1515]: time="2025-04-30T12:40:01.620466180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.620515 containerd[1515]: time="2025-04-30T12:40:01.620501135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:40:01.620608 containerd[1515]: time="2025-04-30T12:40:01.620517195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.620819 containerd[1515]: time="2025-04-30T12:40:01.620781641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.621453 containerd[1515]: time="2025-04-30T12:40:01.621421301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 12:40:01.621746 containerd[1515]: time="2025-04-30T12:40:01.621716625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 12:40:01.621746 containerd[1515]: time="2025-04-30T12:40:01.621738536Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 12:40:01.621902 containerd[1515]: time="2025-04-30T12:40:01.621880112Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 12:40:01.621990 containerd[1515]: time="2025-04-30T12:40:01.621969640Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 12:40:01.628876 containerd[1515]: time="2025-04-30T12:40:01.628834326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 12:40:01.628930 containerd[1515]: time="2025-04-30T12:40:01.628902043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 12:40:01.628959 containerd[1515]: time="2025-04-30T12:40:01.628936898Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 12:40:01.628992 containerd[1515]: time="2025-04-30T12:40:01.628957557Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 12:40:01.628992 containerd[1515]: time="2025-04-30T12:40:01.628973146Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 12:40:01.629166 containerd[1515]: time="2025-04-30T12:40:01.629139148Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 12:40:01.629555 containerd[1515]: time="2025-04-30T12:40:01.629531634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 12:40:01.629718 containerd[1515]: time="2025-04-30T12:40:01.629692485Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 12:40:01.629750 containerd[1515]: time="2025-04-30T12:40:01.629716761Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 12:40:01.629750 containerd[1515]: time="2025-04-30T12:40:01.629744453Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 12:40:01.629803 containerd[1515]: time="2025-04-30T12:40:01.629770291Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629803 containerd[1515]: time="2025-04-30T12:40:01.629787013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629864 containerd[1515]: time="2025-04-30T12:40:01.629805267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629864 containerd[1515]: time="2025-04-30T12:40:01.629822139Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629916 containerd[1515]: time="2025-04-30T12:40:01.629861883Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629916 containerd[1515]: time="2025-04-30T12:40:01.629881921Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629916 containerd[1515]: time="2025-04-30T12:40:01.629894124Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.629916 containerd[1515]: time="2025-04-30T12:40:01.629904142Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.629934640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.629947454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.629958454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.629969786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.629981457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.629992939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.630003669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630015 containerd[1515]: time="2025-04-30T12:40:01.630016012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630028406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630044095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630057540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630069903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630082327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630100511Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630125217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630142099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630155815Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 12:40:01.630222 containerd[1515]: time="2025-04-30T12:40:01.630218512Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630242738Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630256533Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630271461Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630288153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630303051Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630333969Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 12:40:01.630504 containerd[1515]: time="2025-04-30T12:40:01.630348807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 12:40:01.630794 containerd[1515]: time="2025-04-30T12:40:01.630746262Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 12:40:01.630794 containerd[1515]: time="2025-04-30T12:40:01.630798129Z" level=info msg="Connect containerd service"
Apr 30 12:40:01.631156 containerd[1515]: time="2025-04-30T12:40:01.630832313Z" level=info msg="using legacy CRI server"
Apr 30 12:40:01.631156 containerd[1515]: time="2025-04-30T12:40:01.630840048Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 12:40:01.631156 containerd[1515]: time="2025-04-30T12:40:01.631015667Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 12:40:01.631926 containerd[1515]: time="2025-04-30T12:40:01.631887713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:40:01.632222 containerd[1515]: time="2025-04-30T12:40:01.632086676Z" level=info msg="Start subscribing containerd event"
Apr 30 12:40:01.632222 containerd[1515]: time="2025-04-30T12:40:01.632174341Z" level=info msg="Start recovering state"
Apr 30 12:40:01.632519 containerd[1515]: time="2025-04-30T12:40:01.632271623Z" level=info msg="Start event monitor"
Apr 30 12:40:01.632519 containerd[1515]: time="2025-04-30T12:40:01.632298604Z" level=info msg="Start snapshots syncer"
Apr 30 12:40:01.632519 containerd[1515]: time="2025-04-30T12:40:01.632323040Z" level=info msg="Start cni network conf syncer for default"
Apr 30 12:40:01.632519 containerd[1515]: time="2025-04-30T12:40:01.632334922Z" level=info msg="Start streaming server"
Apr 30 12:40:01.632519 containerd[1515]: time="2025-04-30T12:40:01.632471258Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 12:40:01.632652 containerd[1515]: time="2025-04-30T12:40:01.632536921Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 12:40:01.632652 containerd[1515]: time="2025-04-30T12:40:01.632631749Z" level=info msg="containerd successfully booted in 0.072899s"
Apr 30 12:40:01.632955 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 12:40:01.803527 tar[1504]: linux-amd64/README.md
Apr 30 12:40:01.828060 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 12:40:01.886682 systemd-networkd[1430]: eth0: Gained IPv6LL
Apr 30 12:40:01.891282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 12:40:01.893533 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 12:40:01.906746 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 30 12:40:01.909901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:40:01.912564 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 12:40:01.934466 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 30 12:40:01.935228 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 30 12:40:01.937124 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 12:40:01.942459 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 12:40:03.429160 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 12:40:03.469964 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:34438.service - OpenSSH per-connection server daemon (10.0.0.1:34438).
Apr 30 12:40:03.529360 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 34438 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:03.532264 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:03.560090 systemd-logind[1493]: New session 1 of user core.
Apr 30 12:40:03.561802 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 12:40:03.564639 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 12:40:03.566914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:40:03.569991 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 12:40:03.572695 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:40:03.647436 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 12:40:03.661726 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 12:40:03.666802 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 12:40:03.669759 systemd-logind[1493]: New session c1 of user core.
Apr 30 12:40:03.861777 systemd[1605]: Queued start job for default target default.target.
Apr 30 12:40:03.880012 systemd[1605]: Created slice app.slice - User Application Slice.
Apr 30 12:40:03.880044 systemd[1605]: Reached target paths.target - Paths.
Apr 30 12:40:03.880100 systemd[1605]: Reached target timers.target - Timers.
Apr 30 12:40:03.882156 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 12:40:03.897916 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 12:40:03.898079 systemd[1605]: Reached target sockets.target - Sockets.
Apr 30 12:40:03.898129 systemd[1605]: Reached target basic.target - Basic System.
Apr 30 12:40:03.898175 systemd[1605]: Reached target default.target - Main User Target.
Apr 30 12:40:03.898214 systemd[1605]: Startup finished in 218ms.
Apr 30 12:40:03.898840 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 12:40:03.908632 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 12:40:03.910894 systemd[1]: Startup finished in 1.499s (kernel) + 7.629s (initrd) + 6.152s (userspace) = 15.281s.
Apr 30 12:40:04.014907 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:34482.service - OpenSSH per-connection server daemon (10.0.0.1:34482).
Apr 30 12:40:04.110036 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 34482 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:04.113262 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:04.125877 systemd-logind[1493]: New session 2 of user core.
Apr 30 12:40:04.138540 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 12:40:04.198911 sshd[1628]: Connection closed by 10.0.0.1 port 34482
Apr 30 12:40:04.199483 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
Apr 30 12:40:04.209694 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:34482.service: Deactivated successfully.
Apr 30 12:40:04.211833 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 12:40:04.213659 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit.
Apr 30 12:40:04.222959 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:34492.service - OpenSSH per-connection server daemon (10.0.0.1:34492).
Apr 30 12:40:04.225256 systemd-logind[1493]: Removed session 2.
Apr 30 12:40:04.265258 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 34492 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:04.267872 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:04.273111 systemd-logind[1493]: New session 3 of user core.
Apr 30 12:40:04.291623 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 12:40:04.345337 sshd[1636]: Connection closed by 10.0.0.1 port 34492
Apr 30 12:40:04.346221 sshd-session[1633]: pam_unix(sshd:session): session closed for user core
Apr 30 12:40:04.351947 kubelet[1602]: E0430 12:40:04.351897 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:40:04.360782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:40:04.360974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:40:04.361355 systemd[1]: kubelet.service: Consumed 2.271s CPU time, 255.1M memory peak.
Apr 30 12:40:04.361896 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:34492.service: Deactivated successfully.
Apr 30 12:40:04.363898 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 12:40:04.366109 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit.
Apr 30 12:40:04.378804 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:34520.service - OpenSSH per-connection server daemon (10.0.0.1:34520).
Apr 30 12:40:04.380138 systemd-logind[1493]: Removed session 3.
Apr 30 12:40:04.421121 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 34520 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:04.422837 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:04.428955 systemd-logind[1493]: New session 4 of user core.
Apr 30 12:40:04.442594 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 12:40:04.500942 sshd[1645]: Connection closed by 10.0.0.1 port 34520
Apr 30 12:40:04.504271 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Apr 30 12:40:04.515453 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:34520.service: Deactivated successfully.
Apr 30 12:40:04.523715 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 12:40:04.524990 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
Apr 30 12:40:04.535825 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:34530.service - OpenSSH per-connection server daemon (10.0.0.1:34530).
Apr 30 12:40:04.536659 systemd-logind[1493]: Removed session 4.
Apr 30 12:40:04.577564 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 34530 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:04.580036 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:04.585556 systemd-logind[1493]: New session 5 of user core.
Apr 30 12:40:04.596709 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 12:40:04.660102 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 12:40:04.660585 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:40:04.680713 sudo[1654]: pam_unix(sudo:session): session closed for user root
Apr 30 12:40:04.683252 sshd[1653]: Connection closed by 10.0.0.1 port 34530
Apr 30 12:40:04.684192 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Apr 30 12:40:04.704806 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:34530.service: Deactivated successfully.
Apr 30 12:40:04.707174 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 12:40:04.709466 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
Apr 30 12:40:04.723651 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640).
Apr 30 12:40:04.724816 systemd-logind[1493]: Removed session 5.
Apr 30 12:40:04.764607 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:04.766382 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:04.772248 systemd-logind[1493]: New session 6 of user core.
Apr 30 12:40:04.781589 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 12:40:04.841499 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 12:40:04.841908 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:40:04.846833 sudo[1664]: pam_unix(sudo:session): session closed for user root
Apr 30 12:40:04.856615 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 12:40:04.857090 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:40:04.878885 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:40:04.956997 augenrules[1686]: No rules
Apr 30 12:40:04.959560 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:40:04.959962 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:40:04.962445 sudo[1663]: pam_unix(sudo:session): session closed for user root
Apr 30 12:40:04.965958 sshd[1662]: Connection closed by 10.0.0.1 port 34640
Apr 30 12:40:04.966978 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Apr 30 12:40:04.976813 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:34640.service: Deactivated successfully.
Apr 30 12:40:04.979098 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 12:40:04.981077 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit.
Apr 30 12:40:04.994913 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:34654.service - OpenSSH per-connection server daemon (10.0.0.1:34654).
Apr 30 12:40:04.996348 systemd-logind[1493]: Removed session 6.
Apr 30 12:40:05.037738 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 34654 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:40:05.039820 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:40:05.045593 systemd-logind[1493]: New session 7 of user core.
Apr 30 12:40:05.059661 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 12:40:05.114968 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 12:40:05.115372 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:40:05.731818 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 12:40:05.731955 (dockerd)[1718]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 12:40:06.024908 dockerd[1718]: time="2025-04-30T12:40:06.024717125Z" level=info msg="Starting up"
Apr 30 12:40:07.171817 dockerd[1718]: time="2025-04-30T12:40:07.171544977Z" level=info msg="Loading containers: start."
Apr 30 12:40:07.404430 kernel: Initializing XFRM netlink socket
Apr 30 12:40:07.498831 systemd-networkd[1430]: docker0: Link UP
Apr 30 12:40:07.610464 dockerd[1718]: time="2025-04-30T12:40:07.610406836Z" level=info msg="Loading containers: done."
Apr 30 12:40:07.631661 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck54725160-merged.mount: Deactivated successfully.
Apr 30 12:40:07.643972 dockerd[1718]: time="2025-04-30T12:40:07.643928894Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 12:40:07.644126 dockerd[1718]: time="2025-04-30T12:40:07.644045372Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Apr 30 12:40:07.644297 dockerd[1718]: time="2025-04-30T12:40:07.644263982Z" level=info msg="Daemon has completed initialization"
Apr 30 12:40:07.692813 dockerd[1718]: time="2025-04-30T12:40:07.692733828Z" level=info msg="API listen on /run/docker.sock"
Apr 30 12:40:07.692926 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 12:40:08.939605 containerd[1515]: time="2025-04-30T12:40:08.939552794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Apr 30 12:40:09.824037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642746977.mount: Deactivated successfully.
Apr 30 12:40:11.967343 containerd[1515]: time="2025-04-30T12:40:11.967238477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:11.968167 containerd[1515]: time="2025-04-30T12:40:11.968099041Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879"
Apr 30 12:40:11.969608 containerd[1515]: time="2025-04-30T12:40:11.969566995Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:11.973810 containerd[1515]: time="2025-04-30T12:40:11.973753968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:11.975728 containerd[1515]: time="2025-04-30T12:40:11.975680412Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 3.036078065s"
Apr 30 12:40:11.975823 containerd[1515]: time="2025-04-30T12:40:11.975749862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
Apr 30 12:40:11.976865 containerd[1515]: time="2025-04-30T12:40:11.976760638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Apr 30 12:40:14.136480 containerd[1515]: time="2025-04-30T12:40:14.136377091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:14.137815 containerd[1515]: time="2025-04-30T12:40:14.137768561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589"
Apr 30 12:40:14.139712 containerd[1515]: time="2025-04-30T12:40:14.139647906Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:14.143781 containerd[1515]: time="2025-04-30T12:40:14.143736926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:14.145220 containerd[1515]: time="2025-04-30T12:40:14.145182568Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.168377036s"
Apr 30 12:40:14.145277 containerd[1515]: time="2025-04-30T12:40:14.145222152Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
Apr 30 12:40:14.146733 containerd[1515]: time="2025-04-30T12:40:14.146283893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Apr 30 12:40:14.611670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:40:14.625571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:40:14.869854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:40:14.876532 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:40:14.985276 kubelet[1982]: E0430 12:40:14.985119 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:40:14.993897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:40:14.994170 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:40:14.994640 systemd[1]: kubelet.service: Consumed 336ms CPU time, 104.8M memory peak.
Apr 30 12:40:17.092741 containerd[1515]: time="2025-04-30T12:40:17.092657468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:17.094430 containerd[1515]: time="2025-04-30T12:40:17.094338090Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938"
Apr 30 12:40:17.095852 containerd[1515]: time="2025-04-30T12:40:17.095818708Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:17.098441 containerd[1515]: time="2025-04-30T12:40:17.098409617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:17.099710 containerd[1515]: time="2025-04-30T12:40:17.099647349Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.953310045s"
Apr 30 12:40:17.099710 containerd[1515]: time="2025-04-30T12:40:17.099692514Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
Apr 30 12:40:17.100468 containerd[1515]: time="2025-04-30T12:40:17.100255590Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 12:40:18.412572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546594680.mount: Deactivated successfully.
Apr 30 12:40:19.232790 containerd[1515]: time="2025-04-30T12:40:19.232727528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:19.233423 containerd[1515]: time="2025-04-30T12:40:19.233372057Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856"
Apr 30 12:40:19.234529 containerd[1515]: time="2025-04-30T12:40:19.234479143Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:19.238087 containerd[1515]: time="2025-04-30T12:40:19.238030324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:19.239047 containerd[1515]: time="2025-04-30T12:40:19.239011124Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.138721861s"
Apr 30 12:40:19.239047 containerd[1515]: time="2025-04-30T12:40:19.239042793Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
Apr 30 12:40:19.239521 containerd[1515]: time="2025-04-30T12:40:19.239496564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 12:40:20.400471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671927153.mount: Deactivated successfully.
Apr 30 12:40:23.250670 containerd[1515]: time="2025-04-30T12:40:23.250578618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:23.251761 containerd[1515]: time="2025-04-30T12:40:23.251679403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Apr 30 12:40:23.253055 containerd[1515]: time="2025-04-30T12:40:23.252992035Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:23.260901 containerd[1515]: time="2025-04-30T12:40:23.260838573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:23.262054 containerd[1515]: time="2025-04-30T12:40:23.262010771Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.02240419s"
Apr 30 12:40:23.262054 containerd[1515]: time="2025-04-30T12:40:23.262050115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Apr 30 12:40:23.262827 containerd[1515]: time="2025-04-30T12:40:23.262755438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 12:40:23.945954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544960797.mount: Deactivated successfully.
Apr 30 12:40:23.952464 containerd[1515]: time="2025-04-30T12:40:23.952374717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:23.953306 containerd[1515]: time="2025-04-30T12:40:23.953226414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 30 12:40:23.955298 containerd[1515]: time="2025-04-30T12:40:23.955245952Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:23.957896 containerd[1515]: time="2025-04-30T12:40:23.957846130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:23.958876 containerd[1515]: time="2025-04-30T12:40:23.958839152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 696.046544ms"
Apr 30 12:40:23.958952 containerd[1515]: time="2025-04-30T12:40:23.958878446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 30 12:40:23.959512 containerd[1515]: time="2025-04-30T12:40:23.959493089Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 12:40:24.479751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311097116.mount: Deactivated successfully.
Apr 30 12:40:25.124318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 12:40:25.133588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:40:25.307634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:40:25.313416 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:40:25.785712 kubelet[2115]: E0430 12:40:25.785633 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:40:25.790342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:40:25.790608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:40:25.791084 systemd[1]: kubelet.service: Consumed 301ms CPU time, 104.3M memory peak.
Apr 30 12:40:27.478269 containerd[1515]: time="2025-04-30T12:40:27.478167035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:27.480020 containerd[1515]: time="2025-04-30T12:40:27.479294751Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Apr 30 12:40:27.481541 containerd[1515]: time="2025-04-30T12:40:27.481458890Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:27.486258 containerd[1515]: time="2025-04-30T12:40:27.486167342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:40:27.487957 containerd[1515]: time="2025-04-30T12:40:27.487895523Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.528294983s"
Apr 30 12:40:27.487957 containerd[1515]: time="2025-04-30T12:40:27.487940067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Apr 30 12:40:30.803509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:40:30.803786 systemd[1]: kubelet.service: Consumed 301ms CPU time, 104.3M memory peak.
Apr 30 12:40:30.884749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:40:30.915707 systemd[1]: Reload requested from client PID 2159 ('systemctl') (unit session-7.scope)...
Apr 30 12:40:30.915733 systemd[1]: Reloading...
Apr 30 12:40:31.058501 zram_generator::config[2203]: No configuration found.
Apr 30 12:40:31.425902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:40:31.555403 systemd[1]: Reloading finished in 639 ms.
Apr 30 12:40:31.603594 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:40:31.606843 systemd[1]: kubelet.service: Deactivated successfully.
Apr 30 12:40:31.607142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:40:31.607195 systemd[1]: kubelet.service: Consumed 210ms CPU time, 91.7M memory peak.
Apr 30 12:40:31.608999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:40:31.782828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:40:31.787407 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 12:40:31.861886 kubelet[2253]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 12:40:31.861886 kubelet[2253]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 12:40:31.861886 kubelet[2253]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 12:40:31.862442 kubelet[2253]: I0430 12:40:31.861928 2253 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 12:40:32.062984 kubelet[2253]: I0430 12:40:32.062840 2253 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 12:40:32.062984 kubelet[2253]: I0430 12:40:32.062876 2253 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 12:40:32.063184 kubelet[2253]: I0430 12:40:32.063150 2253 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 12:40:32.289030 kubelet[2253]: E0430 12:40:32.288950 2253 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:32.289368 kubelet[2253]: I0430 12:40:32.289329 2253 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 12:40:32.348345 kubelet[2253]: E0430 12:40:32.348196 2253 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 12:40:32.348345 kubelet[2253]: I0430 12:40:32.348236 2253 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 12:40:32.356551 kubelet[2253]: I0430 12:40:32.356499 2253 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 12:40:32.359264 kubelet[2253]: I0430 12:40:32.359220 2253 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 12:40:32.359443 kubelet[2253]: I0430 12:40:32.359256 2253 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 12:40:32.359443 kubelet[2253]: I0430 12:40:32.359434 2253 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 12:40:32.359443 kubelet[2253]: I0430 12:40:32.359443 2253 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 12:40:32.359648 kubelet[2253]: I0430 12:40:32.359596 2253 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 12:40:32.366036 kubelet[2253]: I0430 12:40:32.366003 2253 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 12:40:32.366036 kubelet[2253]: I0430 12:40:32.366023 2253 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 12:40:32.366114 kubelet[2253]: I0430 12:40:32.366042 2253 kubelet.go:352] "Adding apiserver pod source"
Apr 30 12:40:32.366114 kubelet[2253]: I0430 12:40:32.366053 2253 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 12:40:32.395732 kubelet[2253]: W0430 12:40:32.395566 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:32.396382 kubelet[2253]: E0430 12:40:32.395827 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:32.396382 kubelet[2253]: I0430 12:40:32.396019 2253 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 12:40:32.396382 kubelet[2253]: W0430 12:40:32.396051 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:32.396382 kubelet[2253]: E0430 12:40:32.396123 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:32.396946 kubelet[2253]: I0430 12:40:32.396910 2253 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 12:40:32.407276 kubelet[2253]: W0430 12:40:32.407240 2253 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 12:40:32.416564 kubelet[2253]: I0430 12:40:32.416532 2253 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 12:40:32.416632 kubelet[2253]: I0430 12:40:32.416579 2253 server.go:1287] "Started kubelet"
Apr 30 12:40:32.416853 kubelet[2253]: I0430 12:40:32.416654 2253 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 12:40:32.417669 kubelet[2253]: I0430 12:40:32.417645 2253 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 12:40:32.417759 kubelet[2253]: I0430 12:40:32.417688 2253 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 12:40:32.418859 kubelet[2253]: I0430 12:40:32.418787 2253 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 12:40:32.418983 kubelet[2253]: I0430 12:40:32.418948 2253 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 12:40:32.419103 kubelet[2253]: I0430 12:40:32.419079 2253 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 12:40:32.419363 kubelet[2253]: E0430 12:40:32.419106 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:32.419363 kubelet[2253]: I0430 12:40:32.419179 2253 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 12:40:32.420533 kubelet[2253]: I0430 12:40:32.420503 2253 factory.go:221] Registration of the systemd container factory successfully
Apr 30 12:40:32.420704 kubelet[2253]: I0430 12:40:32.420672 2253 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 12:40:32.421770 kubelet[2253]: I0430 12:40:32.421072 2253 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 12:40:32.421770 kubelet[2253]: I0430 12:40:32.421149 2253 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 12:40:32.421770 kubelet[2253]: W0430 12:40:32.421614 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:32.421770 kubelet[2253]: E0430 12:40:32.421666 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:32.421918 kubelet[2253]: E0430 12:40:32.421795 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms"
Apr 30 12:40:32.422074 kubelet[2253]: E0430 12:40:32.422048 2253 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 12:40:32.422653 kubelet[2253]: I0430 12:40:32.422551 2253 factory.go:221] Registration of the containerd container factory successfully
Apr 30 12:40:32.425322 kubelet[2253]: E0430 12:40:32.423964 2253 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183b190eec0a9cae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 12:40:32.416554158 +0000 UTC m=+0.624688728,LastTimestamp:2025-04-30 12:40:32.416554158 +0000 UTC m=+0.624688728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 30 12:40:32.435144 kubelet[2253]: I0430 12:40:32.435112 2253 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 12:40:32.435144 kubelet[2253]: I0430 12:40:32.435130 2253 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 12:40:32.435144 kubelet[2253]: I0430 12:40:32.435147 2253 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 12:40:32.439073 kubelet[2253]: I0430 12:40:32.439027 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 12:40:32.440504 kubelet[2253]: I0430 12:40:32.440475 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 12:40:32.440504 kubelet[2253]: I0430 12:40:32.440502 2253 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 12:40:32.440578 kubelet[2253]: I0430 12:40:32.440552 2253 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 12:40:32.440578 kubelet[2253]: I0430 12:40:32.440571 2253 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 12:40:32.440681 kubelet[2253]: E0430 12:40:32.440640 2253 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 12:40:32.520364 kubelet[2253]: E0430 12:40:32.520255 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:32.541851 kubelet[2253]: E0430 12:40:32.541763 2253 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 12:40:32.621324 kubelet[2253]: E0430 12:40:32.621070 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:32.622988 kubelet[2253]: E0430 12:40:32.622895 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms"
Apr 30 12:40:32.721305 kubelet[2253]: E0430 12:40:32.721221 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:32.742576 kubelet[2253]: E0430 12:40:32.742525 2253 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 12:40:32.821828 kubelet[2253]: E0430 12:40:32.821777 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:32.922668 kubelet[2253]: E0430 12:40:32.922620 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.023206 kubelet[2253]: E0430 12:40:33.023124 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.023616 kubelet[2253]: E0430 12:40:33.023579 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
Apr 30 12:40:33.123995 kubelet[2253]: E0430 12:40:33.123937 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.143227 kubelet[2253]: E0430 12:40:33.143191 2253 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 12:40:33.224969 kubelet[2253]: E0430 12:40:33.224797 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.325467 kubelet[2253]: E0430 12:40:33.325369 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.399752 kubelet[2253]: W0430 12:40:33.399700 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:33.399752 kubelet[2253]: E0430 12:40:33.399743 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:33.426161 kubelet[2253]: E0430 12:40:33.426117 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.453044 kubelet[2253]: W0430 12:40:33.453013 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:33.453100 kubelet[2253]: E0430 12:40:33.453044 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:33.526751 kubelet[2253]: E0430 12:40:33.526629 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.627170 kubelet[2253]: E0430 12:40:33.627122 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.727892 kubelet[2253]: E0430 12:40:33.727827 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 12:40:33.757326 kubelet[2253]: W0430 12:40:33.757250 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:33.757496 kubelet[2253]: E0430 12:40:33.757337 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:33.802270 kubelet[2253]: I0430 12:40:33.801957 2253 policy_none.go:49] "None policy: Start"
Apr 30 12:40:33.802270 kubelet[2253]: I0430 12:40:33.802015 2253 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 12:40:33.802270 kubelet[2253]: I0430 12:40:33.802036 2253 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 12:40:33.818599 kubelet[2253]: W0430 12:40:33.818535 2253 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Apr 30 12:40:33.818599 kubelet[2253]: E0430 12:40:33.818581 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Apr 30 12:40:33.819525 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 12:40:33.824864 kubelet[2253]: E0430 12:40:33.824813 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" Apr 30 12:40:33.828898 kubelet[2253]: E0430 12:40:33.828839 2253 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:33.841499 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:40:33.855036 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:40:33.856559 kubelet[2253]: I0430 12:40:33.856502 2253 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:40:33.856871 kubelet[2253]: I0430 12:40:33.856846 2253 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:40:33.857067 kubelet[2253]: I0430 12:40:33.856880 2253 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:40:33.857180 kubelet[2253]: I0430 12:40:33.857151 2253 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:40:33.857958 kubelet[2253]: E0430 12:40:33.857938 2253 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 12:40:33.858033 kubelet[2253]: E0430 12:40:33.857989 2253 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 12:40:33.955294 systemd[1]: Created slice kubepods-burstable-pod387ce659c56f022a930cccb7f73bb904.slice - libcontainer container kubepods-burstable-pod387ce659c56f022a930cccb7f73bb904.slice. 
Apr 30 12:40:33.958505 kubelet[2253]: I0430 12:40:33.958461 2253 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 12:40:33.958969 kubelet[2253]: E0430 12:40:33.958925 2253 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 30 12:40:33.982665 kubelet[2253]: E0430 12:40:33.982618 2253 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 12:40:33.986120 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. Apr 30 12:40:33.999001 kubelet[2253]: E0430 12:40:33.998956 2253 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 12:40:34.001690 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
Apr 30 12:40:34.003576 kubelet[2253]: E0430 12:40:34.003541 2253 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 12:40:34.030898 kubelet[2253]: I0430 12:40:34.030832 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/387ce659c56f022a930cccb7f73bb904-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"387ce659c56f022a930cccb7f73bb904\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:34.030898 kubelet[2253]: I0430 12:40:34.030882 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:34.030898 kubelet[2253]: I0430 12:40:34.030909 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:34.031123 kubelet[2253]: I0430 12:40:34.030954 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:34.031123 kubelet[2253]: I0430 12:40:34.031006 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/387ce659c56f022a930cccb7f73bb904-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"387ce659c56f022a930cccb7f73bb904\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:34.031123 kubelet[2253]: I0430 12:40:34.031049 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/387ce659c56f022a930cccb7f73bb904-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"387ce659c56f022a930cccb7f73bb904\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:34.031123 kubelet[2253]: I0430 12:40:34.031074 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:34.031123 kubelet[2253]: I0430 12:40:34.031087 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:34.031275 kubelet[2253]: I0430 12:40:34.031102 2253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:34.160803 kubelet[2253]: I0430 12:40:34.160759 2253 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 
12:40:34.161167 kubelet[2253]: E0430 12:40:34.161132 2253 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 30 12:40:34.283427 kubelet[2253]: E0430 12:40:34.283356 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:34.284360 containerd[1515]: time="2025-04-30T12:40:34.284315506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:387ce659c56f022a930cccb7f73bb904,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:34.299878 kubelet[2253]: E0430 12:40:34.299821 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:34.300444 containerd[1515]: time="2025-04-30T12:40:34.300402455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:34.304749 kubelet[2253]: E0430 12:40:34.304701 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:34.305244 containerd[1515]: time="2025-04-30T12:40:34.305204501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:34.434254 kubelet[2253]: E0430 12:40:34.434147 2253 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:34.563690 kubelet[2253]: I0430 12:40:34.563650 2253 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 12:40:34.564062 kubelet[2253]: E0430 12:40:34.564026 2253 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 30 12:40:34.805738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639257697.mount: Deactivated successfully. Apr 30 12:40:34.815225 containerd[1515]: time="2025-04-30T12:40:34.815155465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.818314 containerd[1515]: time="2025-04-30T12:40:34.818234800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 12:40:34.819345 containerd[1515]: time="2025-04-30T12:40:34.819300919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.821298 containerd[1515]: time="2025-04-30T12:40:34.821252478Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.822447 containerd[1515]: time="2025-04-30T12:40:34.822381548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:40:34.823742 containerd[1515]: time="2025-04-30T12:40:34.823697937Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.824715 containerd[1515]: time="2025-04-30T12:40:34.824674773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:40:34.825871 containerd[1515]: time="2025-04-30T12:40:34.825838019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:40:34.826562 containerd[1515]: time="2025-04-30T12:40:34.826537994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.053774ms" Apr 30 12:40:34.829432 containerd[1515]: time="2025-04-30T12:40:34.829379904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.892025ms" Apr 30 12:40:34.833094 containerd[1515]: time="2025-04-30T12:40:34.833043223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.736605ms" Apr 30 12:40:34.907288 kubelet[2253]: W0430 12:40:34.907194 2253 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Apr 30 12:40:34.907288 kubelet[2253]: E0430 12:40:34.907282 2253 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Apr 30 12:40:35.074476 containerd[1515]: time="2025-04-30T12:40:35.069604905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:35.074476 containerd[1515]: time="2025-04-30T12:40:35.072199953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:35.074476 containerd[1515]: time="2025-04-30T12:40:35.072216024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:35.074476 containerd[1515]: time="2025-04-30T12:40:35.072324483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:35.085463 containerd[1515]: time="2025-04-30T12:40:35.085233548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:35.086130 containerd[1515]: time="2025-04-30T12:40:35.085510268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:35.086130 containerd[1515]: time="2025-04-30T12:40:35.085533363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:35.086459 containerd[1515]: time="2025-04-30T12:40:35.086210732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:35.108889 systemd[1]: Started cri-containerd-58210907b731e9f9e35786fa8071cdadb7e801bc7c82192e924055355e85e88a.scope - libcontainer container 58210907b731e9f9e35786fa8071cdadb7e801bc7c82192e924055355e85e88a. Apr 30 12:40:35.110108 containerd[1515]: time="2025-04-30T12:40:35.109987989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:35.110468 containerd[1515]: time="2025-04-30T12:40:35.110298213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:35.111509 containerd[1515]: time="2025-04-30T12:40:35.111378105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:35.111925 containerd[1515]: time="2025-04-30T12:40:35.111856413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:35.128658 systemd[1]: Started cri-containerd-fbfbaad60b584aed7f828111bb8cd37c6f18092e34d85757a8f088e0c3cdaef4.scope - libcontainer container fbfbaad60b584aed7f828111bb8cd37c6f18092e34d85757a8f088e0c3cdaef4. Apr 30 12:40:35.198774 systemd[1]: Started cri-containerd-b0a21e8645a2c0ef448d05e683899ec79ef7965fb5c4dae8b4da55c4ca08beee.scope - libcontainer container b0a21e8645a2c0ef448d05e683899ec79ef7965fb5c4dae8b4da55c4ca08beee. 
Apr 30 12:40:35.222259 containerd[1515]: time="2025-04-30T12:40:35.222215713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"58210907b731e9f9e35786fa8071cdadb7e801bc7c82192e924055355e85e88a\"" Apr 30 12:40:35.225163 kubelet[2253]: E0430 12:40:35.224876 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:35.228956 containerd[1515]: time="2025-04-30T12:40:35.228908492Z" level=info msg="CreateContainer within sandbox \"58210907b731e9f9e35786fa8071cdadb7e801bc7c82192e924055355e85e88a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:40:35.264614 containerd[1515]: time="2025-04-30T12:40:35.264547354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0a21e8645a2c0ef448d05e683899ec79ef7965fb5c4dae8b4da55c4ca08beee\"" Apr 30 12:40:35.265899 kubelet[2253]: E0430 12:40:35.265856 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:35.268993 containerd[1515]: time="2025-04-30T12:40:35.268951102Z" level=info msg="CreateContainer within sandbox \"b0a21e8645a2c0ef448d05e683899ec79ef7965fb5c4dae8b4da55c4ca08beee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:40:35.312436 containerd[1515]: time="2025-04-30T12:40:35.312279857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:387ce659c56f022a930cccb7f73bb904,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbfbaad60b584aed7f828111bb8cd37c6f18092e34d85757a8f088e0c3cdaef4\"" Apr 30 
12:40:35.313978 kubelet[2253]: E0430 12:40:35.313936 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:35.315769 containerd[1515]: time="2025-04-30T12:40:35.315721329Z" level=info msg="CreateContainer within sandbox \"fbfbaad60b584aed7f828111bb8cd37c6f18092e34d85757a8f088e0c3cdaef4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:40:35.322462 containerd[1515]: time="2025-04-30T12:40:35.322369403Z" level=info msg="CreateContainer within sandbox \"58210907b731e9f9e35786fa8071cdadb7e801bc7c82192e924055355e85e88a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"717b3a102bfd703e90e03240120c734bd3d8da3293df52617a0f32fdd8198ae8\"" Apr 30 12:40:35.323051 containerd[1515]: time="2025-04-30T12:40:35.323021494Z" level=info msg="StartContainer for \"717b3a102bfd703e90e03240120c734bd3d8da3293df52617a0f32fdd8198ae8\"" Apr 30 12:40:35.329081 containerd[1515]: time="2025-04-30T12:40:35.328841590Z" level=info msg="CreateContainer within sandbox \"b0a21e8645a2c0ef448d05e683899ec79ef7965fb5c4dae8b4da55c4ca08beee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a6b34a3365a898bf06afbd9ec576ebf7664425d95f21a867dddfff99dd94355a\"" Apr 30 12:40:35.329868 containerd[1515]: time="2025-04-30T12:40:35.329327863Z" level=info msg="StartContainer for \"a6b34a3365a898bf06afbd9ec576ebf7664425d95f21a867dddfff99dd94355a\"" Apr 30 12:40:35.343307 containerd[1515]: time="2025-04-30T12:40:35.343246044Z" level=info msg="CreateContainer within sandbox \"fbfbaad60b584aed7f828111bb8cd37c6f18092e34d85757a8f088e0c3cdaef4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2efe8ccc3619d4779098908002715c868114d8b5b05f9683efeaa5cc8586d046\"" Apr 30 12:40:35.344439 containerd[1515]: time="2025-04-30T12:40:35.344053603Z" level=info 
msg="StartContainer for \"2efe8ccc3619d4779098908002715c868114d8b5b05f9683efeaa5cc8586d046\"" Apr 30 12:40:35.365761 systemd[1]: Started cri-containerd-717b3a102bfd703e90e03240120c734bd3d8da3293df52617a0f32fdd8198ae8.scope - libcontainer container 717b3a102bfd703e90e03240120c734bd3d8da3293df52617a0f32fdd8198ae8. Apr 30 12:40:35.366751 kubelet[2253]: I0430 12:40:35.366293 2253 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 12:40:35.366832 kubelet[2253]: E0430 12:40:35.366808 2253 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 30 12:40:35.367685 systemd[1]: Started cri-containerd-a6b34a3365a898bf06afbd9ec576ebf7664425d95f21a867dddfff99dd94355a.scope - libcontainer container a6b34a3365a898bf06afbd9ec576ebf7664425d95f21a867dddfff99dd94355a. Apr 30 12:40:35.393666 systemd[1]: Started cri-containerd-2efe8ccc3619d4779098908002715c868114d8b5b05f9683efeaa5cc8586d046.scope - libcontainer container 2efe8ccc3619d4779098908002715c868114d8b5b05f9683efeaa5cc8586d046. 
Apr 30 12:40:35.428526 kubelet[2253]: E0430 12:40:35.425859 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="3.2s" Apr 30 12:40:35.470500 containerd[1515]: time="2025-04-30T12:40:35.469363836Z" level=info msg="StartContainer for \"717b3a102bfd703e90e03240120c734bd3d8da3293df52617a0f32fdd8198ae8\" returns successfully" Apr 30 12:40:35.477051 containerd[1515]: time="2025-04-30T12:40:35.477000557Z" level=info msg="StartContainer for \"a6b34a3365a898bf06afbd9ec576ebf7664425d95f21a867dddfff99dd94355a\" returns successfully" Apr 30 12:40:35.493572 containerd[1515]: time="2025-04-30T12:40:35.493500761Z" level=info msg="StartContainer for \"2efe8ccc3619d4779098908002715c868114d8b5b05f9683efeaa5cc8586d046\" returns successfully" Apr 30 12:40:36.458270 kubelet[2253]: E0430 12:40:36.458220 2253 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 12:40:36.458795 kubelet[2253]: E0430 12:40:36.458377 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.480369 kubelet[2253]: E0430 12:40:36.480152 2253 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 12:40:36.480369 kubelet[2253]: E0430 12:40:36.480289 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.483424 kubelet[2253]: E0430 12:40:36.483362 2253 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Apr 30 12:40:36.483622 kubelet[2253]: E0430 12:40:36.483502 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:36.970980 kubelet[2253]: I0430 12:40:36.970577 2253 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 12:40:37.350559 kubelet[2253]: I0430 12:40:37.350435 2253 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Apr 30 12:40:37.398903 kubelet[2253]: I0430 12:40:37.398855 2253 apiserver.go:52] "Watching apiserver" Apr 30 12:40:37.422148 kubelet[2253]: I0430 12:40:37.422096 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:37.422148 kubelet[2253]: I0430 12:40:37.422134 2253 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:40:37.480340 kubelet[2253]: I0430 12:40:37.480286 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:37.480877 kubelet[2253]: I0430 12:40:37.480618 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:37.480877 kubelet[2253]: I0430 12:40:37.480779 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.511844 2253 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.511904 2253 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:37.512445 kubelet[2253]: I0430 12:40:37.511929 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.512107 2253 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.511844 2253 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.512204 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.512282 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:37.512445 kubelet[2253]: E0430 12:40:37.512107 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:37.523614 kubelet[2253]: E0430 12:40:37.523522 2253 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:37.524075 kubelet[2253]: I0430 12:40:37.523835 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:37.526896 kubelet[2253]: E0430 12:40:37.526850 2253 
kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:38.482832 kubelet[2253]: I0430 12:40:38.482775 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:38.483367 kubelet[2253]: I0430 12:40:38.483213 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:38.488993 kubelet[2253]: E0430 12:40:38.488839 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:38.492437 kubelet[2253]: E0430 12:40:38.492277 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:39.117735 kubelet[2253]: I0430 12:40:39.117687 2253 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:39.123806 kubelet[2253]: E0430 12:40:39.123758 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:39.484843 kubelet[2253]: E0430 12:40:39.484782 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:39.485482 kubelet[2253]: E0430 12:40:39.484974 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:39.485482 kubelet[2253]: E0430 12:40:39.485251 2253 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:39.854083 systemd[1]: Reload requested from client PID 2543 ('systemctl') (unit session-7.scope)... Apr 30 12:40:39.854103 systemd[1]: Reloading... Apr 30 12:40:39.956484 zram_generator::config[2590]: No configuration found. Apr 30 12:40:40.094783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:40:40.262125 systemd[1]: Reloading finished in 407 ms. Apr 30 12:40:40.289542 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:40.301893 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:40:40.302288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:40.302367 systemd[1]: kubelet.service: Consumed 1.202s CPU time, 131.2M memory peak. Apr 30 12:40:40.311956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:40:40.504540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:40:40.510035 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:40:40.593687 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:40:40.593687 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 30 12:40:40.593687 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:40:40.593687 kubelet[2632]: I0430 12:40:40.593631 2632 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:40:40.601863 kubelet[2632]: I0430 12:40:40.601827 2632 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 12:40:40.601863 kubelet[2632]: I0430 12:40:40.601852 2632 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:40:40.602101 kubelet[2632]: I0430 12:40:40.602082 2632 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 12:40:40.603321 kubelet[2632]: I0430 12:40:40.603301 2632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 12:40:40.605593 kubelet[2632]: I0430 12:40:40.605570 2632 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:40:40.608527 kubelet[2632]: E0430 12:40:40.608499 2632 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 12:40:40.608527 kubelet[2632]: I0430 12:40:40.608524 2632 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 12:40:40.614343 kubelet[2632]: I0430 12:40:40.614303 2632 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:40:40.615643 kubelet[2632]: I0430 12:40:40.614594 2632 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:40:40.615643 kubelet[2632]: I0430 12:40:40.614627 2632 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 12:40:40.615643 kubelet[2632]: I0430 12:40:40.614887 2632 topology_manager.go:138] "Creating topology manager with none policy" 
Apr 30 12:40:40.615643 kubelet[2632]: I0430 12:40:40.614901 2632 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 12:40:40.615902 kubelet[2632]: I0430 12:40:40.614946 2632 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:40:40.615902 kubelet[2632]: I0430 12:40:40.615141 2632 kubelet.go:446] "Attempting to sync node with API server" Apr 30 12:40:40.615902 kubelet[2632]: I0430 12:40:40.615162 2632 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:40:40.615902 kubelet[2632]: I0430 12:40:40.615183 2632 kubelet.go:352] "Adding apiserver pod source" Apr 30 12:40:40.615902 kubelet[2632]: I0430 12:40:40.615195 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:40:40.616213 kubelet[2632]: I0430 12:40:40.616178 2632 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:40:40.616925 kubelet[2632]: I0430 12:40:40.616889 2632 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:40:40.617838 kubelet[2632]: I0430 12:40:40.617812 2632 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 12:40:40.617886 kubelet[2632]: I0430 12:40:40.617848 2632 server.go:1287] "Started kubelet" Apr 30 12:40:40.623568 kubelet[2632]: I0430 12:40:40.623509 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:40:40.623938 kubelet[2632]: I0430 12:40:40.623915 2632 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:40:40.625827 kubelet[2632]: I0430 12:40:40.625502 2632 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:40:40.627018 kubelet[2632]: I0430 12:40:40.626100 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:40:40.627185 kubelet[2632]: I0430 12:40:40.627153 2632 
server.go:490] "Adding debug handlers to kubelet server" Apr 30 12:40:40.629155 kubelet[2632]: E0430 12:40:40.628297 2632 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:40:40.629249 kubelet[2632]: I0430 12:40:40.629228 2632 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 12:40:40.632215 kubelet[2632]: I0430 12:40:40.632175 2632 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 12:40:40.632593 kubelet[2632]: E0430 12:40:40.632576 2632 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 12:40:40.632868 kubelet[2632]: I0430 12:40:40.632853 2632 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:40:40.633156 kubelet[2632]: I0430 12:40:40.633145 2632 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:40:40.633491 kubelet[2632]: I0430 12:40:40.633447 2632 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:40:40.633691 kubelet[2632]: I0430 12:40:40.633664 2632 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:40:40.635127 kubelet[2632]: I0430 12:40:40.635093 2632 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:40:40.643116 kubelet[2632]: I0430 12:40:40.642955 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:40:40.644445 kubelet[2632]: I0430 12:40:40.644382 2632 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:40:40.644445 kubelet[2632]: I0430 12:40:40.644434 2632 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 12:40:40.644514 kubelet[2632]: I0430 12:40:40.644456 2632 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 12:40:40.644514 kubelet[2632]: I0430 12:40:40.644465 2632 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 12:40:40.644578 kubelet[2632]: E0430 12:40:40.644515 2632 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:40:40.678060 kubelet[2632]: I0430 12:40:40.678011 2632 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 12:40:40.678060 kubelet[2632]: I0430 12:40:40.678034 2632 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 12:40:40.678060 kubelet[2632]: I0430 12:40:40.678057 2632 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:40:40.678278 kubelet[2632]: I0430 12:40:40.678258 2632 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:40:40.678341 kubelet[2632]: I0430 12:40:40.678277 2632 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:40:40.678341 kubelet[2632]: I0430 12:40:40.678311 2632 policy_none.go:49] "None policy: Start" Apr 30 12:40:40.678341 kubelet[2632]: I0430 12:40:40.678323 2632 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 12:40:40.678341 kubelet[2632]: I0430 12:40:40.678337 2632 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:40:40.678548 kubelet[2632]: I0430 12:40:40.678526 2632 state_mem.go:75] "Updated machine memory state" Apr 30 12:40:40.683416 kubelet[2632]: I0430 12:40:40.683378 2632 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:40:40.683732 kubelet[2632]: I0430 
12:40:40.683704 2632 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 12:40:40.683838 kubelet[2632]: I0430 12:40:40.683726 2632 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:40:40.684414 kubelet[2632]: I0430 12:40:40.683951 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:40:40.684976 kubelet[2632]: E0430 12:40:40.684801 2632 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 12:40:40.745750 kubelet[2632]: I0430 12:40:40.745660 2632 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.745982 kubelet[2632]: I0430 12:40:40.745792 2632 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.745982 kubelet[2632]: I0430 12:40:40.745829 2632 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:40.753686 kubelet[2632]: E0430 12:40:40.753635 2632 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.754608 kubelet[2632]: E0430 12:40:40.754567 2632 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:40.754744 kubelet[2632]: E0430 12:40:40.754663 2632 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.788678 kubelet[2632]: I0430 12:40:40.788625 2632 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 12:40:40.795428 kubelet[2632]: I0430 12:40:40.795378 2632 kubelet_node_status.go:125] "Node was 
previously registered" node="localhost" Apr 30 12:40:40.795568 kubelet[2632]: I0430 12:40:40.795467 2632 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Apr 30 12:40:40.806014 sudo[2667]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:40:40.806534 sudo[2667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:40:40.833806 kubelet[2632]: I0430 12:40:40.833742 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/387ce659c56f022a930cccb7f73bb904-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"387ce659c56f022a930cccb7f73bb904\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.833936 kubelet[2632]: I0430 12:40:40.833830 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.833936 kubelet[2632]: I0430 12:40:40.833856 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.833936 kubelet[2632]: I0430 12:40:40.833879 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " 
pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:40.833936 kubelet[2632]: I0430 12:40:40.833900 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.833936 kubelet[2632]: I0430 12:40:40.833918 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/387ce659c56f022a930cccb7f73bb904-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"387ce659c56f022a930cccb7f73bb904\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.834077 kubelet[2632]: I0430 12:40:40.833938 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/387ce659c56f022a930cccb7f73bb904-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"387ce659c56f022a930cccb7f73bb904\") " pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:40.834077 kubelet[2632]: I0430 12:40:40.833956 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:40.834077 kubelet[2632]: I0430 12:40:40.833978 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 30 12:40:41.054697 kubelet[2632]: E0430 12:40:41.054652 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.054875 kubelet[2632]: E0430 12:40:41.054834 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.055209 kubelet[2632]: E0430 12:40:41.055130 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.488361 sudo[2667]: pam_unix(sudo:session): session closed for user root Apr 30 12:40:41.618106 kubelet[2632]: I0430 12:40:41.618056 2632 apiserver.go:52] "Watching apiserver" Apr 30 12:40:41.634822 kubelet[2632]: I0430 12:40:41.634752 2632 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:40:41.660359 kubelet[2632]: E0430 12:40:41.660309 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.660497 kubelet[2632]: I0430 12:40:41.660413 2632 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:41.660950 kubelet[2632]: I0430 12:40:41.660917 2632 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:41.764661 kubelet[2632]: E0430 12:40:41.763709 2632 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 30 12:40:41.764661 kubelet[2632]: E0430 12:40:41.764094 2632 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.764986 kubelet[2632]: E0430 12:40:41.764829 2632 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 12:40:41.765117 kubelet[2632]: E0430 12:40:41.765048 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:41.797086 kubelet[2632]: I0430 12:40:41.797002 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.796976267 podStartE2EDuration="3.796976267s" podCreationTimestamp="2025-04-30 12:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:41.788631682 +0000 UTC m=+1.234439032" watchObservedRunningTime="2025-04-30 12:40:41.796976267 +0000 UTC m=+1.242783617" Apr 30 12:40:41.797265 kubelet[2632]: I0430 12:40:41.797130 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.797125811 podStartE2EDuration="2.797125811s" podCreationTimestamp="2025-04-30 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:41.796800722 +0000 UTC m=+1.242608072" watchObservedRunningTime="2025-04-30 12:40:41.797125811 +0000 UTC m=+1.242933161" Apr 30 12:40:41.808586 kubelet[2632]: I0430 12:40:41.808204 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.808184254 podStartE2EDuration="3.808184254s" podCreationTimestamp="2025-04-30 12:40:38 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:41.808088893 +0000 UTC m=+1.253896243" watchObservedRunningTime="2025-04-30 12:40:41.808184254 +0000 UTC m=+1.253991604" Apr 30 12:40:42.661695 kubelet[2632]: E0430 12:40:42.661656 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:42.662182 kubelet[2632]: E0430 12:40:42.661778 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:43.542673 sudo[1698]: pam_unix(sudo:session): session closed for user root Apr 30 12:40:43.544404 sshd[1697]: Connection closed by 10.0.0.1 port 34654 Apr 30 12:40:43.553875 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Apr 30 12:40:43.558479 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:34654.service: Deactivated successfully. Apr 30 12:40:43.560895 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:40:43.561123 systemd[1]: session-7.scope: Consumed 5.655s CPU time, 253.5M memory peak. Apr 30 12:40:43.562352 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:40:43.563276 systemd-logind[1493]: Removed session 7. Apr 30 12:40:45.119756 kubelet[2632]: I0430 12:40:45.119698 2632 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:40:45.120498 kubelet[2632]: I0430 12:40:45.120449 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:40:45.120546 containerd[1515]: time="2025-04-30T12:40:45.120191769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 12:40:45.632286 update_engine[1495]: I20250430 12:40:45.632157 1495 update_attempter.cc:509] Updating boot flags... Apr 30 12:40:45.637237 kubelet[2632]: E0430 12:40:45.637189 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:45.670126 kubelet[2632]: E0430 12:40:45.667509 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:46.223536 kubelet[2632]: E0430 12:40:46.223031 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:46.292585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2715) Apr 30 12:40:46.368516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2718) Apr 30 12:40:46.444427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2718) Apr 30 12:40:46.669125 kubelet[2632]: E0430 12:40:46.669074 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:46.816580 systemd[1]: Created slice kubepods-besteffort-pod859ec3c9_05e4_4f77_8d73_a5d0cbfc374c.slice - libcontainer container kubepods-besteffort-pod859ec3c9_05e4_4f77_8d73_a5d0cbfc374c.slice. Apr 30 12:40:46.832211 systemd[1]: Created slice kubepods-burstable-podc6b095b7_0ddb_4743_8e69_fe17232195cb.slice - libcontainer container kubepods-burstable-podc6b095b7_0ddb_4743_8e69_fe17232195cb.slice. 
Apr 30 12:40:46.837899 systemd[1]: Created slice kubepods-besteffort-podd94bc2e0_5d54_4c8b_a857_24186da688cf.slice - libcontainer container kubepods-besteffort-podd94bc2e0_5d54_4c8b_a857_24186da688cf.slice. Apr 30 12:40:46.872837 kubelet[2632]: I0430 12:40:46.872760 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/859ec3c9-05e4-4f77-8d73-a5d0cbfc374c-xtables-lock\") pod \"kube-proxy-kchbw\" (UID: \"859ec3c9-05e4-4f77-8d73-a5d0cbfc374c\") " pod="kube-system/kube-proxy-kchbw" Apr 30 12:40:46.872837 kubelet[2632]: I0430 12:40:46.872813 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-bpf-maps\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.872837 kubelet[2632]: I0430 12:40:46.872842 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d94bc2e0-5d54-4c8b-a857-24186da688cf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lvqfs\" (UID: \"d94bc2e0-5d54-4c8b-a857-24186da688cf\") " pod="kube-system/cilium-operator-6c4d7847fc-lvqfs" Apr 30 12:40:46.873093 kubelet[2632]: I0430 12:40:46.872867 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48fq\" (UniqueName: \"kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-kube-api-access-n48fq\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873093 kubelet[2632]: I0430 12:40:46.872896 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/859ec3c9-05e4-4f77-8d73-a5d0cbfc374c-kube-proxy\") pod \"kube-proxy-kchbw\" (UID: \"859ec3c9-05e4-4f77-8d73-a5d0cbfc374c\") " pod="kube-system/kube-proxy-kchbw" Apr 30 12:40:46.873093 kubelet[2632]: I0430 12:40:46.872918 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cni-path\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873093 kubelet[2632]: I0430 12:40:46.872937 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873093 kubelet[2632]: I0430 12:40:46.872962 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-kernel\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873093 kubelet[2632]: I0430 12:40:46.872985 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-run\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873260 kubelet[2632]: I0430 12:40:46.873003 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6b095b7-0ddb-4743-8e69-fe17232195cb-clustermesh-secrets\") pod \"cilium-wctmq\" (UID: 
\"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873260 kubelet[2632]: I0430 12:40:46.873021 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-hubble-tls\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873260 kubelet[2632]: I0430 12:40:46.873039 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmcp\" (UniqueName: \"kubernetes.io/projected/d94bc2e0-5d54-4c8b-a857-24186da688cf-kube-api-access-xfmcp\") pod \"cilium-operator-6c4d7847fc-lvqfs\" (UID: \"d94bc2e0-5d54-4c8b-a857-24186da688cf\") " pod="kube-system/cilium-operator-6c4d7847fc-lvqfs" Apr 30 12:40:46.873260 kubelet[2632]: I0430 12:40:46.873062 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-cgroup\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873260 kubelet[2632]: I0430 12:40:46.873085 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-lib-modules\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873434 kubelet[2632]: I0430 12:40:46.873113 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/859ec3c9-05e4-4f77-8d73-a5d0cbfc374c-lib-modules\") pod \"kube-proxy-kchbw\" (UID: \"859ec3c9-05e4-4f77-8d73-a5d0cbfc374c\") " pod="kube-system/kube-proxy-kchbw" Apr 30 
12:40:46.873434 kubelet[2632]: I0430 12:40:46.873168 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg2pn\" (UniqueName: \"kubernetes.io/projected/859ec3c9-05e4-4f77-8d73-a5d0cbfc374c-kube-api-access-mg2pn\") pod \"kube-proxy-kchbw\" (UID: \"859ec3c9-05e4-4f77-8d73-a5d0cbfc374c\") " pod="kube-system/kube-proxy-kchbw" Apr 30 12:40:46.873434 kubelet[2632]: I0430 12:40:46.873187 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-etc-cni-netd\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873434 kubelet[2632]: I0430 12:40:46.873207 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-net\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873434 kubelet[2632]: I0430 12:40:46.873226 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-hostproc\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:46.873434 kubelet[2632]: I0430 12:40:46.873246 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-xtables-lock\") pod \"cilium-wctmq\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " pod="kube-system/cilium-wctmq" Apr 30 12:40:47.131914 kubelet[2632]: E0430 12:40:47.131848 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.132841 containerd[1515]: time="2025-04-30T12:40:47.132779807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kchbw,Uid:859ec3c9-05e4-4f77-8d73-a5d0cbfc374c,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:47.136666 kubelet[2632]: E0430 12:40:47.136599 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.137290 containerd[1515]: time="2025-04-30T12:40:47.137205345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wctmq,Uid:c6b095b7-0ddb-4743-8e69-fe17232195cb,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:47.140660 kubelet[2632]: E0430 12:40:47.140608 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.142725 containerd[1515]: time="2025-04-30T12:40:47.142557310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lvqfs,Uid:d94bc2e0-5d54-4c8b-a857-24186da688cf,Namespace:kube-system,Attempt:0,}" Apr 30 12:40:47.193949 containerd[1515]: time="2025-04-30T12:40:47.191711367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:47.193949 containerd[1515]: time="2025-04-30T12:40:47.191803783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:47.193949 containerd[1515]: time="2025-04-30T12:40:47.191819913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.193949 containerd[1515]: time="2025-04-30T12:40:47.191932746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.199980 containerd[1515]: time="2025-04-30T12:40:47.199800951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:47.199980 containerd[1515]: time="2025-04-30T12:40:47.199877235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:47.199980 containerd[1515]: time="2025-04-30T12:40:47.199896140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.200231 containerd[1515]: time="2025-04-30T12:40:47.200023662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.203430 containerd[1515]: time="2025-04-30T12:40:47.203230049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:40:47.203684 containerd[1515]: time="2025-04-30T12:40:47.203518326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:40:47.203972 containerd[1515]: time="2025-04-30T12:40:47.203820269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.204546 containerd[1515]: time="2025-04-30T12:40:47.204349652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:40:47.222808 systemd[1]: Started cri-containerd-047d2d4f4489d0da12ef5d9845fd29b3676481f119dba3cbe4e7faae4027fa0e.scope - libcontainer container 047d2d4f4489d0da12ef5d9845fd29b3676481f119dba3cbe4e7faae4027fa0e. Apr 30 12:40:47.227729 systemd[1]: Started cri-containerd-cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31.scope - libcontainer container cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31. Apr 30 12:40:47.232629 systemd[1]: Started cri-containerd-39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744.scope - libcontainer container 39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744. Apr 30 12:40:47.261876 containerd[1515]: time="2025-04-30T12:40:47.261837195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kchbw,Uid:859ec3c9-05e4-4f77-8d73-a5d0cbfc374c,Namespace:kube-system,Attempt:0,} returns sandbox id \"047d2d4f4489d0da12ef5d9845fd29b3676481f119dba3cbe4e7faae4027fa0e\"" Apr 30 12:40:47.263371 kubelet[2632]: E0430 12:40:47.263350 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.269777 containerd[1515]: time="2025-04-30T12:40:47.269610819Z" level=info msg="CreateContainer within sandbox \"047d2d4f4489d0da12ef5d9845fd29b3676481f119dba3cbe4e7faae4027fa0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:40:47.269777 containerd[1515]: time="2025-04-30T12:40:47.269666966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wctmq,Uid:c6b095b7-0ddb-4743-8e69-fe17232195cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\"" Apr 30 12:40:47.270764 kubelet[2632]: E0430 12:40:47.270553 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.272738 containerd[1515]: time="2025-04-30T12:40:47.272701968Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:40:47.295027 containerd[1515]: time="2025-04-30T12:40:47.294981006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lvqfs,Uid:d94bc2e0-5d54-4c8b-a857-24186da688cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\"" Apr 30 12:40:47.296236 kubelet[2632]: E0430 12:40:47.296195 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.303083 containerd[1515]: time="2025-04-30T12:40:47.302992481Z" level=info msg="CreateContainer within sandbox \"047d2d4f4489d0da12ef5d9845fd29b3676481f119dba3cbe4e7faae4027fa0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc437feb58a4e1f873acafc445f7037ee8838afa6cd953ea4374674781a538d3\"" Apr 30 12:40:47.304122 containerd[1515]: time="2025-04-30T12:40:47.303979853Z" level=info msg="StartContainer for \"fc437feb58a4e1f873acafc445f7037ee8838afa6cd953ea4374674781a538d3\"" Apr 30 12:40:47.345610 systemd[1]: Started cri-containerd-fc437feb58a4e1f873acafc445f7037ee8838afa6cd953ea4374674781a538d3.scope - libcontainer container fc437feb58a4e1f873acafc445f7037ee8838afa6cd953ea4374674781a538d3. 
Apr 30 12:40:47.383031 containerd[1515]: time="2025-04-30T12:40:47.382880221Z" level=info msg="StartContainer for \"fc437feb58a4e1f873acafc445f7037ee8838afa6cd953ea4374674781a538d3\" returns successfully" Apr 30 12:40:47.674201 kubelet[2632]: E0430 12:40:47.674041 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:47.674201 kubelet[2632]: E0430 12:40:47.674103 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:49.536026 kubelet[2632]: E0430 12:40:49.535981 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:49.550902 kubelet[2632]: I0430 12:40:49.550804 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kchbw" podStartSLOduration=3.550729125 podStartE2EDuration="3.550729125s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:40:47.712666694 +0000 UTC m=+7.158474054" watchObservedRunningTime="2025-04-30 12:40:49.550729125 +0000 UTC m=+8.996536475" Apr 30 12:40:49.677749 kubelet[2632]: E0430 12:40:49.677704 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:50.679467 kubelet[2632]: E0430 12:40:50.679374 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:40:58.923583 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1986588602.mount: Deactivated successfully. Apr 30 12:41:05.180862 kubelet[2632]: E0430 12:41:05.180708 2632 kubelet.go:2579] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.536s" Apr 30 12:41:05.245704 containerd[1515]: time="2025-04-30T12:41:05.245557721Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:05.247720 containerd[1515]: time="2025-04-30T12:41:05.247645349Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 12:41:05.249764 containerd[1515]: time="2025-04-30T12:41:05.249586973Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:05.251803 containerd[1515]: time="2025-04-30T12:41:05.251712944Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.97897078s" Apr 30 12:41:05.251803 containerd[1515]: time="2025-04-30T12:41:05.251766746Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 12:41:05.268179 containerd[1515]: time="2025-04-30T12:41:05.268056771Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:41:05.284845 containerd[1515]: time="2025-04-30T12:41:05.284756880Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:41:05.340552 containerd[1515]: time="2025-04-30T12:41:05.340262946Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\"" Apr 30 12:41:05.344614 containerd[1515]: time="2025-04-30T12:41:05.344467989Z" level=info msg="StartContainer for \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\"" Apr 30 12:41:05.388685 systemd[1]: Started cri-containerd-500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605.scope - libcontainer container 500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605. Apr 30 12:41:05.422285 containerd[1515]: time="2025-04-30T12:41:05.422219688Z" level=info msg="StartContainer for \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\" returns successfully" Apr 30 12:41:05.437764 systemd[1]: cri-containerd-500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605.scope: Deactivated successfully. Apr 30 12:41:06.198949 kubelet[2632]: E0430 12:41:06.198901 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:06.328017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605-rootfs.mount: Deactivated successfully. 
Apr 30 12:41:06.697637 containerd[1515]: time="2025-04-30T12:41:06.697544199Z" level=info msg="shim disconnected" id=500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605 namespace=k8s.io Apr 30 12:41:06.697637 containerd[1515]: time="2025-04-30T12:41:06.697623499Z" level=warning msg="cleaning up after shim disconnected" id=500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605 namespace=k8s.io Apr 30 12:41:06.697637 containerd[1515]: time="2025-04-30T12:41:06.697634800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:07.199967 kubelet[2632]: E0430 12:41:07.199912 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:07.202746 containerd[1515]: time="2025-04-30T12:41:07.202704866Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:41:08.154061 containerd[1515]: time="2025-04-30T12:41:08.153993164Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\"" Apr 30 12:41:08.154577 containerd[1515]: time="2025-04-30T12:41:08.154529764Z" level=info msg="StartContainer for \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\"" Apr 30 12:41:08.187530 systemd[1]: Started cri-containerd-12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1.scope - libcontainer container 12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1. Apr 30 12:41:08.229838 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:41:08.230124 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 12:41:08.230471 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:08.236834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:41:08.237121 systemd[1]: cri-containerd-12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1.scope: Deactivated successfully. Apr 30 12:41:08.253172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:41:08.354181 containerd[1515]: time="2025-04-30T12:41:08.353640179Z" level=info msg="StartContainer for \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\" returns successfully" Apr 30 12:41:08.541055 containerd[1515]: time="2025-04-30T12:41:08.540891020Z" level=info msg="shim disconnected" id=12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1 namespace=k8s.io Apr 30 12:41:08.541055 containerd[1515]: time="2025-04-30T12:41:08.540950892Z" level=warning msg="cleaning up after shim disconnected" id=12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1 namespace=k8s.io Apr 30 12:41:08.541055 containerd[1515]: time="2025-04-30T12:41:08.540959899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:08.623316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1-rootfs.mount: Deactivated successfully. 
Apr 30 12:41:09.207870 kubelet[2632]: E0430 12:41:09.207819 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:09.211033 containerd[1515]: time="2025-04-30T12:41:09.210135565Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:41:09.230238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585222419.mount: Deactivated successfully. Apr 30 12:41:09.233769 containerd[1515]: time="2025-04-30T12:41:09.233719755Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\"" Apr 30 12:41:09.234186 containerd[1515]: time="2025-04-30T12:41:09.234155525Z" level=info msg="StartContainer for \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\"" Apr 30 12:41:09.270604 systemd[1]: Started cri-containerd-b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc.scope - libcontainer container b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc. Apr 30 12:41:09.305543 containerd[1515]: time="2025-04-30T12:41:09.305476271Z" level=info msg="StartContainer for \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\" returns successfully" Apr 30 12:41:09.307805 systemd[1]: cri-containerd-b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc.scope: Deactivated successfully. 
Apr 30 12:41:09.337836 containerd[1515]: time="2025-04-30T12:41:09.337740043Z" level=info msg="shim disconnected" id=b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc namespace=k8s.io Apr 30 12:41:09.337836 containerd[1515]: time="2025-04-30T12:41:09.337806988Z" level=warning msg="cleaning up after shim disconnected" id=b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc namespace=k8s.io Apr 30 12:41:09.337836 containerd[1515]: time="2025-04-30T12:41:09.337817608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:09.622518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc-rootfs.mount: Deactivated successfully. Apr 30 12:41:10.153751 containerd[1515]: time="2025-04-30T12:41:10.153137514Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:10.154110 containerd[1515]: time="2025-04-30T12:41:10.154067252Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 12:41:10.155161 containerd[1515]: time="2025-04-30T12:41:10.155124720Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:41:10.156471 containerd[1515]: time="2025-04-30T12:41:10.156436707Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", 
size \"18897442\" in 4.888332195s" Apr 30 12:41:10.156471 containerd[1515]: time="2025-04-30T12:41:10.156467124Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 12:41:10.158173 containerd[1515]: time="2025-04-30T12:41:10.158146442Z" level=info msg="CreateContainer within sandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:41:10.172492 containerd[1515]: time="2025-04-30T12:41:10.172422434Z" level=info msg="CreateContainer within sandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\"" Apr 30 12:41:10.173423 containerd[1515]: time="2025-04-30T12:41:10.173146445Z" level=info msg="StartContainer for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\"" Apr 30 12:41:10.205591 systemd[1]: Started cri-containerd-cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1.scope - libcontainer container cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1. 
Apr 30 12:41:10.219210 kubelet[2632]: E0430 12:41:10.217803 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:10.223572 containerd[1515]: time="2025-04-30T12:41:10.223505869Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:41:10.406315 containerd[1515]: time="2025-04-30T12:41:10.406152378Z" level=info msg="StartContainer for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" returns successfully" Apr 30 12:41:10.582364 containerd[1515]: time="2025-04-30T12:41:10.582290048Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\"" Apr 30 12:41:10.582914 containerd[1515]: time="2025-04-30T12:41:10.582875118Z" level=info msg="StartContainer for \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\"" Apr 30 12:41:10.611565 systemd[1]: Started cri-containerd-3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7.scope - libcontainer container 3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7. Apr 30 12:41:10.652081 systemd[1]: cri-containerd-3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7.scope: Deactivated successfully. 
Apr 30 12:41:10.751823 containerd[1515]: time="2025-04-30T12:41:10.751654584Z" level=info msg="StartContainer for \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\" returns successfully" Apr 30 12:41:10.791857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7-rootfs.mount: Deactivated successfully. Apr 30 12:41:11.237404 kubelet[2632]: E0430 12:41:11.237356 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:11.239656 kubelet[2632]: E0430 12:41:11.239597 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:11.370837 containerd[1515]: time="2025-04-30T12:41:11.370693956Z" level=info msg="shim disconnected" id=3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7 namespace=k8s.io Apr 30 12:41:11.370837 containerd[1515]: time="2025-04-30T12:41:11.370769388Z" level=warning msg="cleaning up after shim disconnected" id=3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7 namespace=k8s.io Apr 30 12:41:11.370837 containerd[1515]: time="2025-04-30T12:41:11.370783405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:41:11.448639 kubelet[2632]: I0430 12:41:11.448556 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lvqfs" podStartSLOduration=2.588425148 podStartE2EDuration="25.448530932s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="2025-04-30 12:40:47.296918819 +0000 UTC m=+6.742726169" lastFinishedPulling="2025-04-30 12:41:10.157024603 +0000 UTC m=+29.602831953" observedRunningTime="2025-04-30 12:41:11.396588766 +0000 UTC m=+30.842396116" watchObservedRunningTime="2025-04-30 
12:41:11.448530932 +0000 UTC m=+30.894338282" Apr 30 12:41:12.246609 kubelet[2632]: E0430 12:41:12.246437 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:12.246609 kubelet[2632]: E0430 12:41:12.246437 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:12.249410 containerd[1515]: time="2025-04-30T12:41:12.248823718Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:41:12.835531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257874958.mount: Deactivated successfully. Apr 30 12:41:12.892942 containerd[1515]: time="2025-04-30T12:41:12.892876114Z" level=info msg="CreateContainer within sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\"" Apr 30 12:41:12.893568 containerd[1515]: time="2025-04-30T12:41:12.893526447Z" level=info msg="StartContainer for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\"" Apr 30 12:41:12.959833 systemd[1]: Started cri-containerd-7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f.scope - libcontainer container 7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f. 
Apr 30 12:41:13.000538 containerd[1515]: time="2025-04-30T12:41:13.000449964Z" level=info msg="StartContainer for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" returns successfully" Apr 30 12:41:13.139548 kubelet[2632]: I0430 12:41:13.138571 2632 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 12:41:13.176248 systemd[1]: Created slice kubepods-burstable-pod669687a1_4ab5_4cdd_9c52_b961e69ab382.slice - libcontainer container kubepods-burstable-pod669687a1_4ab5_4cdd_9c52_b961e69ab382.slice. Apr 30 12:41:13.194216 systemd[1]: Created slice kubepods-burstable-pod9cfa2ac7_63a5_47ba_8bae_9d518db31439.slice - libcontainer container kubepods-burstable-pod9cfa2ac7_63a5_47ba_8bae_9d518db31439.slice. Apr 30 12:41:13.252543 kubelet[2632]: E0430 12:41:13.252248 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:13.274650 kubelet[2632]: I0430 12:41:13.273675 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wctmq" podStartSLOduration=9.277987809 podStartE2EDuration="27.273650455s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="2025-04-30 12:40:47.271714386 +0000 UTC m=+6.717521736" lastFinishedPulling="2025-04-30 12:41:05.267377032 +0000 UTC m=+24.713184382" observedRunningTime="2025-04-30 12:41:13.27311551 +0000 UTC m=+32.718922880" watchObservedRunningTime="2025-04-30 12:41:13.273650455 +0000 UTC m=+32.719457805" Apr 30 12:41:13.343510 kubelet[2632]: I0430 12:41:13.343460 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cfa2ac7-63a5-47ba-8bae-9d518db31439-config-volume\") pod \"coredns-668d6bf9bc-qdw88\" (UID: \"9cfa2ac7-63a5-47ba-8bae-9d518db31439\") " pod="kube-system/coredns-668d6bf9bc-qdw88" Apr 
30 12:41:13.343826 kubelet[2632]: I0430 12:41:13.343780 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkw79\" (UniqueName: \"kubernetes.io/projected/669687a1-4ab5-4cdd-9c52-b961e69ab382-kube-api-access-bkw79\") pod \"coredns-668d6bf9bc-2d249\" (UID: \"669687a1-4ab5-4cdd-9c52-b961e69ab382\") " pod="kube-system/coredns-668d6bf9bc-2d249" Apr 30 12:41:13.344022 kubelet[2632]: I0430 12:41:13.343895 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/669687a1-4ab5-4cdd-9c52-b961e69ab382-config-volume\") pod \"coredns-668d6bf9bc-2d249\" (UID: \"669687a1-4ab5-4cdd-9c52-b961e69ab382\") " pod="kube-system/coredns-668d6bf9bc-2d249" Apr 30 12:41:13.344022 kubelet[2632]: I0430 12:41:13.343969 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxk8p\" (UniqueName: \"kubernetes.io/projected/9cfa2ac7-63a5-47ba-8bae-9d518db31439-kube-api-access-kxk8p\") pod \"coredns-668d6bf9bc-qdw88\" (UID: \"9cfa2ac7-63a5-47ba-8bae-9d518db31439\") " pod="kube-system/coredns-668d6bf9bc-qdw88" Apr 30 12:41:13.485060 kubelet[2632]: E0430 12:41:13.484675 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:13.486603 containerd[1515]: time="2025-04-30T12:41:13.486546050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2d249,Uid:669687a1-4ab5-4cdd-9c52-b961e69ab382,Namespace:kube-system,Attempt:0,}" Apr 30 12:41:13.497067 kubelet[2632]: E0430 12:41:13.497011 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:41:13.497753 containerd[1515]: 
time="2025-04-30T12:41:13.497702978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qdw88,Uid:9cfa2ac7-63a5-47ba-8bae-9d518db31439,Namespace:kube-system,Attempt:0,}"
Apr 30 12:41:14.254090 kubelet[2632]: E0430 12:41:14.254031 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:15.255979 kubelet[2632]: E0430 12:41:15.255928 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:15.362551 systemd-networkd[1430]: cilium_host: Link UP
Apr 30 12:41:15.362775 systemd-networkd[1430]: cilium_net: Link UP
Apr 30 12:41:15.362996 systemd-networkd[1430]: cilium_net: Gained carrier
Apr 30 12:41:15.363217 systemd-networkd[1430]: cilium_host: Gained carrier
Apr 30 12:41:15.363435 systemd-networkd[1430]: cilium_net: Gained IPv6LL
Apr 30 12:41:15.363628 systemd-networkd[1430]: cilium_host: Gained IPv6LL
Apr 30 12:41:15.490870 systemd-networkd[1430]: cilium_vxlan: Link UP
Apr 30 12:41:15.490883 systemd-networkd[1430]: cilium_vxlan: Gained carrier
Apr 30 12:41:15.748257 kernel: NET: Registered PF_ALG protocol family
Apr 30 12:41:16.257382 kubelet[2632]: E0430 12:41:16.257337 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:16.586186 systemd-networkd[1430]: lxc_health: Link UP
Apr 30 12:41:16.586536 systemd-networkd[1430]: lxc_health: Gained carrier
Apr 30 12:41:17.128196 kernel: eth0: renamed from tmp6b944
Apr 30 12:41:17.126192 systemd-networkd[1430]: lxc2e18b0ed4f39: Link UP
Apr 30 12:41:17.133438 kernel: eth0: renamed from tmpf297c
Apr 30 12:41:17.138834 systemd-networkd[1430]: lxc49023673dddd: Link UP
Apr 30 12:41:17.142978 systemd-networkd[1430]: lxc2e18b0ed4f39: Gained carrier
Apr 30 12:41:17.144338 systemd-networkd[1430]: lxc49023673dddd: Gained carrier
Apr 30 12:41:17.268218 kubelet[2632]: E0430 12:41:17.268173 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:17.343806 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL
Apr 30 12:41:18.046994 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Apr 30 12:41:18.266533 kubelet[2632]: E0430 12:41:18.266480 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:18.750549 systemd-networkd[1430]: lxc2e18b0ed4f39: Gained IPv6LL
Apr 30 12:41:19.070654 systemd-networkd[1430]: lxc49023673dddd: Gained IPv6LL
Apr 30 12:41:20.873985 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:33160.service - OpenSSH per-connection server daemon (10.0.0.1:33160).
Apr 30 12:41:20.921676 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 33160 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:20.924221 sshd-session[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:20.933701 systemd-logind[1493]: New session 8 of user core.
Apr 30 12:41:20.938638 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 12:41:20.945621 containerd[1515]: time="2025-04-30T12:41:20.945384106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:41:20.946118 containerd[1515]: time="2025-04-30T12:41:20.945617013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:41:20.946118 containerd[1515]: time="2025-04-30T12:41:20.945686615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:20.947071 containerd[1515]: time="2025-04-30T12:41:20.946895454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:41:20.947071 containerd[1515]: time="2025-04-30T12:41:20.946970536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:41:20.947071 containerd[1515]: time="2025-04-30T12:41:20.946985454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:20.947417 containerd[1515]: time="2025-04-30T12:41:20.947122821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:20.947675 containerd[1515]: time="2025-04-30T12:41:20.947617661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:41:20.982738 systemd[1]: Started cri-containerd-6b9442114b3565bbcd1d88e0e61c67cecdca26088d91f38ae4fbe02d097ba863.scope - libcontainer container 6b9442114b3565bbcd1d88e0e61c67cecdca26088d91f38ae4fbe02d097ba863.
Apr 30 12:41:20.984989 systemd[1]: Started cri-containerd-f297c3cd53f7f00b30b15444ecf0a9d1ad975b946bbbeaaa022fb92c884411c2.scope - libcontainer container f297c3cd53f7f00b30b15444ecf0a9d1ad975b946bbbeaaa022fb92c884411c2.
Apr 30 12:41:21.001368 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 30 12:41:21.004708 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 30 12:41:21.036095 containerd[1515]: time="2025-04-30T12:41:21.036014149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qdw88,Uid:9cfa2ac7-63a5-47ba-8bae-9d518db31439,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b9442114b3565bbcd1d88e0e61c67cecdca26088d91f38ae4fbe02d097ba863\""
Apr 30 12:41:21.038216 kubelet[2632]: E0430 12:41:21.037281 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:21.040686 containerd[1515]: time="2025-04-30T12:41:21.040622384Z" level=info msg="CreateContainer within sandbox \"6b9442114b3565bbcd1d88e0e61c67cecdca26088d91f38ae4fbe02d097ba863\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 12:41:21.047625 containerd[1515]: time="2025-04-30T12:41:21.047588495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2d249,Uid:669687a1-4ab5-4cdd-9c52-b961e69ab382,Namespace:kube-system,Attempt:0,} returns sandbox id \"f297c3cd53f7f00b30b15444ecf0a9d1ad975b946bbbeaaa022fb92c884411c2\""
Apr 30 12:41:21.048678 kubelet[2632]: E0430 12:41:21.048639 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:21.053152 containerd[1515]: time="2025-04-30T12:41:21.052488166Z" level=info msg="CreateContainer within sandbox \"f297c3cd53f7f00b30b15444ecf0a9d1ad975b946bbbeaaa022fb92c884411c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 12:41:21.071004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299841151.mount: Deactivated successfully.
Apr 30 12:41:21.076719 containerd[1515]: time="2025-04-30T12:41:21.076663846Z" level=info msg="CreateContainer within sandbox \"6b9442114b3565bbcd1d88e0e61c67cecdca26088d91f38ae4fbe02d097ba863\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58393eb25657240f12b8d455c685081c968d72cb25aed7cc2b53cb88df7c735b\""
Apr 30 12:41:21.077791 containerd[1515]: time="2025-04-30T12:41:21.077729276Z" level=info msg="StartContainer for \"58393eb25657240f12b8d455c685081c968d72cb25aed7cc2b53cb88df7c735b\""
Apr 30 12:41:21.103443 containerd[1515]: time="2025-04-30T12:41:21.102509923Z" level=info msg="CreateContainer within sandbox \"f297c3cd53f7f00b30b15444ecf0a9d1ad975b946bbbeaaa022fb92c884411c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"049910a048eb44d056bdae98a4cf43562eaba8e97b18638a4d488067165c0127\""
Apr 30 12:41:21.104208 containerd[1515]: time="2025-04-30T12:41:21.104142909Z" level=info msg="StartContainer for \"049910a048eb44d056bdae98a4cf43562eaba8e97b18638a4d488067165c0127\""
Apr 30 12:41:21.108435 sshd[3895]: Connection closed by 10.0.0.1 port 33160
Apr 30 12:41:21.109051 sshd-session[3867]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:21.117143 systemd[1]: Started cri-containerd-58393eb25657240f12b8d455c685081c968d72cb25aed7cc2b53cb88df7c735b.scope - libcontainer container 58393eb25657240f12b8d455c685081c968d72cb25aed7cc2b53cb88df7c735b.
Apr 30 12:41:21.117769 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:33160.service: Deactivated successfully.
Apr 30 12:41:21.120896 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 12:41:21.123626 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit.
Apr 30 12:41:21.125642 systemd-logind[1493]: Removed session 8.
Apr 30 12:41:21.152576 systemd[1]: Started cri-containerd-049910a048eb44d056bdae98a4cf43562eaba8e97b18638a4d488067165c0127.scope - libcontainer container 049910a048eb44d056bdae98a4cf43562eaba8e97b18638a4d488067165c0127.
Apr 30 12:41:21.163682 containerd[1515]: time="2025-04-30T12:41:21.163631513Z" level=info msg="StartContainer for \"58393eb25657240f12b8d455c685081c968d72cb25aed7cc2b53cb88df7c735b\" returns successfully"
Apr 30 12:41:21.192482 containerd[1515]: time="2025-04-30T12:41:21.192433211Z" level=info msg="StartContainer for \"049910a048eb44d056bdae98a4cf43562eaba8e97b18638a4d488067165c0127\" returns successfully"
Apr 30 12:41:21.279771 kubelet[2632]: E0430 12:41:21.279719 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:21.282857 kubelet[2632]: E0430 12:41:21.282764 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:21.294236 kubelet[2632]: I0430 12:41:21.294124 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qdw88" podStartSLOduration=35.294105572 podStartE2EDuration="35.294105572s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:41:21.293879707 +0000 UTC m=+40.739687067" watchObservedRunningTime="2025-04-30 12:41:21.294105572 +0000 UTC m=+40.739912922"
Apr 30 12:41:22.285656 kubelet[2632]: E0430 12:41:22.285359 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:22.285656 kubelet[2632]: E0430 12:41:22.285562 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:22.616779 kubelet[2632]: I0430 12:41:22.616448 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2d249" podStartSLOduration=36.616430804 podStartE2EDuration="36.616430804s" podCreationTimestamp="2025-04-30 12:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:41:21.311819717 +0000 UTC m=+40.757627067" watchObservedRunningTime="2025-04-30 12:41:22.616430804 +0000 UTC m=+42.062238154"
Apr 30 12:41:23.287560 kubelet[2632]: E0430 12:41:23.287521 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:23.288050 kubelet[2632]: E0430 12:41:23.287577 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:41:26.137798 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:35392.service - OpenSSH per-connection server daemon (10.0.0.1:35392).
Apr 30 12:41:26.180046 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 35392 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:26.182523 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:26.188562 systemd-logind[1493]: New session 9 of user core.
Apr 30 12:41:26.205668 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 12:41:26.389425 sshd[4054]: Connection closed by 10.0.0.1 port 35392
Apr 30 12:41:26.389806 sshd-session[4052]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:26.394316 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:35392.service: Deactivated successfully.
Apr 30 12:41:26.396468 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 12:41:26.397184 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit.
Apr 30 12:41:26.398097 systemd-logind[1493]: Removed session 9.
Apr 30 12:41:31.406774 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:35402.service - OpenSSH per-connection server daemon (10.0.0.1:35402).
Apr 30 12:41:31.453277 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 35402 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:31.455146 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:31.459820 systemd-logind[1493]: New session 10 of user core.
Apr 30 12:41:31.473639 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 12:41:31.599453 sshd[4070]: Connection closed by 10.0.0.1 port 35402
Apr 30 12:41:31.599803 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:31.605712 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:35402.service: Deactivated successfully.
Apr 30 12:41:31.607904 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 12:41:31.608629 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit.
Apr 30 12:41:31.609582 systemd-logind[1493]: Removed session 10.
Apr 30 12:41:36.616076 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:49958.service - OpenSSH per-connection server daemon (10.0.0.1:49958).
Apr 30 12:41:36.660437 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 49958 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:36.662748 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:36.669666 systemd-logind[1493]: New session 11 of user core.
Apr 30 12:41:36.679071 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 12:41:36.797851 sshd[4087]: Connection closed by 10.0.0.1 port 49958
Apr 30 12:41:36.798248 sshd-session[4085]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:36.802475 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:49958.service: Deactivated successfully.
Apr 30 12:41:36.804936 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 12:41:36.805767 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit.
Apr 30 12:41:36.806804 systemd-logind[1493]: Removed session 11.
Apr 30 12:41:41.814129 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:49970.service - OpenSSH per-connection server daemon (10.0.0.1:49970).
Apr 30 12:41:41.854955 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 49970 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:41.857210 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:41.863543 systemd-logind[1493]: New session 12 of user core.
Apr 30 12:41:41.872725 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 12:41:41.993438 sshd[4105]: Connection closed by 10.0.0.1 port 49970
Apr 30 12:41:41.993875 sshd-session[4103]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:42.010513 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:49970.service: Deactivated successfully.
Apr 30 12:41:42.012906 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 12:41:42.014682 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit.
Apr 30 12:41:42.024669 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:49976.service - OpenSSH per-connection server daemon (10.0.0.1:49976).
Apr 30 12:41:42.025987 systemd-logind[1493]: Removed session 12.
Apr 30 12:41:42.061533 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 49976 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:42.063071 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:42.067351 systemd-logind[1493]: New session 13 of user core.
Apr 30 12:41:42.074527 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 12:41:42.251346 sshd[4121]: Connection closed by 10.0.0.1 port 49976
Apr 30 12:41:42.251955 sshd-session[4118]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:42.264063 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:49976.service: Deactivated successfully.
Apr 30 12:41:42.266620 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 12:41:42.269087 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit.
Apr 30 12:41:42.281940 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:49980.service - OpenSSH per-connection server daemon (10.0.0.1:49980).
Apr 30 12:41:42.286124 systemd-logind[1493]: Removed session 13.
Apr 30 12:41:42.333592 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 49980 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:42.335756 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:42.343096 systemd-logind[1493]: New session 14 of user core.
Apr 30 12:41:42.353680 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 12:41:42.520083 sshd[4134]: Connection closed by 10.0.0.1 port 49980
Apr 30 12:41:42.520598 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:42.526199 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:49980.service: Deactivated successfully.
Apr 30 12:41:42.528652 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 12:41:42.529471 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit.
Apr 30 12:41:42.530553 systemd-logind[1493]: Removed session 14.
Apr 30 12:41:47.534261 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:35252.service - OpenSSH per-connection server daemon (10.0.0.1:35252).
Apr 30 12:41:47.577132 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 35252 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:47.578970 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:47.584054 systemd-logind[1493]: New session 15 of user core.
Apr 30 12:41:47.589586 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 12:41:47.719147 sshd[4152]: Connection closed by 10.0.0.1 port 35252
Apr 30 12:41:47.719654 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:47.725453 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:35252.service: Deactivated successfully.
Apr 30 12:41:47.728157 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 12:41:47.728980 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit.
Apr 30 12:41:47.730043 systemd-logind[1493]: Removed session 15.
Apr 30 12:41:52.732910 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:35264.service - OpenSSH per-connection server daemon (10.0.0.1:35264).
Apr 30 12:41:52.774302 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 35264 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:52.776236 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:52.781495 systemd-logind[1493]: New session 16 of user core.
Apr 30 12:41:52.793578 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 12:41:52.909732 sshd[4169]: Connection closed by 10.0.0.1 port 35264
Apr 30 12:41:52.910134 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:52.914211 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:35264.service: Deactivated successfully.
Apr 30 12:41:52.917085 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 12:41:52.917944 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit.
Apr 30 12:41:52.918900 systemd-logind[1493]: Removed session 16.
Apr 30 12:41:57.927465 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:49474.service - OpenSSH per-connection server daemon (10.0.0.1:49474).
Apr 30 12:41:57.985594 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 49474 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:57.987457 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:57.992311 systemd-logind[1493]: New session 17 of user core.
Apr 30 12:41:58.006714 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 12:41:58.127537 sshd[4184]: Connection closed by 10.0.0.1 port 49474
Apr 30 12:41:58.128025 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:58.141402 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:49474.service: Deactivated successfully.
Apr 30 12:41:58.144031 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 12:41:58.146027 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit.
Apr 30 12:41:58.157852 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:49488.service - OpenSSH per-connection server daemon (10.0.0.1:49488).
Apr 30 12:41:58.159159 systemd-logind[1493]: Removed session 17.
Apr 30 12:41:58.197563 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 49488 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:58.199558 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:58.204510 systemd-logind[1493]: New session 18 of user core.
Apr 30 12:41:58.212568 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 12:41:58.524666 sshd[4200]: Connection closed by 10.0.0.1 port 49488
Apr 30 12:41:58.525306 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:58.539130 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:49488.service: Deactivated successfully.
Apr 30 12:41:58.541800 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 12:41:58.543800 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit.
Apr 30 12:41:58.553206 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:49502.service - OpenSSH per-connection server daemon (10.0.0.1:49502).
Apr 30 12:41:58.554546 systemd-logind[1493]: Removed session 18.
Apr 30 12:41:58.593405 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 49502 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:58.595196 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:58.600139 systemd-logind[1493]: New session 19 of user core.
Apr 30 12:41:58.613678 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 12:41:59.514238 sshd[4213]: Connection closed by 10.0.0.1 port 49502
Apr 30 12:41:59.517012 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:59.529695 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:49502.service: Deactivated successfully.
Apr 30 12:41:59.533948 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 12:41:59.535624 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit.
Apr 30 12:41:59.546783 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:49518.service - OpenSSH per-connection server daemon (10.0.0.1:49518).
Apr 30 12:41:59.547840 systemd-logind[1493]: Removed session 19.
Apr 30 12:41:59.586963 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 49518 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:59.588824 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:59.594743 systemd-logind[1493]: New session 20 of user core.
Apr 30 12:41:59.603523 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 12:41:59.897796 sshd[4251]: Connection closed by 10.0.0.1 port 49518
Apr 30 12:41:59.898716 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Apr 30 12:41:59.907999 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:49518.service: Deactivated successfully.
Apr 30 12:41:59.910227 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 12:41:59.911298 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit.
Apr 30 12:41:59.919841 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526).
Apr 30 12:41:59.921238 systemd-logind[1493]: Removed session 20.
Apr 30 12:41:59.961630 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:41:59.963594 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:41:59.968794 systemd-logind[1493]: New session 21 of user core.
Apr 30 12:41:59.985720 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 12:42:00.113642 sshd[4265]: Connection closed by 10.0.0.1 port 49526
Apr 30 12:42:00.114126 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:00.119340 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:49526.service: Deactivated successfully.
Apr 30 12:42:00.121966 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 12:42:00.123092 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit.
Apr 30 12:42:00.124263 systemd-logind[1493]: Removed session 21.
Apr 30 12:42:05.128772 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:49392.service - OpenSSH per-connection server daemon (10.0.0.1:49392).
Apr 30 12:42:05.170734 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 49392 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:05.172551 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:05.177029 systemd-logind[1493]: New session 22 of user core.
Apr 30 12:42:05.186545 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 12:42:05.306227 sshd[4280]: Connection closed by 10.0.0.1 port 49392
Apr 30 12:42:05.306644 sshd-session[4278]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:05.311439 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:49392.service: Deactivated successfully.
Apr 30 12:42:05.313979 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 12:42:05.314885 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit.
Apr 30 12:42:05.315926 systemd-logind[1493]: Removed session 22.
Apr 30 12:42:05.645628 kubelet[2632]: E0430 12:42:05.645568 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:08.646074 kubelet[2632]: E0430 12:42:08.645986 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:10.326499 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:49398.service - OpenSSH per-connection server daemon (10.0.0.1:49398).
Apr 30 12:42:10.372184 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 49398 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:10.374033 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:10.378591 systemd-logind[1493]: New session 23 of user core.
Apr 30 12:42:10.387816 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 12:42:10.512891 sshd[4297]: Connection closed by 10.0.0.1 port 49398
Apr 30 12:42:10.513349 sshd-session[4295]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:10.519004 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:49398.service: Deactivated successfully.
Apr 30 12:42:10.521991 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 12:42:10.523307 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit.
Apr 30 12:42:10.524591 systemd-logind[1493]: Removed session 23.
Apr 30 12:42:10.645542 kubelet[2632]: E0430 12:42:10.645507 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:14.646141 kubelet[2632]: E0430 12:42:14.646070 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:15.530191 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:50556.service - OpenSSH per-connection server daemon (10.0.0.1:50556).
Apr 30 12:42:15.574331 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 50556 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:15.576299 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:15.582841 systemd-logind[1493]: New session 24 of user core.
Apr 30 12:42:15.597701 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 12:42:15.742621 sshd[4312]: Connection closed by 10.0.0.1 port 50556
Apr 30 12:42:15.743107 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:15.748646 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:50556.service: Deactivated successfully.
Apr 30 12:42:15.751331 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 12:42:15.752303 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit.
Apr 30 12:42:15.753328 systemd-logind[1493]: Removed session 24.
Apr 30 12:42:20.758432 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:50570.service - OpenSSH per-connection server daemon (10.0.0.1:50570).
Apr 30 12:42:20.805964 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 50570 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:20.808232 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:20.814347 systemd-logind[1493]: New session 25 of user core.
Apr 30 12:42:20.825664 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 12:42:20.951262 sshd[4330]: Connection closed by 10.0.0.1 port 50570
Apr 30 12:42:20.951772 sshd-session[4328]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:20.965193 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:50570.service: Deactivated successfully.
Apr 30 12:42:20.967750 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 12:42:20.970462 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit.
Apr 30 12:42:20.979062 systemd[1]: Started sshd@25-10.0.0.14:22-10.0.0.1:50572.service - OpenSSH per-connection server daemon (10.0.0.1:50572).
Apr 30 12:42:20.980769 systemd-logind[1493]: Removed session 25.
Apr 30 12:42:21.021650 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 50572 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:21.023726 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:21.030696 systemd-logind[1493]: New session 26 of user core.
Apr 30 12:42:21.040776 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 12:42:22.645736 kubelet[2632]: E0430 12:42:22.645688 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:22.655414 kubelet[2632]: E0430 12:42:22.652585 2632 configmap.go:193] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Apr 30 12:42:22.655414 kubelet[2632]: E0430 12:42:22.652819 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path podName:c6b095b7-0ddb-4743-8e69-fe17232195cb nodeName:}" failed. No retries permitted until 2025-04-30 12:42:23.152763782 +0000 UTC m=+102.598571122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path") pod "cilium-wctmq" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb") : configmap "cilium-config" not found
Apr 30 12:42:23.156005 kubelet[2632]: E0430 12:42:23.155940 2632 configmap.go:193] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Apr 30 12:42:23.156190 kubelet[2632]: E0430 12:42:23.156030 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path podName:c6b095b7-0ddb-4743-8e69-fe17232195cb nodeName:}" failed. No retries permitted until 2025-04-30 12:42:24.156013775 +0000 UTC m=+103.601821125 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path") pod "cilium-wctmq" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb") : configmap "cilium-config" not found
Apr 30 12:42:23.363805 containerd[1515]: time="2025-04-30T12:42:23.363739079Z" level=info msg="StopContainer for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" with timeout 30 (s)"
Apr 30 12:42:23.365184 containerd[1515]: time="2025-04-30T12:42:23.364592846Z" level=info msg="Stop container \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" with signal terminated"
Apr 30 12:42:23.378279 systemd[1]: cri-containerd-cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1.scope: Deactivated successfully.
Apr 30 12:42:23.411315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1-rootfs.mount: Deactivated successfully.
Apr 30 12:42:23.428611 containerd[1515]: time="2025-04-30T12:42:23.428496057Z" level=info msg="shim disconnected" id=cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1 namespace=k8s.io
Apr 30 12:42:23.428850 containerd[1515]: time="2025-04-30T12:42:23.428605685Z" level=warning msg="cleaning up after shim disconnected" id=cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1 namespace=k8s.io
Apr 30 12:42:23.428850 containerd[1515]: time="2025-04-30T12:42:23.428648586Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:23.462139 containerd[1515]: time="2025-04-30T12:42:23.462056751Z" level=info msg="StopContainer for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" returns successfully"
Apr 30 12:42:23.468733 containerd[1515]: time="2025-04-30T12:42:23.468645723Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:42:23.476760 containerd[1515]: time="2025-04-30T12:42:23.476705941Z" level=info msg="StopPodSandbox for \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\""
Apr 30 12:42:23.478655 containerd[1515]: time="2025-04-30T12:42:23.478414548Z" level=info msg="StopContainer for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" with timeout 2 (s)"
Apr 30 12:42:23.479160 containerd[1515]: time="2025-04-30T12:42:23.479115205Z" level=info msg="Stop container \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" with signal terminated"
Apr 30 12:42:23.483258 containerd[1515]: time="2025-04-30T12:42:23.476974510Z" level=info msg="Container to stop \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:42:23.487126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744-shm.mount: Deactivated successfully.
Apr 30 12:42:23.491821 systemd-networkd[1430]: lxc_health: Link DOWN
Apr 30 12:42:23.491830 systemd-networkd[1430]: lxc_health: Lost carrier
Apr 30 12:42:23.497965 systemd[1]: cri-containerd-39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744.scope: Deactivated successfully.
Apr 30 12:42:23.525246 systemd[1]: cri-containerd-7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f.scope: Deactivated successfully.
Apr 30 12:42:23.525825 systemd[1]: cri-containerd-7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f.scope: Consumed 7.889s CPU time, 124M memory peak, 216K read from disk, 13.3M written to disk.
Apr 30 12:42:23.532383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744-rootfs.mount: Deactivated successfully.
Apr 30 12:42:23.551458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f-rootfs.mount: Deactivated successfully.
Apr 30 12:42:23.706868 containerd[1515]: time="2025-04-30T12:42:23.706446349Z" level=info msg="shim disconnected" id=39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744 namespace=k8s.io Apr 30 12:42:23.706868 containerd[1515]: time="2025-04-30T12:42:23.706516381Z" level=warning msg="cleaning up after shim disconnected" id=39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744 namespace=k8s.io Apr 30 12:42:23.706868 containerd[1515]: time="2025-04-30T12:42:23.706533433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:23.706868 containerd[1515]: time="2025-04-30T12:42:23.706560725Z" level=info msg="shim disconnected" id=7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f namespace=k8s.io Apr 30 12:42:23.706868 containerd[1515]: time="2025-04-30T12:42:23.706832289Z" level=warning msg="cleaning up after shim disconnected" id=7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f namespace=k8s.io Apr 30 12:42:23.706868 containerd[1515]: time="2025-04-30T12:42:23.706848420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:23.727461 containerd[1515]: time="2025-04-30T12:42:23.727316873Z" level=info msg="TearDown network for sandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" successfully" Apr 30 12:42:23.727461 containerd[1515]: time="2025-04-30T12:42:23.727366938Z" level=info msg="StopPodSandbox for \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" returns successfully" Apr 30 12:42:23.824685 containerd[1515]: time="2025-04-30T12:42:23.824353059Z" level=info msg="StopContainer for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" returns successfully" Apr 30 12:42:23.825182 containerd[1515]: time="2025-04-30T12:42:23.825124229Z" level=info msg="StopPodSandbox for \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\"" Apr 30 12:42:23.825243 containerd[1515]: time="2025-04-30T12:42:23.825176098Z" level=info 
msg="Container to stop \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:42:23.825243 containerd[1515]: time="2025-04-30T12:42:23.825218628Z" level=info msg="Container to stop \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:42:23.825243 containerd[1515]: time="2025-04-30T12:42:23.825228827Z" level=info msg="Container to stop \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:42:23.825243 containerd[1515]: time="2025-04-30T12:42:23.825239107Z" level=info msg="Container to stop \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:42:23.825417 containerd[1515]: time="2025-04-30T12:42:23.825248946Z" level=info msg="Container to stop \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 12:42:23.835259 systemd[1]: cri-containerd-cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31.scope: Deactivated successfully. 
Apr 30 12:42:23.861615 kubelet[2632]: I0430 12:42:23.861564 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfmcp\" (UniqueName: \"kubernetes.io/projected/d94bc2e0-5d54-4c8b-a857-24186da688cf-kube-api-access-xfmcp\") pod \"d94bc2e0-5d54-4c8b-a857-24186da688cf\" (UID: \"d94bc2e0-5d54-4c8b-a857-24186da688cf\") " Apr 30 12:42:23.905084 kubelet[2632]: I0430 12:42:23.861626 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d94bc2e0-5d54-4c8b-a857-24186da688cf-cilium-config-path\") pod \"d94bc2e0-5d54-4c8b-a857-24186da688cf\" (UID: \"d94bc2e0-5d54-4c8b-a857-24186da688cf\") " Apr 30 12:42:23.908814 kubelet[2632]: I0430 12:42:23.908737 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d94bc2e0-5d54-4c8b-a857-24186da688cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d94bc2e0-5d54-4c8b-a857-24186da688cf" (UID: "d94bc2e0-5d54-4c8b-a857-24186da688cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 12:42:23.963232 kubelet[2632]: I0430 12:42:23.963007 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d94bc2e0-5d54-4c8b-a857-24186da688cf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.093775 kubelet[2632]: I0430 12:42:24.093699 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94bc2e0-5d54-4c8b-a857-24186da688cf-kube-api-access-xfmcp" (OuterVolumeSpecName: "kube-api-access-xfmcp") pod "d94bc2e0-5d54-4c8b-a857-24186da688cf" (UID: "d94bc2e0-5d54-4c8b-a857-24186da688cf"). InnerVolumeSpecName "kube-api-access-xfmcp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 12:42:24.164623 kubelet[2632]: I0430 12:42:24.164548 2632 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfmcp\" (UniqueName: \"kubernetes.io/projected/d94bc2e0-5d54-4c8b-a857-24186da688cf-kube-api-access-xfmcp\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.164783 kubelet[2632]: E0430 12:42:24.164655 2632 configmap.go:193] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Apr 30 12:42:24.164783 kubelet[2632]: E0430 12:42:24.164715 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path podName:c6b095b7-0ddb-4743-8e69-fe17232195cb nodeName:}" failed. No retries permitted until 2025-04-30 12:42:26.164700647 +0000 UTC m=+105.610507997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path") pod "cilium-wctmq" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb") : configmap "cilium-config" not found Apr 30 12:42:24.265315 containerd[1515]: time="2025-04-30T12:42:24.264749391Z" level=info msg="shim disconnected" id=cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31 namespace=k8s.io Apr 30 12:42:24.265315 containerd[1515]: time="2025-04-30T12:42:24.264820496Z" level=warning msg="cleaning up after shim disconnected" id=cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31 namespace=k8s.io Apr 30 12:42:24.265315 containerd[1515]: time="2025-04-30T12:42:24.264832128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:42:24.282240 containerd[1515]: time="2025-04-30T12:42:24.281924349Z" level=info msg="TearDown network for sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" successfully" Apr 30 12:42:24.282240 containerd[1515]: time="2025-04-30T12:42:24.281994372Z" 
level=info msg="StopPodSandbox for \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" returns successfully" Apr 30 12:42:24.365819 kubelet[2632]: I0430 12:42:24.365752 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-net\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.365819 kubelet[2632]: I0430 12:42:24.365813 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-hostproc\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.365819 kubelet[2632]: I0430 12:42:24.365845 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n48fq\" (UniqueName: \"kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-kube-api-access-n48fq\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366124 kubelet[2632]: I0430 12:42:24.365868 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-run\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366124 kubelet[2632]: I0430 12:42:24.365889 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-hubble-tls\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366124 kubelet[2632]: I0430 12:42:24.365910 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cni-path\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366124 kubelet[2632]: I0430 12:42:24.365935 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-bpf-maps\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366124 kubelet[2632]: I0430 12:42:24.365955 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-kernel\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366124 kubelet[2632]: I0430 12:42:24.365975 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-cgroup\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366331 kubelet[2632]: I0430 12:42:24.366004 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-lib-modules\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366331 kubelet[2632]: I0430 12:42:24.366030 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-etc-cni-netd\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366331 kubelet[2632]: 
I0430 12:42:24.366054 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6b095b7-0ddb-4743-8e69-fe17232195cb-clustermesh-secrets\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366331 kubelet[2632]: I0430 12:42:24.366085 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366331 kubelet[2632]: I0430 12:42:24.366075 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366331 kubelet[2632]: I0430 12:42:24.366106 2632 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-xtables-lock\") pod \"c6b095b7-0ddb-4743-8e69-fe17232195cb\" (UID: \"c6b095b7-0ddb-4743-8e69-fe17232195cb\") " Apr 30 12:42:24.366552 kubelet[2632]: I0430 12:42:24.366160 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366552 kubelet[2632]: I0430 12:42:24.366197 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366552 kubelet[2632]: I0430 12:42:24.366234 2632 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.366552 kubelet[2632]: I0430 12:42:24.366251 2632 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.366552 kubelet[2632]: I0430 12:42:24.366265 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.366552 kubelet[2632]: I0430 12:42:24.366292 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366730 kubelet[2632]: I0430 12:42:24.366312 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366730 kubelet[2632]: I0430 12:42:24.366328 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366730 kubelet[2632]: I0430 12:42:24.366344 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366730 kubelet[2632]: I0430 12:42:24.366362 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366730 kubelet[2632]: I0430 12:42:24.366377 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.366874 kubelet[2632]: I0430 12:42:24.366690 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 12:42:24.370443 kubelet[2632]: I0430 12:42:24.370381 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-kube-api-access-n48fq" (OuterVolumeSpecName: "kube-api-access-n48fq") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "kube-api-access-n48fq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 12:42:24.370512 kubelet[2632]: I0430 12:42:24.370471 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 12:42:24.371224 kubelet[2632]: I0430 12:42:24.371185 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 12:42:24.371473 kubelet[2632]: I0430 12:42:24.371373 2632 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b095b7-0ddb-4743-8e69-fe17232195cb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6b095b7-0ddb-4743-8e69-fe17232195cb" (UID: "c6b095b7-0ddb-4743-8e69-fe17232195cb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 12:42:24.417002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31-rootfs.mount: Deactivated successfully. Apr 30 12:42:24.417159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31-shm.mount: Deactivated successfully. Apr 30 12:42:24.417245 systemd[1]: var-lib-kubelet-pods-d94bc2e0\x2d5d54\x2d4c8b\x2da857\x2d24186da688cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfmcp.mount: Deactivated successfully. Apr 30 12:42:24.417341 systemd[1]: var-lib-kubelet-pods-c6b095b7\x2d0ddb\x2d4743\x2d8e69\x2dfe17232195cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn48fq.mount: Deactivated successfully. Apr 30 12:42:24.418617 systemd[1]: var-lib-kubelet-pods-c6b095b7\x2d0ddb\x2d4743\x2d8e69\x2dfe17232195cb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 30 12:42:24.418747 systemd[1]: var-lib-kubelet-pods-c6b095b7\x2d0ddb\x2d4743\x2d8e69\x2dfe17232195cb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:42:24.432124 kubelet[2632]: I0430 12:42:24.432076 2632 scope.go:117] "RemoveContainer" containerID="cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1" Apr 30 12:42:24.440842 systemd[1]: Removed slice kubepods-besteffort-podd94bc2e0_5d54_4c8b_a857_24186da688cf.slice - libcontainer container kubepods-besteffort-podd94bc2e0_5d54_4c8b_a857_24186da688cf.slice. Apr 30 12:42:24.443092 containerd[1515]: time="2025-04-30T12:42:24.442933668Z" level=info msg="RemoveContainer for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\"" Apr 30 12:42:24.448584 systemd[1]: Removed slice kubepods-burstable-podc6b095b7_0ddb_4743_8e69_fe17232195cb.slice - libcontainer container kubepods-burstable-podc6b095b7_0ddb_4743_8e69_fe17232195cb.slice. Apr 30 12:42:24.448714 systemd[1]: kubepods-burstable-podc6b095b7_0ddb_4743_8e69_fe17232195cb.slice: Consumed 8.010s CPU time, 124.3M memory peak, 232K read from disk, 13.3M written to disk. 
Apr 30 12:42:24.467527 kubelet[2632]: I0430 12:42:24.467442 2632 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467527 kubelet[2632]: I0430 12:42:24.467505 2632 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467527 kubelet[2632]: I0430 12:42:24.467517 2632 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n48fq\" (UniqueName: \"kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-kube-api-access-n48fq\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467527 kubelet[2632]: I0430 12:42:24.467527 2632 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6b095b7-0ddb-4743-8e69-fe17232195cb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467527 kubelet[2632]: I0430 12:42:24.467540 2632 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467527 kubelet[2632]: I0430 12:42:24.467550 2632 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467892 kubelet[2632]: I0430 12:42:24.467562 2632 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467892 kubelet[2632]: I0430 12:42:24.467575 2632 reconciler_common.go:299] "Volume detached for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467892 kubelet[2632]: I0430 12:42:24.467584 2632 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467892 kubelet[2632]: I0430 12:42:24.467594 2632 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6b095b7-0ddb-4743-8e69-fe17232195cb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.467892 kubelet[2632]: I0430 12:42:24.467607 2632 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6b095b7-0ddb-4743-8e69-fe17232195cb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 30 12:42:24.656700 containerd[1515]: time="2025-04-30T12:42:24.656621626Z" level=info msg="RemoveContainer for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" returns successfully" Apr 30 12:42:24.657053 kubelet[2632]: I0430 12:42:24.657014 2632 scope.go:117] "RemoveContainer" containerID="cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1" Apr 30 12:42:24.657411 containerd[1515]: time="2025-04-30T12:42:24.657330008Z" level=error msg="ContainerStatus for \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\": not found" Apr 30 12:42:24.664978 kubelet[2632]: E0430 12:42:24.664922 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\": not found" 
containerID="cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1" Apr 30 12:42:24.665180 kubelet[2632]: I0430 12:42:24.664977 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1"} err="failed to get container status \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"cadc6ebcbd78d68a98e7fbb6492a9c1dd51a4caa385e654163ef7271ad4ef4e1\": not found" Apr 30 12:42:24.665180 kubelet[2632]: I0430 12:42:24.665064 2632 scope.go:117] "RemoveContainer" containerID="7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f" Apr 30 12:42:24.666788 containerd[1515]: time="2025-04-30T12:42:24.666747855Z" level=info msg="RemoveContainer for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\"" Apr 30 12:42:24.749823 containerd[1515]: time="2025-04-30T12:42:24.749698356Z" level=info msg="RemoveContainer for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" returns successfully" Apr 30 12:42:24.750140 kubelet[2632]: I0430 12:42:24.750072 2632 scope.go:117] "RemoveContainer" containerID="3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7" Apr 30 12:42:24.752126 containerd[1515]: time="2025-04-30T12:42:24.751942446Z" level=info msg="RemoveContainer for \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\"" Apr 30 12:42:24.787450 containerd[1515]: time="2025-04-30T12:42:24.785893568Z" level=info msg="RemoveContainer for \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\" returns successfully" Apr 30 12:42:24.787673 kubelet[2632]: I0430 12:42:24.786282 2632 scope.go:117] "RemoveContainer" containerID="b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc" Apr 30 12:42:24.791629 sshd[4346]: Connection closed by 10.0.0.1 port 50572 Apr 30 12:42:24.792256 
containerd[1515]: time="2025-04-30T12:42:24.791971880Z" level=info msg="RemoveContainer for \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\"" Apr 30 12:42:24.792583 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:24.808051 systemd[1]: sshd@25-10.0.0.14:22-10.0.0.1:50572.service: Deactivated successfully. Apr 30 12:42:24.811149 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 12:42:24.814308 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit. Apr 30 12:42:24.819179 systemd[1]: Started sshd@26-10.0.0.14:22-10.0.0.1:50582.service - OpenSSH per-connection server daemon (10.0.0.1:50582). Apr 30 12:42:24.820959 systemd-logind[1493]: Removed session 26. Apr 30 12:42:24.896961 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 50582 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE Apr 30 12:42:24.899099 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:42:24.905200 containerd[1515]: time="2025-04-30T12:42:24.905147485Z" level=info msg="RemoveContainer for \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\" returns successfully" Apr 30 12:42:24.906289 kubelet[2632]: I0430 12:42:24.905498 2632 scope.go:117] "RemoveContainer" containerID="12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1" Apr 30 12:42:24.905663 systemd-logind[1493]: New session 27 of user core. Apr 30 12:42:24.906863 containerd[1515]: time="2025-04-30T12:42:24.906763256Z" level=info msg="RemoveContainer for \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\"" Apr 30 12:42:24.915732 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 30 12:42:25.063694 containerd[1515]: time="2025-04-30T12:42:25.063633091Z" level=info msg="RemoveContainer for \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\" returns successfully"
Apr 30 12:42:25.064018 kubelet[2632]: I0430 12:42:25.063970 2632 scope.go:117] "RemoveContainer" containerID="500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605"
Apr 30 12:42:25.065146 containerd[1515]: time="2025-04-30T12:42:25.065100229Z" level=info msg="RemoveContainer for \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\""
Apr 30 12:42:25.261799 containerd[1515]: time="2025-04-30T12:42:25.261516114Z" level=info msg="RemoveContainer for \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\" returns successfully"
Apr 30 12:42:25.262043 kubelet[2632]: I0430 12:42:25.261956 2632 scope.go:117] "RemoveContainer" containerID="7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f"
Apr 30 12:42:25.262426 containerd[1515]: time="2025-04-30T12:42:25.262348270Z" level=error msg="ContainerStatus for \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\": not found"
Apr 30 12:42:25.262554 kubelet[2632]: E0430 12:42:25.262533 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\": not found" containerID="7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f"
Apr 30 12:42:25.262634 kubelet[2632]: I0430 12:42:25.262561 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f"} err="failed to get container status \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7aa4ade4ec0409ea75b657b2f46fde8cd884681cc25d53d597921dff1e07165f\": not found"
Apr 30 12:42:25.262634 kubelet[2632]: I0430 12:42:25.262584 2632 scope.go:117] "RemoveContainer" containerID="3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7"
Apr 30 12:42:25.262968 containerd[1515]: time="2025-04-30T12:42:25.262903531Z" level=error msg="ContainerStatus for \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\": not found"
Apr 30 12:42:25.263234 kubelet[2632]: E0430 12:42:25.263192 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\": not found" containerID="3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7"
Apr 30 12:42:25.263234 kubelet[2632]: I0430 12:42:25.263229 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7"} err="failed to get container status \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dd881693e989ce08370c6a449285d111571ee7b3da1f60583fe92d2cccfb5d7\": not found"
Apr 30 12:42:25.263504 kubelet[2632]: I0430 12:42:25.263250 2632 scope.go:117] "RemoveContainer" containerID="b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc"
Apr 30 12:42:25.263570 containerd[1515]: time="2025-04-30T12:42:25.263456969Z" level=error msg="ContainerStatus for \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\": not found"
Apr 30 12:42:25.263690 kubelet[2632]: E0430 12:42:25.263642 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\": not found" containerID="b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc"
Apr 30 12:42:25.263690 kubelet[2632]: I0430 12:42:25.263667 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc"} err="failed to get container status \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"b65ade2fd641c1e40d259a27966597f2aadc54db20fb199f07de0a07c44995dc\": not found"
Apr 30 12:42:25.263690 kubelet[2632]: I0430 12:42:25.263683 2632 scope.go:117] "RemoveContainer" containerID="12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1"
Apr 30 12:42:25.263886 containerd[1515]: time="2025-04-30T12:42:25.263851326Z" level=error msg="ContainerStatus for \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\": not found"
Apr 30 12:42:25.264016 kubelet[2632]: E0430 12:42:25.263973 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\": not found" containerID="12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1"
Apr 30 12:42:25.264068 kubelet[2632]: I0430 12:42:25.264023 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1"} err="failed to get container status \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"12bb25c018c3a60ca5fe6e55a5f561b983bc424b1ffe738db7bd9c6f6d6118d1\": not found"
Apr 30 12:42:25.264068 kubelet[2632]: I0430 12:42:25.264042 2632 scope.go:117] "RemoveContainer" containerID="500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605"
Apr 30 12:42:25.264300 containerd[1515]: time="2025-04-30T12:42:25.264254249Z" level=error msg="ContainerStatus for \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\": not found"
Apr 30 12:42:25.264569 kubelet[2632]: E0430 12:42:25.264527 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\": not found" containerID="500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605"
Apr 30 12:42:25.264661 kubelet[2632]: I0430 12:42:25.264583 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605"} err="failed to get container status \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\": rpc error: code = NotFound desc = an error occurred when try to find container \"500726efecc58447363d278e08280b14707d425586c6721336c77278359bf605\": not found"
Apr 30 12:42:25.712493 kubelet[2632]: E0430 12:42:25.712439 2632 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 12:42:26.648862 kubelet[2632]: I0430 12:42:26.648799 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b095b7-0ddb-4743-8e69-fe17232195cb" path="/var/lib/kubelet/pods/c6b095b7-0ddb-4743-8e69-fe17232195cb/volumes"
Apr 30 12:42:26.649843 kubelet[2632]: I0430 12:42:26.649810 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d94bc2e0-5d54-4c8b-a857-24186da688cf" path="/var/lib/kubelet/pods/d94bc2e0-5d54-4c8b-a857-24186da688cf/volumes"
Apr 30 12:42:26.842412 sshd[4507]: Connection closed by 10.0.0.1 port 50582
Apr 30 12:42:26.844722 sshd-session[4504]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:26.856748 systemd[1]: sshd@26-10.0.0.14:22-10.0.0.1:50582.service: Deactivated successfully.
Apr 30 12:42:26.859780 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 12:42:26.861161 systemd-logind[1493]: Session 27 logged out. Waiting for processes to exit.
Apr 30 12:42:26.872957 systemd[1]: Started sshd@27-10.0.0.14:22-10.0.0.1:43654.service - OpenSSH per-connection server daemon (10.0.0.1:43654).
Apr 30 12:42:26.876071 systemd-logind[1493]: Removed session 27.
Apr 30 12:42:26.882288 kubelet[2632]: I0430 12:42:26.878514 2632 memory_manager.go:355] "RemoveStaleState removing state" podUID="c6b095b7-0ddb-4743-8e69-fe17232195cb" containerName="cilium-agent"
Apr 30 12:42:26.882288 kubelet[2632]: I0430 12:42:26.878551 2632 memory_manager.go:355] "RemoveStaleState removing state" podUID="d94bc2e0-5d54-4c8b-a857-24186da688cf" containerName="cilium-operator"
Apr 30 12:42:26.900757 systemd[1]: Created slice kubepods-burstable-pod1ad620aa_5c86_43a0_b0cd_dd73cd90501a.slice - libcontainer container kubepods-burstable-pod1ad620aa_5c86_43a0_b0cd_dd73cd90501a.slice.
Apr 30 12:42:26.922117 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 43654 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:26.925490 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:26.932259 systemd-logind[1493]: New session 28 of user core.
Apr 30 12:42:26.942765 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 12:42:26.986079 kubelet[2632]: I0430 12:42:26.986015 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-bpf-maps\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986079 kubelet[2632]: I0430 12:42:26.986078 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-cilium-ipsec-secrets\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986079 kubelet[2632]: I0430 12:42:26.986109 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-host-proc-sys-net\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986381 kubelet[2632]: I0430 12:42:26.986157 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-cilium-run\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986381 kubelet[2632]: I0430 12:42:26.986180 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-etc-cni-netd\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986381 kubelet[2632]: I0430 12:42:26.986203 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-cni-path\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986381 kubelet[2632]: I0430 12:42:26.986223 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-lib-modules\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986381 kubelet[2632]: I0430 12:42:26.986241 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-xtables-lock\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986381 kubelet[2632]: I0430 12:42:26.986260 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-hubble-tls\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986628 kubelet[2632]: I0430 12:42:26.986279 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-hostproc\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986628 kubelet[2632]: I0430 12:42:26.986300 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx94f\" (UniqueName: \"kubernetes.io/projected/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-kube-api-access-cx94f\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986628 kubelet[2632]: I0430 12:42:26.986348 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-cilium-config-path\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986628 kubelet[2632]: I0430 12:42:26.986417 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-cilium-cgroup\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986628 kubelet[2632]: I0430 12:42:26.986443 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-clustermesh-secrets\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.986770 kubelet[2632]: I0430 12:42:26.986469 2632 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1ad620aa-5c86-43a0-b0cd-dd73cd90501a-host-proc-sys-kernel\") pod \"cilium-lkv8m\" (UID: \"1ad620aa-5c86-43a0-b0cd-dd73cd90501a\") " pod="kube-system/cilium-lkv8m"
Apr 30 12:42:26.996178 sshd[4522]: Connection closed by 10.0.0.1 port 43654
Apr 30 12:42:26.996736 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Apr 30 12:42:27.009876 systemd[1]: sshd@27-10.0.0.14:22-10.0.0.1:43654.service: Deactivated successfully.
Apr 30 12:42:27.012144 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 12:42:27.013829 systemd-logind[1493]: Session 28 logged out. Waiting for processes to exit.
Apr 30 12:42:27.018684 systemd[1]: Started sshd@28-10.0.0.14:22-10.0.0.1:43664.service - OpenSSH per-connection server daemon (10.0.0.1:43664).
Apr 30 12:42:27.019770 systemd-logind[1493]: Removed session 28.
Apr 30 12:42:27.061292 sshd[4530]: Accepted publickey for core from 10.0.0.1 port 43664 ssh2: RSA SHA256:x8d+Yt6Vge8+9/q7h4nFVC2td0QqN6pzDi7tnTJkYGE
Apr 30 12:42:27.062997 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:42:27.069602 systemd-logind[1493]: New session 29 of user core.
Apr 30 12:42:27.085025 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 30 12:42:27.505348 kubelet[2632]: E0430 12:42:27.505263 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:27.506100 containerd[1515]: time="2025-04-30T12:42:27.506055389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkv8m,Uid:1ad620aa-5c86-43a0-b0cd-dd73cd90501a,Namespace:kube-system,Attempt:0,}"
Apr 30 12:42:27.645254 kubelet[2632]: E0430 12:42:27.645161 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qdw88" podUID="9cfa2ac7-63a5-47ba-8bae-9d518db31439"
Apr 30 12:42:27.838230 containerd[1515]: time="2025-04-30T12:42:27.837721767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 12:42:27.838230 containerd[1515]: time="2025-04-30T12:42:27.837843176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 12:42:27.838230 containerd[1515]: time="2025-04-30T12:42:27.837865739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:42:27.838230 containerd[1515]: time="2025-04-30T12:42:27.838037003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:42:27.867617 systemd[1]: Started cri-containerd-3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5.scope - libcontainer container 3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5.
Apr 30 12:42:27.895728 containerd[1515]: time="2025-04-30T12:42:27.895680735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lkv8m,Uid:1ad620aa-5c86-43a0-b0cd-dd73cd90501a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\""
Apr 30 12:42:27.896672 kubelet[2632]: E0430 12:42:27.896641 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:27.898729 containerd[1515]: time="2025-04-30T12:42:27.898696613Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 12:42:28.598183 containerd[1515]: time="2025-04-30T12:42:28.598092544Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96\""
Apr 30 12:42:28.599140 containerd[1515]: time="2025-04-30T12:42:28.598843524Z" level=info msg="StartContainer for \"ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96\""
Apr 30 12:42:28.631586 systemd[1]: Started cri-containerd-ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96.scope - libcontainer container ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96.
Apr 30 12:42:28.684658 systemd[1]: cri-containerd-ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96.scope: Deactivated successfully.
Apr 30 12:42:28.709175 containerd[1515]: time="2025-04-30T12:42:28.709068318Z" level=info msg="StartContainer for \"ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96\" returns successfully"
Apr 30 12:42:28.781760 containerd[1515]: time="2025-04-30T12:42:28.781671441Z" level=info msg="shim disconnected" id=ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96 namespace=k8s.io
Apr 30 12:42:28.781760 containerd[1515]: time="2025-04-30T12:42:28.781744940Z" level=warning msg="cleaning up after shim disconnected" id=ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96 namespace=k8s.io
Apr 30 12:42:28.781760 containerd[1515]: time="2025-04-30T12:42:28.781756692Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:29.093363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad8d2f574dced9f761bbce68a0839bbe20daa52bf6ae45d2911c59b02b84bf96-rootfs.mount: Deactivated successfully.
Apr 30 12:42:29.456733 kubelet[2632]: E0430 12:42:29.456683 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:29.459036 containerd[1515]: time="2025-04-30T12:42:29.458974928Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 12:42:29.645842 kubelet[2632]: E0430 12:42:29.645761 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qdw88" podUID="9cfa2ac7-63a5-47ba-8bae-9d518db31439"
Apr 30 12:42:29.857159 containerd[1515]: time="2025-04-30T12:42:29.856970426Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a\""
Apr 30 12:42:29.858260 containerd[1515]: time="2025-04-30T12:42:29.857883053Z" level=info msg="StartContainer for \"8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a\""
Apr 30 12:42:29.905809 systemd[1]: Started cri-containerd-8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a.scope - libcontainer container 8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a.
Apr 30 12:42:29.954123 containerd[1515]: time="2025-04-30T12:42:29.954026377Z" level=info msg="StartContainer for \"8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a\" returns successfully"
Apr 30 12:42:29.960100 systemd[1]: cri-containerd-8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a.scope: Deactivated successfully.
Apr 30 12:42:30.004333 containerd[1515]: time="2025-04-30T12:42:30.004231283Z" level=info msg="shim disconnected" id=8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a namespace=k8s.io
Apr 30 12:42:30.004333 containerd[1515]: time="2025-04-30T12:42:30.004307948Z" level=warning msg="cleaning up after shim disconnected" id=8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a namespace=k8s.io
Apr 30 12:42:30.004706 containerd[1515]: time="2025-04-30T12:42:30.004321063Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:30.023674 containerd[1515]: time="2025-04-30T12:42:30.023592983Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:42:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 12:42:30.094053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8da7be48da08ad4bcdf53ecffb42958f966d2fcddb7f189f14b5aa120f6d0c7a-rootfs.mount: Deactivated successfully.
Apr 30 12:42:30.461032 kubelet[2632]: E0430 12:42:30.460995 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:30.462883 containerd[1515]: time="2025-04-30T12:42:30.462833188Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 12:42:30.714109 kubelet[2632]: E0430 12:42:30.713922 2632 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 12:42:31.425193 containerd[1515]: time="2025-04-30T12:42:31.425024614Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642\""
Apr 30 12:42:31.425943 containerd[1515]: time="2025-04-30T12:42:31.425778030Z" level=info msg="StartContainer for \"aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642\""
Apr 30 12:42:31.464766 systemd[1]: Started cri-containerd-aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642.scope - libcontainer container aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642.
Apr 30 12:42:31.504880 systemd[1]: cri-containerd-aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642.scope: Deactivated successfully.
Apr 30 12:42:31.645972 kubelet[2632]: E0430 12:42:31.645872 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qdw88" podUID="9cfa2ac7-63a5-47ba-8bae-9d518db31439"
Apr 30 12:42:31.740942 containerd[1515]: time="2025-04-30T12:42:31.740762741Z" level=info msg="StartContainer for \"aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642\" returns successfully"
Apr 30 12:42:31.763793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642-rootfs.mount: Deactivated successfully.
Apr 30 12:42:31.965574 containerd[1515]: time="2025-04-30T12:42:31.965452662Z" level=info msg="shim disconnected" id=aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642 namespace=k8s.io
Apr 30 12:42:31.965574 containerd[1515]: time="2025-04-30T12:42:31.965523156Z" level=warning msg="cleaning up after shim disconnected" id=aa4b1898182d4fdb28923b044a36a63c657a314e6e8b073e0756ba6e027a2642 namespace=k8s.io
Apr 30 12:42:31.965574 containerd[1515]: time="2025-04-30T12:42:31.965534267Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:32.473712 kubelet[2632]: E0430 12:42:32.473650 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:32.476577 containerd[1515]: time="2025-04-30T12:42:32.476516716Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 12:42:32.520140 containerd[1515]: time="2025-04-30T12:42:32.520042587Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758\""
Apr 30 12:42:32.521099 containerd[1515]: time="2025-04-30T12:42:32.521011619Z" level=info msg="StartContainer for \"c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758\""
Apr 30 12:42:32.564241 systemd[1]: Started cri-containerd-c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758.scope - libcontainer container c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758.
Apr 30 12:42:32.606007 systemd[1]: cri-containerd-c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758.scope: Deactivated successfully.
Apr 30 12:42:32.677948 containerd[1515]: time="2025-04-30T12:42:32.677849387Z" level=info msg="StartContainer for \"c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758\" returns successfully"
Apr 30 12:42:32.716626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758-rootfs.mount: Deactivated successfully.
Apr 30 12:42:32.742179 containerd[1515]: time="2025-04-30T12:42:32.741955731Z" level=info msg="shim disconnected" id=c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758 namespace=k8s.io
Apr 30 12:42:32.742179 containerd[1515]: time="2025-04-30T12:42:32.742024501Z" level=warning msg="cleaning up after shim disconnected" id=c6a9a2c5c95c2a9a29c51d0d71ea1bb1f566919c30e4b6c2da7b6b3b3393a758 namespace=k8s.io
Apr 30 12:42:32.742179 containerd[1515]: time="2025-04-30T12:42:32.742034009Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:42:32.961899 kubelet[2632]: I0430 12:42:32.961809 2632 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:42:32Z","lastTransitionTime":"2025-04-30T12:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 12:42:33.478927 kubelet[2632]: E0430 12:42:33.478839 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:33.481045 containerd[1515]: time="2025-04-30T12:42:33.480963497Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 12:42:33.504538 containerd[1515]: time="2025-04-30T12:42:33.504430403Z" level=info msg="CreateContainer within sandbox \"3e48cf9325e7021ceb8b7d0434ac7afeeee45e063ff7763e08f25819145e9dc5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6a01372a6a63b44792112fb48c6bfb9883486a56e6bffc9d57f9d9505aace83\""
Apr 30 12:42:33.505957 containerd[1515]: time="2025-04-30T12:42:33.505090481Z" level=info msg="StartContainer for \"c6a01372a6a63b44792112fb48c6bfb9883486a56e6bffc9d57f9d9505aace83\""
Apr 30 12:42:33.541557 systemd[1]: run-containerd-runc-k8s.io-c6a01372a6a63b44792112fb48c6bfb9883486a56e6bffc9d57f9d9505aace83-runc.4IokWZ.mount: Deactivated successfully.
Apr 30 12:42:33.552753 systemd[1]: Started cri-containerd-c6a01372a6a63b44792112fb48c6bfb9883486a56e6bffc9d57f9d9505aace83.scope - libcontainer container c6a01372a6a63b44792112fb48c6bfb9883486a56e6bffc9d57f9d9505aace83.
Apr 30 12:42:33.604265 containerd[1515]: time="2025-04-30T12:42:33.604067879Z" level=info msg="StartContainer for \"c6a01372a6a63b44792112fb48c6bfb9883486a56e6bffc9d57f9d9505aace83\" returns successfully"
Apr 30 12:42:33.645920 kubelet[2632]: E0430 12:42:33.645341 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qdw88" podUID="9cfa2ac7-63a5-47ba-8bae-9d518db31439"
Apr 30 12:42:34.149511 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 12:42:34.490288 kubelet[2632]: E0430 12:42:34.490047 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:34.514732 kubelet[2632]: I0430 12:42:34.514626 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lkv8m" podStartSLOduration=8.51459079 podStartE2EDuration="8.51459079s" podCreationTimestamp="2025-04-30 12:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:42:34.514245666 +0000 UTC m=+113.960053026" watchObservedRunningTime="2025-04-30 12:42:34.51459079 +0000 UTC m=+113.960398140"
Apr 30 12:42:35.492191 kubelet[2632]: E0430 12:42:35.492132 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:35.645127 kubelet[2632]: E0430 12:42:35.645048 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-qdw88" podUID="9cfa2ac7-63a5-47ba-8bae-9d518db31439"
Apr 30 12:42:36.493661 kubelet[2632]: E0430 12:42:36.493615 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:36.646114 kubelet[2632]: E0430 12:42:36.646048 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:37.645379 kubelet[2632]: E0430 12:42:37.645334 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:38.200741 systemd-networkd[1430]: lxc_health: Link UP
Apr 30 12:42:38.201171 systemd-networkd[1430]: lxc_health: Gained carrier
Apr 30 12:42:39.507996 kubelet[2632]: E0430 12:42:39.507601 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:40.158571 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Apr 30 12:42:40.506177 kubelet[2632]: E0430 12:42:40.505887 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 12:42:40.629328 containerd[1515]: time="2025-04-30T12:42:40.629273817Z" level=info msg="StopPodSandbox for \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\""
Apr 30 12:42:40.629931 containerd[1515]: time="2025-04-30T12:42:40.629419442Z" level=info msg="TearDown network for sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" successfully"
Apr 30 12:42:40.629931 containerd[1515]: time="2025-04-30T12:42:40.629433378Z" level=info msg="StopPodSandbox for \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" returns successfully"
Apr 30 12:42:40.629931 containerd[1515]: time="2025-04-30T12:42:40.629883318Z" level=info msg="RemovePodSandbox for \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\""
Apr 30 12:42:40.630210 containerd[1515]: time="2025-04-30T12:42:40.629920038Z" level=info msg="Forcibly stopping sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\""
Apr 30 12:42:40.630336 containerd[1515]: time="2025-04-30T12:42:40.630231927Z" level=info msg="TearDown network for sandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" successfully"
Apr 30 12:42:40.780958 containerd[1515]: time="2025-04-30T12:42:40.780750816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 12:42:40.780958 containerd[1515]: time="2025-04-30T12:42:40.780841337Z" level=info msg="RemovePodSandbox \"cb74effdf294171246079620402ce13ef6ab9fec22cd66ca994e3613552f8c31\" returns successfully" Apr 30 12:42:40.781466 containerd[1515]: time="2025-04-30T12:42:40.781434558Z" level=info msg="StopPodSandbox for \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\"" Apr 30 12:42:40.781559 containerd[1515]: time="2025-04-30T12:42:40.781541160Z" level=info msg="TearDown network for sandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" successfully" Apr 30 12:42:40.781618 containerd[1515]: time="2025-04-30T12:42:40.781558753Z" level=info msg="StopPodSandbox for \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" returns successfully" Apr 30 12:42:40.782111 containerd[1515]: time="2025-04-30T12:42:40.782020655Z" level=info msg="RemovePodSandbox for \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\"" Apr 30 12:42:40.782111 containerd[1515]: time="2025-04-30T12:42:40.782053087Z" level=info msg="Forcibly stopping sandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\"" Apr 30 12:42:40.782296 containerd[1515]: time="2025-04-30T12:42:40.782125634Z" level=info msg="TearDown network for sandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" successfully" Apr 30 12:42:40.919225 containerd[1515]: time="2025-04-30T12:42:40.919127546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 12:42:40.919471 containerd[1515]: time="2025-04-30T12:42:40.919241191Z" level=info msg="RemovePodSandbox \"39c39864e4314d219634c2c3acc451451e86524f3ea3e6b7e8e9e04fc4e8a744\" returns successfully" Apr 30 12:42:41.508450 kubelet[2632]: E0430 12:42:41.507919 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 12:42:45.153306 sshd[4535]: Connection closed by 10.0.0.1 port 43664 Apr 30 12:42:45.184362 sshd-session[4530]: pam_unix(sshd:session): session closed for user core Apr 30 12:42:45.189486 systemd[1]: sshd@28-10.0.0.14:22-10.0.0.1:43664.service: Deactivated successfully. Apr 30 12:42:45.191961 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 12:42:45.192953 systemd-logind[1493]: Session 29 logged out. Waiting for processes to exit. Apr 30 12:42:45.194206 systemd-logind[1493]: Removed session 29.