Feb 13 15:34:28.879634 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 15:34:28.879654 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:34:28.879665 kernel: BIOS-provided physical RAM map: Feb 13 15:34:28.879671 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:34:28.879677 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 15:34:28.879683 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:34:28.879690 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:34:28.879697 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:34:28.879703 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:34:28.879709 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:34:28.879725 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Feb 13 15:34:28.879731 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:34:28.879737 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:34:28.879743 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:34:28.879751 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:34:28.879758 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:34:28.879767 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:34:28.879773 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:34:28.879780 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:34:28.879885 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:34:28.879892 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:34:28.879899 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:34:28.879906 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:34:28.879913 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:34:28.879919 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:34:28.879926 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:34:28.879933 kernel: NX (Execute Disable) protection: active Feb 13 15:34:28.879942 kernel: APIC: Static calls initialized Feb 13 15:34:28.879949 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:34:28.879956 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:34:28.879963 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:34:28.879969 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:34:28.879976 kernel: extended physical RAM map: Feb 13 15:34:28.879983 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:34:28.879990 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Feb 13 15:34:28.879996 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:34:28.880003 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:34:28.880010 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:34:28.880019 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:34:28.880026 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:34:28.880036 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Feb 13 15:34:28.880043 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Feb 13 15:34:28.880050 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Feb 13 15:34:28.880057 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Feb 13 15:34:28.880064 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Feb 13 15:34:28.880073 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:34:28.880081 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:34:28.880088 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:34:28.880095 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:34:28.880102 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:34:28.880109 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:34:28.880116 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:34:28.880123 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:34:28.880130 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:34:28.880140 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:34:28.880147 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:34:28.880154 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:34:28.880161 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:34:28.880168 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:34:28.880175 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:34:28.880182 kernel: efi: EFI v2.7 by EDK II Feb 13 15:34:28.880189 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Feb 13 15:34:28.880197 kernel: random: crng init done Feb 13 15:34:28.880204 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Feb 13 15:34:28.880211 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Feb 13 15:34:28.880220 kernel: secureboot: Secure boot disabled Feb 13 15:34:28.880227 kernel: SMBIOS 2.8 present. 
Feb 13 15:34:28.880234 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Feb 13 15:34:28.880242 kernel: Hypervisor detected: KVM Feb 13 15:34:28.880249 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:34:28.880256 kernel: kvm-clock: using sched offset of 2580222061 cycles Feb 13 15:34:28.880263 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:34:28.880271 kernel: tsc: Detected 2794.748 MHz processor Feb 13 15:34:28.880279 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:34:28.880286 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:34:28.880293 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Feb 13 15:34:28.880303 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:34:28.880310 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:34:28.880317 kernel: Using GB pages for direct mapping Feb 13 15:34:28.880325 kernel: ACPI: Early table checksum verification disabled Feb 13 15:34:28.880332 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 15:34:28.880339 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:34:28.880347 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:34:28.880354 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:34:28.880361 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 15:34:28.880371 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:34:28.880378 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:34:28.880386 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:34:28.880393 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:34:28.880400 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:34:28.880407 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 15:34:28.880415 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 15:34:28.880422 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 15:34:28.880429 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 15:34:28.880438 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 15:34:28.880446 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 15:34:28.880453 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 15:34:28.880460 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 15:34:28.880467 kernel: No NUMA configuration found Feb 13 15:34:28.880474 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Feb 13 15:34:28.880481 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Feb 13 15:34:28.880489 kernel: Zone ranges: Feb 13 15:34:28.880496 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:34:28.880506 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Feb 13 15:34:28.880513 kernel: Normal empty Feb 13 15:34:28.880520 kernel: Movable zone start for each node Feb 13 15:34:28.880527 kernel: Early memory node ranges Feb 13 15:34:28.880535 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Feb 13 15:34:28.880542 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 15:34:28.880549 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 15:34:28.880556 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Feb 13 15:34:28.880563 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Feb 13 15:34:28.880573 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Feb 13 15:34:28.880580 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Feb 13 15:34:28.880587 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Feb 13 15:34:28.880594 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Feb 13 15:34:28.880601 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:34:28.880609 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:34:28.880623 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 15:34:28.880633 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:34:28.880641 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Feb 13 15:34:28.880648 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Feb 13 15:34:28.880656 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:34:28.880663 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Feb 13 15:34:28.880673 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Feb 13 15:34:28.880680 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:34:28.880688 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:34:28.880696 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:34:28.880703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:34:28.880719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:34:28.880727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:34:28.880735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:34:28.880742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:34:28.880750 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:34:28.880757 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:34:28.880765 kernel: TSC deadline timer available Feb 13 15:34:28.880773 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:34:28.880780 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:34:28.880810 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:34:28.880818 kernel: kvm-guest: setup PV sched yield Feb 13 15:34:28.880825 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Feb 13 15:34:28.880833 kernel: Booting paravirtualized kernel on KVM Feb 13 15:34:28.880841 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:34:28.880849 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:34:28.880856 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:34:28.880864 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:34:28.880871 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:34:28.880881 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:34:28.880889 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:34:28.880897 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:34:28.880905 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:34:28.880913 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:34:28.880920 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:34:28.880928 kernel: Fallback order for Node 0: 0 Feb 13 15:34:28.880935 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Feb 13 15:34:28.880943 kernel: Policy zone: DMA32 Feb 13 15:34:28.880952 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:34:28.880960 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved) Feb 13 15:34:28.880968 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:34:28.880975 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 15:34:28.880983 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:34:28.880990 kernel: Dynamic Preempt: voluntary Feb 13 15:34:28.880998 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:34:28.881006 kernel: rcu: RCU event tracing is enabled. Feb 13 15:34:28.881013 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:34:28.881024 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:34:28.881031 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:34:28.881039 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:34:28.881046 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:34:28.881054 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:34:28.881061 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:34:28.881069 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:34:28.881076 kernel: Console: colour dummy device 80x25 Feb 13 15:34:28.881083 kernel: printk: console [ttyS0] enabled Feb 13 15:34:28.881093 kernel: ACPI: Core revision 20230628 Feb 13 15:34:28.881101 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:34:28.881108 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:34:28.881116 kernel: x2apic enabled Feb 13 15:34:28.881123 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:34:28.881131 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:34:28.881138 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:34:28.881146 kernel: kvm-guest: setup PV IPIs Feb 13 15:34:28.881153 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:34:28.881163 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:34:28.881170 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 15:34:28.881178 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:34:28.881185 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:34:28.881192 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:34:28.881200 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:34:28.881207 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:34:28.881215 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:34:28.881222 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:34:28.881232 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:34:28.881239 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:34:28.881247 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:34:28.881254 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:34:28.881262 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:34:28.881270 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:34:28.881277 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:34:28.881285 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:34:28.881294 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:34:28.881312 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:34:28.881320 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:34:28.881328 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 15:34:28.881343 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:34:28.881359 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:34:28.881375 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:34:28.881390 kernel: landlock: Up and running. Feb 13 15:34:28.881398 kernel: SELinux: Initializing. Feb 13 15:34:28.881423 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:34:28.881445 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:34:28.881460 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:34:28.881467 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:34:28.881488 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:34:28.881497 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:34:28.881504 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:34:28.881512 kernel: ... version: 0 Feb 13 15:34:28.881519 kernel: ... bit width: 48 Feb 13 15:34:28.881529 kernel: ... generic registers: 6 Feb 13 15:34:28.881536 kernel: ... value mask: 0000ffffffffffff Feb 13 15:34:28.881544 kernel: ... max period: 00007fffffffffff Feb 13 15:34:28.881551 kernel: ... fixed-purpose events: 0 Feb 13 15:34:28.881559 kernel: ... 
event mask: 000000000000003f Feb 13 15:34:28.881566 kernel: signal: max sigframe size: 1776 Feb 13 15:34:28.881573 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:34:28.881581 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:34:28.881588 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:34:28.881598 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:34:28.881605 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 15:34:28.881612 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:34:28.881620 kernel: smpboot: Max logical packages: 1 Feb 13 15:34:28.881627 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 15:34:28.881635 kernel: devtmpfs: initialized Feb 13 15:34:28.881642 kernel: x86/mm: Memory block size: 128MB Feb 13 15:34:28.881649 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 15:34:28.881657 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 15:34:28.881667 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Feb 13 15:34:28.881675 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 15:34:28.881682 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Feb 13 15:34:28.881690 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 15:34:28.881697 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:34:28.881705 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:34:28.881712 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:34:28.881728 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:34:28.881735 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:34:28.881745 kernel: audit: type=2000 audit(1739460869.479:1): state=initialized audit_enabled=0 res=1 Feb 13 15:34:28.881752 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:34:28.881760 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:34:28.881767 kernel: cpuidle: using governor menu Feb 13 15:34:28.881775 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:34:28.881782 kernel: dca service started, version 1.12.1 Feb 13 15:34:28.881807 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 15:34:28.881814 kernel: PCI: Using configuration type 1 for base access Feb 13 15:34:28.881822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:34:28.881832 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:34:28.881840 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:34:28.881865 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:34:28.881872 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:34:28.881880 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:34:28.881887 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:34:28.881894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:34:28.881902 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:34:28.881909 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:34:28.881919 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:34:28.881927 kernel: ACPI: Interpreter enabled Feb 13 15:34:28.881934 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:34:28.881942 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:34:28.881949 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:34:28.881957 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:34:28.881964 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:34:28.881972 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:34:28.882168 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:34:28.882305 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:34:28.882429 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:34:28.882440 kernel: PCI host bridge to bus 0000:00 Feb 13 15:34:28.882565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:34:28.882683 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:34:28.882816 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:34:28.882931 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Feb 13 15:34:28.883086 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Feb 13 15:34:28.883196 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:34:28.883307 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:34:28.883442 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:34:28.883579 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:34:28.883704 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 15:34:28.883852 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 15:34:28.883972 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 15:34:28.884089 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 15:34:28.884249 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:34:28.884506 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:34:28.884658 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 15:34:28.886103 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 15:34:28.886246 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Feb 13 15:34:28.886383 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:34:28.886505 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 
13 15:34:28.886624 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 15:34:28.886753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Feb 13 15:34:28.886897 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:34:28.887023 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 15:34:28.887142 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 15:34:28.887262 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Feb 13 15:34:28.887380 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 15:34:28.887512 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:34:28.887632 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:34:28.887770 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:34:28.887969 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 15:34:28.888101 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 15:34:28.888228 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:34:28.888349 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 15:34:28.888359 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:34:28.888367 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:34:28.888375 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:34:28.888387 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:34:28.888395 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:34:28.888402 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:34:28.888410 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:34:28.888418 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:34:28.888426 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:34:28.888433 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:34:28.888441 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:34:28.888449 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:34:28.888459 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:34:28.888499 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:34:28.888506 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:34:28.888514 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 15:34:28.888521 kernel: iommu: Default domain type: Translated Feb 13 15:34:28.888528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:34:28.888536 kernel: efivars: Registered efivars operations Feb 13 15:34:28.888543 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:34:28.888551 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:34:28.888561 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 15:34:28.888568 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Feb 13 15:34:28.888575 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Feb 13 15:34:28.888582 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Feb 13 15:34:28.888590 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Feb 13 15:34:28.888597 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Feb 13 15:34:28.888605 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Feb 13 15:34:28.888612 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Feb 13 15:34:28.888746 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:34:28.888936 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:34:28.889055 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:34:28.889066 kernel: vgaarb: loaded Feb 13 15:34:28.889074 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:34:28.889081 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:34:28.889089 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:34:28.889096 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:34:28.889104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:34:28.889115 kernel: pnp: PnP ACPI init Feb 13 15:34:28.889241 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Feb 13 15:34:28.889251 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:34:28.889259 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:34:28.889268 kernel: NET: Registered PF_INET protocol family Feb 13 15:34:28.889294 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:34:28.889304 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:34:28.889312 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:34:28.889322 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:34:28.889330 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:34:28.889338 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:34:28.889346 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:34:28.889353 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:34:28.889361 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:34:28.889369 kernel: NET: Registered PF_XDP protocol family Feb 13 15:34:28.889490 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 15:34:28.889611 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 15:34:28.889734 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:34:28.889888 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:34:28.890000 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:34:28.890109 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Feb 13 15:34:28.890219 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Feb 13 15:34:28.890328 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:34:28.890338 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:34:28.890346 kernel: Initialise system trusted keyrings Feb 13 15:34:28.890358 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:34:28.890366 kernel: Key type asymmetric registered Feb 13 15:34:28.890374 kernel: Asymmetric key parser 'x509' registered Feb 13 15:34:28.890382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:34:28.890390 kernel: io scheduler mq-deadline registered Feb 13 15:34:28.890397 kernel: io scheduler kyber registered Feb 13 15:34:28.890405 kernel: io scheduler bfq registered Feb 13 
15:34:28.890413 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:34:28.890422 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:34:28.890433 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:34:28.890443 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:34:28.890451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:34:28.890459 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:34:28.890467 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:34:28.890476 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:34:28.890486 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:34:28.890615 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:34:28.890626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:34:28.890748 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 15:34:28.890930 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:34:28 UTC (1739460868) Feb 13 15:34:28.891044 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 13 15:34:28.891055 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:34:28.891066 kernel: efifb: probing for efifb Feb 13 15:34:28.891074 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 13 15:34:28.891082 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 13 15:34:28.891090 kernel: efifb: scrolling: redraw Feb 13 15:34:28.891098 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:34:28.891106 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:34:28.891114 kernel: fb0: EFI VGA frame buffer device Feb 13 15:34:28.891124 kernel: pstore: Using crash dump compression: deflate Feb 13 15:34:28.891132 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:34:28.891140 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:34:28.891149 kernel: Segment Routing with IPv6 Feb 13 15:34:28.891157 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:34:28.891165 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:34:28.891173 kernel: Key type dns_resolver registered Feb 13 15:34:28.891180 kernel: IPI shorthand broadcast: enabled Feb 13 15:34:28.891188 kernel: sched_clock: Marking stable (622003122, 150544778)->(785043447, -12495547) Feb 13 15:34:28.891196 kernel: registered taskstats version 1 Feb 13 15:34:28.891204 kernel: Loading compiled-in X.509 certificates Feb 13 15:34:28.891229 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 15:34:28.891240 kernel: Key type .fscrypt registered Feb 13 15:34:28.891248 kernel: Key type fscrypt-provisioning registered Feb 13 15:34:28.891256 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:34:28.891264 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:34:28.891271 kernel: ima: No architecture policies found Feb 13 15:34:28.891279 kernel: clk: Disabling unused clocks Feb 13 15:34:28.891287 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:34:28.891295 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:34:28.891303 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:34:28.891313 kernel: Run /init as init process Feb 13 15:34:28.891321 kernel: with arguments: Feb 13 15:34:28.891329 kernel: /init Feb 13 15:34:28.891336 kernel: with environment: Feb 13 15:34:28.891344 kernel: HOME=/ Feb 13 15:34:28.891352 kernel: TERM=linux Feb 13 15:34:28.891360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:34:28.891370 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:34:28.891382 systemd[1]: Detected virtualization kvm. Feb 13 15:34:28.891391 systemd[1]: Detected architecture x86-64. Feb 13 15:34:28.891399 systemd[1]: Running in initrd. Feb 13 15:34:28.891408 systemd[1]: No hostname configured, using default hostname. Feb 13 15:34:28.891416 systemd[1]: Hostname set to . Feb 13 15:34:28.891424 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:34:28.891433 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:34:28.891441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:34:28.891452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:34:28.891461 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:34:28.891470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:34:28.891478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:34:28.891487 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:34:28.891497 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:34:28.891508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:34:28.891516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:34:28.891525 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:34:28.891533 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:34:28.891542 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:34:28.891550 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:34:28.891558 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:34:28.891567 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:34:28.891575 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:34:28.891586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:34:28.891594 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 15:34:28.891603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:34:28.891611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:34:28.891620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:34:28.891628 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:34:28.891637 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:34:28.891645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:34:28.891656 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:34:28.891664 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:34:28.891672 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:34:28.891681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:34:28.891689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:28.891698 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:34:28.891706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:34:28.891723 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:34:28.891752 systemd-journald[193]: Collecting audit messages is disabled. Feb 13 15:34:28.891773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:34:28.891782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:28.891802 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:34:28.891810 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:34:28.891818 systemd-journald[193]: Journal started Feb 13 15:34:28.891836 systemd-journald[193]: Runtime Journal (/run/log/journal/4fe8ff5a9db549b8b92ddc99900d4b1c) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:34:28.884657 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 15:34:28.896418 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:34:28.904978 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:34:28.908017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:34:28.910141 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:34:28.915514 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:34:28.919585 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:34:28.919608 kernel: Bridge firewalling registered Feb 13 15:34:28.918484 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 15:34:28.919528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:34:28.926932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:34:28.928154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:34:28.930370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:34:28.936639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:34:28.944796 dracut-cmdline[222]: dracut-dracut-053 Feb 13 15:34:28.949264 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:34:28.947980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:34:28.981415 systemd-resolved[237]: Positive Trust Anchors: Feb 13 15:34:28.981430 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:34:28.981461 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:34:28.992049 systemd-resolved[237]: Defaulting to hostname 'linux'. Feb 13 15:34:28.993881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:34:28.994006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:34:29.035818 kernel: SCSI subsystem initialized Feb 13 15:34:29.045811 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:34:29.056812 kernel: iscsi: registered transport (tcp) Feb 13 15:34:29.077822 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:34:29.077879 kernel: QLogic iSCSI HBA Driver Feb 13 15:34:29.126253 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:34:29.132982 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:34:29.156817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:34:29.156862 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:34:29.157812 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:34:29.198813 kernel: raid6: avx2x4 gen() 30232 MB/s Feb 13 15:34:29.215808 kernel: raid6: avx2x2 gen() 31170 MB/s Feb 13 15:34:29.232878 kernel: raid6: avx2x1 gen() 25836 MB/s Feb 13 15:34:29.232897 kernel: raid6: using algorithm avx2x2 gen() 31170 MB/s Feb 13 15:34:29.250889 kernel: raid6: .... xor() 19905 MB/s, rmw enabled Feb 13 15:34:29.250929 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:34:29.270850 kernel: xor: automatically using best checksumming function avx Feb 13 15:34:29.424827 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:34:29.437776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:34:29.453926 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:34:29.466492 systemd-udevd[417]: Using default interface naming scheme 'v255'. Feb 13 15:34:29.471136 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:34:29.477934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:34:29.489567 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Feb 13 15:34:29.521638 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:34:29.532934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:34:29.596263 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:34:29.604938 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:34:29.615197 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:34:29.618585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:34:29.621199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:34:29.623706 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:34:29.627834 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:34:29.647055 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:34:29.647215 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:34:29.647228 kernel: GPT:9289727 != 19775487 Feb 13 15:34:29.647243 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:34:29.647254 kernel: GPT:9289727 != 19775487 Feb 13 15:34:29.647264 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:34:29.647274 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:34:29.647284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:34:29.632949 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:34:29.649227 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:34:29.652893 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:34:29.653871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:34:29.656922 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:34:29.661434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:34:29.661555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:29.664731 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:29.672810 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:34:29.672852 kernel: libata version 3.00 loaded. Feb 13 15:34:29.672871 kernel: AES CTR mode by8 optimization enabled Feb 13 15:34:29.674977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 15:34:29.687397 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (465) Feb 13 15:34:29.687423 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:34:29.710949 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:34:29.710972 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:34:29.711143 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:34:29.713802 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468) Feb 13 15:34:29.713815 kernel: scsi host0: ahci Feb 13 15:34:29.713986 kernel: scsi host1: ahci Feb 13 15:34:29.714135 kernel: scsi host2: ahci Feb 13 15:34:29.714284 kernel: scsi host3: ahci Feb 13 15:34:29.714429 kernel: scsi host4: ahci Feb 13 15:34:29.714572 kernel: scsi host5: ahci Feb 13 15:34:29.714728 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:34:29.714743 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:34:29.714754 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:34:29.714764 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:34:29.714774 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:34:29.714796 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:34:29.692337 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:34:29.697300 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:34:29.711117 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:34:29.712529 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:34:29.719182 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:34:29.729912 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:34:29.731096 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:34:29.731150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:29.733497 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:29.736383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:29.740652 disk-uuid[556]: Primary Header is updated. Feb 13 15:34:29.740652 disk-uuid[556]: Secondary Entries is updated. Feb 13 15:34:29.740652 disk-uuid[556]: Secondary Header is updated. Feb 13 15:34:29.744008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:34:29.747816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:34:29.754868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:29.766944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:34:29.794949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:34:30.018508 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:34:30.018573 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:34:30.018584 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:34:30.018595 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:34:30.019816 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:34:30.020814 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:34:30.021966 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:34:30.021977 kernel: ata3.00: applying bridge limits Feb 13 15:34:30.022804 kernel: ata3.00: configured for UDMA/100 Feb 13 15:34:30.023809 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:34:30.083816 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:34:30.097760 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:34:30.097779 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:34:30.749360 disk-uuid[558]: The operation has completed successfully. Feb 13 15:34:30.750877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:34:30.778021 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:34:30.778139 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:34:30.800968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:34:30.807009 sh[599]: Success Feb 13 15:34:30.819820 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:34:30.855416 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:34:30.873255 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:34:30.877471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:34:30.890200 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:34:30.890228 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:34:30.890245 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:34:30.892312 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:34:30.892326 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:34:30.897011 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:34:30.899258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:34:30.913914 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:34:30.916405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:34:30.924384 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:34:30.924417 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:34:30.924428 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:34:30.926809 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:34:30.935595 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:34:30.937340 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:34:30.947483 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 15:34:30.955962 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:34:31.086849 ignition[691]: Ignition 2.20.0 Feb 13 15:34:31.087597 ignition[691]: Stage: fetch-offline Feb 13 15:34:31.087670 ignition[691]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:31.087682 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:34:31.087846 ignition[691]: parsed url from cmdline: "" Feb 13 15:34:31.087851 ignition[691]: no config URL provided Feb 13 15:34:31.087856 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:34:31.087865 ignition[691]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:34:31.087899 ignition[691]: op(1): [started] loading QEMU firmware config module Feb 13 15:34:31.087905 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:34:31.097269 ignition[691]: op(1): [finished] loading QEMU firmware config module Feb 13 15:34:31.139113 ignition[691]: parsing config with SHA512: 01d1344858eb277d63f5eb7c0c276af1cce45fa79597dfe0a058b2880382aa260e91f87ea546e90d418be1fbcc31ffaa9d424c6ab3fd466d3802a15ea083bc56 Feb 13 15:34:31.147192 unknown[691]: fetched base config from "system" Feb 13 15:34:31.147210 unknown[691]: fetched user config from "qemu" Feb 13 15:34:31.147711 ignition[691]: fetch-offline: fetch-offline passed Feb 13 15:34:31.147745 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:34:31.148524 ignition[691]: Ignition finished successfully Feb 13 15:34:31.157943 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:34:31.158214 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:34:31.182090 systemd-networkd[787]: lo: Link UP Feb 13 15:34:31.182102 systemd-networkd[787]: lo: Gained carrier Feb 13 15:34:31.183669 systemd-networkd[787]: Enumeration completed Feb 13 15:34:31.183757 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:34:31.184076 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:31.184080 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:34:31.185074 systemd-networkd[787]: eth0: Link UP Feb 13 15:34:31.185078 systemd-networkd[787]: eth0: Gained carrier Feb 13 15:34:31.185084 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:31.186101 systemd[1]: Reached target network.target - Network. Feb 13 15:34:31.187851 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:34:31.197829 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:34:31.197908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:34:31.214125 ignition[790]: Ignition 2.20.0 Feb 13 15:34:31.214137 ignition[790]: Stage: kargs Feb 13 15:34:31.214283 ignition[790]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:31.214294 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:34:31.215160 ignition[790]: kargs: kargs passed Feb 13 15:34:31.215199 ignition[790]: Ignition finished successfully Feb 13 15:34:31.221749 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:34:31.233922 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:34:31.246461 ignition[800]: Ignition 2.20.0 Feb 13 15:34:31.246471 ignition[800]: Stage: disks Feb 13 15:34:31.246629 ignition[800]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:31.246641 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:34:31.250361 ignition[800]: disks: disks passed Feb 13 15:34:31.250410 ignition[800]: Ignition finished successfully Feb 13 15:34:31.253849 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:34:31.254126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:34:31.254458 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:34:31.254806 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:34:31.255128 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:34:31.255455 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:34:31.266989 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:34:31.279756 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:34:31.286430 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:34:31.298889 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:34:31.381814 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:34:31.382569 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:34:31.383195 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:34:31.394876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:34:31.396798 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:34:31.399024 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:34:31.399066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:34:31.405242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) Feb 13 15:34:31.399089 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:34:31.408848 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:34:31.408867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:34:31.408877 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:34:31.410797 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:34:31.412239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:34:31.418368 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Feb 13 15:34:31.419294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:34:31.454022 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:34:31.459122 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:34:31.463824 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:34:31.467611 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:34:31.553275 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:34:31.564879 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:34:31.569153 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:34:31.572840 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:34:31.591641 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:34:31.598482 ignition[932]: INFO : Ignition 2.20.0 Feb 13 15:34:31.598482 ignition[932]: INFO : Stage: mount Feb 13 15:34:31.600184 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:31.600184 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:34:31.600184 ignition[932]: INFO : mount: mount passed Feb 13 15:34:31.600184 ignition[932]: INFO : Ignition finished successfully Feb 13 15:34:31.605734 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:34:31.613923 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:34:31.889132 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:34:31.901952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:34:31.911478 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Feb 13 15:34:31.911557 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:34:31.911569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:34:31.912338 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:34:31.915805 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:34:31.917056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:34:32.006211 ignition[962]: INFO : Ignition 2.20.0 Feb 13 15:34:32.006211 ignition[962]: INFO : Stage: files Feb 13 15:34:32.008082 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:32.008082 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:34:32.008082 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:34:32.011511 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:34:32.011511 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:34:32.014422 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:34:32.014422 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:34:32.014422 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:34:32.014422 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:34:32.014422 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:34:32.012424 unknown[962]: wrote ssh authorized keys file for user: core Feb 13 15:34:32.052395 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:34:32.149112 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:34:32.149112 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:34:32.152973 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:34:32.323001 systemd-networkd[787]: eth0: Gained IPv6LL Feb 13 15:34:32.626508 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:34:32.710130 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:34:32.710130 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:34:32.714222 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Feb 13 15:34:33.079201 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:34:33.815533 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Feb 13 15:34:33.815533 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 15:34:33.819613 ignition[962]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:34:33.846002 ignition[962]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:34:33.851643 ignition[962]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:34:33.853480 ignition[962]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:34:33.853480 ignition[962]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:34:33.853480 ignition[962]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:34:33.853480 ignition[962]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:34:33.853480 
ignition[962]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:34:33.853480 ignition[962]: INFO : files: files passed Feb 13 15:34:33.853480 ignition[962]: INFO : Ignition finished successfully Feb 13 15:34:33.865314 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:34:33.876915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:34:33.878652 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:34:33.880513 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:34:33.880628 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:34:33.888982 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:34:33.891827 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:34:33.891827 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:34:33.895028 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:34:33.894478 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:34:33.896849 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:34:33.913009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:34:33.935654 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:34:33.935804 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:34:33.936939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:34:33.940054 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:34:33.941970 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:34:33.942692 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:34:33.960054 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:34:33.970930 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:34:33.979647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:34:33.979803 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:34:33.983083 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:34:33.984212 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:34:33.984318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:34:33.986243 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:34:33.986575 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:34:33.987084 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:34:33.987414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:34:33.987754 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:34:33.988257 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
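The files-stage entries above show what the provided config asked Ignition to do (fetch the helm and cilium tarballs, write the kubernetes sysext link and image, install and preset two units). The actual config is not visible in this log; the sketch below only illustrates the general shape of an Ignition v3-style config that would drive similar operations. The URLs and unit names are taken from the log, everything else (spec version, unit bodies, modes) is assumed.

    # Illustrative only: not the config this machine booted with.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                # Unit bodies are not visible in the log, so they are omitted here.
                {"name": "prepare-helm.service", "enabled": True},
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))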
Feb 13 15:34:33.988586 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:34:33.989106 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:34:33.989432 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:34:33.989771 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:34:33.990250 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:34:33.990358 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:34:34.009969 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:34:34.010104 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:34:34.013203 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:34:34.013300 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:34:34.014374 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:34:34.014480 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:34:34.016742 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:34:34.016867 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:34:34.017320 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:34:34.017568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:34:34.026851 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:34:34.027003 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:34:34.029544 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:34:34.031216 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:34:34.031306 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:34:34.032946 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:34:34.033031 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:34:34.034706 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:34:34.034833 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:34:34.036560 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:34:34.036709 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:34:34.050917 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:34:34.052510 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:34:34.052890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:34:34.053061 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:34:34.054687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:34:34.054802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:34:34.065016 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:34:34.065128 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 15:34:34.072517 ignition[1017]: INFO : Ignition 2.20.0 Feb 13 15:34:34.072517 ignition[1017]: INFO : Stage: umount Feb 13 15:34:34.074246 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:34:34.074246 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:34:34.074246 ignition[1017]: INFO : umount: umount passed Feb 13 15:34:34.074246 ignition[1017]: INFO : Ignition finished successfully Feb 13 15:34:34.075904 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:34:34.076044 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:34:34.077698 systemd[1]: Stopped target network.target - Network. Feb 13 15:34:34.079104 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:34:34.079170 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:34:34.081213 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:34:34.081262 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:34:34.083156 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:34:34.083204 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:34:34.085076 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:34:34.085123 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:34:34.087098 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:34:34.089053 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:34:34.090840 systemd-networkd[787]: eth0: DHCPv6 lease lost Feb 13 15:34:34.092105 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:34:34.092631 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:34:34.092755 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:34:34.094934 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:34:34.095014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:34:34.101891 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:34:34.103539 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:34:34.103615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:34:34.107955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:34:34.111364 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:34:34.111496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:34:34.123348 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:34:34.124448 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:34:34.127380 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:34:34.128412 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:34:34.131474 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:34:34.132536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:34:34.134643 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:34:34.134688 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:34:34.137695 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 15:34:34.138618 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:34:34.140843 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:34:34.140899 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:34:34.143871 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:34:34.144842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:34:34.160919 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:34:34.163164 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:34:34.164086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:34:34.166149 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:34:34.166197 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:34:34.168283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:34:34.169304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:34:34.172758 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:34:34.173391 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:34:34.176549 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:34:34.176631 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:34:34.177638 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:34:34.177686 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:34:34.179920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:34:34.179968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:34.182449 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:34:34.182558 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:34:34.299253 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:34:34.299412 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:34:34.301934 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:34:34.303297 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:34:34.303360 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:34:34.322942 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:34:34.329494 systemd[1]: Switching root. Feb 13 15:34:34.355236 systemd-journald[193]: Journal stopped Feb 13 15:34:35.544019 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:34:35.544095 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:34:35.544110 kernel: SELinux: policy capability open_perms=1 Feb 13 15:34:35.544121 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:34:35.544133 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:34:35.544144 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:34:35.544156 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:34:35.544170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:34:35.544181 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:34:35.544193 kernel: audit: type=1403 audit(1739460874.816:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:34:35.544205 systemd[1]: Successfully loaded SELinux policy in 42.438ms. Feb 13 15:34:35.544235 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.407ms. Feb 13 15:34:35.544251 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:34:35.544263 systemd[1]: Detected virtualization kvm. Feb 13 15:34:35.544276 systemd[1]: Detected architecture x86-64. Feb 13 15:34:35.544288 systemd[1]: Detected first boot. Feb 13 15:34:35.544302 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:34:35.544314 zram_generator::config[1062]: No configuration found. Feb 13 15:34:35.544328 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:34:35.544340 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:34:35.544352 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:34:35.544365 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:34:35.544377 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:34:35.544390 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:34:35.544404 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:34:35.544416 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:34:35.544428 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:34:35.544440 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:34:35.544452 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:34:35.544464 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:34:35.544478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:34:35.544490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:34:35.544502 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:34:35.544518 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:34:35.544530 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
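The entries above record the SELinux policy load and that KVM virtualization was detected on first boot. A small sketch for checking the same two facts from a shell on the booted system follows; it assumes systemd-detect-virt is on PATH and that selinuxfs is mounted at its usual location, /sys/fs/selinux.

    # Sketch: query detected virtualization and the SELinux enforcing state.
    import subprocess
    from pathlib import Path

    def detected_virt() -> str:
        # systemd-detect-virt exits non-zero when nothing is detected,
        # so do not use check=True here.
        out = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
        return out.stdout.strip() or "none"

    def selinux_enforcing() -> bool:
        enforce = Path("/sys/fs/selinux/enforce")
        return enforce.exists() and enforce.read_text().strip() == "1"

    if __name__ == "__main__":
        print("virtualization:", detected_virt())
        print("selinux enforcing:", selinux_enforcing())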
Feb 13 15:34:35.544542 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:34:35.544562 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:34:35.544574 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:34:35.544586 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:34:35.544599 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:34:35.544612 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:34:35.544627 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:34:35.544639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:34:35.544651 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:34:35.544663 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:34:35.544674 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:34:35.544686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:34:35.544698 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:34:35.544710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:34:35.544725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:34:35.544737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:34:35.544750 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:34:35.544762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:34:35.544774 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:34:35.544799 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:34:35.544811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:35.544823 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:34:35.544835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:34:35.544850 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:34:35.544863 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:34:35.544875 systemd[1]: Reached target machines.target - Containers. Feb 13 15:34:35.544887 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:34:35.544900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:34:35.544912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:34:35.544924 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:34:35.544936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:34:35.544951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:34:35.544965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:34:35.544979 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Feb 13 15:34:35.544997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:34:35.545011 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:34:35.545023 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:34:35.545037 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:34:35.545049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:34:35.545066 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:34:35.545080 kernel: fuse: init (API version 7.39) Feb 13 15:34:35.545092 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:34:35.545103 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:34:35.545116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:34:35.545132 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:34:35.545164 systemd-journald[1132]: Collecting audit messages is disabled. Feb 13 15:34:35.545186 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:34:35.545199 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:34:35.545213 kernel: loop: module loaded Feb 13 15:34:35.545225 systemd[1]: Stopped verity-setup.service. Feb 13 15:34:35.545237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:35.545249 systemd-journald[1132]: Journal started Feb 13 15:34:35.545271 systemd-journald[1132]: Runtime Journal (/run/log/journal/4fe8ff5a9db549b8b92ddc99900d4b1c) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:34:35.326113 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:34:35.344437 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:34:35.344900 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:34:35.548840 kernel: ACPI: bus type drm_connector registered Feb 13 15:34:35.554808 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:34:35.556773 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:34:35.557997 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:34:35.559255 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:34:35.560402 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:34:35.561643 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:34:35.562878 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:34:35.564114 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:34:35.565577 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:34:35.567323 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:34:35.567531 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:34:35.569312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:34:35.569499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:34:35.571268 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 15:34:35.571473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:34:35.573171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:34:35.573469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:34:35.575418 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:34:35.575731 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:34:35.577750 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:34:35.578175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:34:35.580006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:34:35.581738 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:34:35.583883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:34:35.611778 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:34:35.626933 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:34:35.630277 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:34:35.631746 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:34:35.631806 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:34:35.634653 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:34:35.637536 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:34:35.640841 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:34:35.642092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:34:35.644054 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:34:35.648535 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:34:35.649811 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:34:35.657054 systemd-journald[1132]: Time spent on flushing to /var/log/journal/4fe8ff5a9db549b8b92ddc99900d4b1c is 47.348ms for 1043 entries. Feb 13 15:34:35.657054 systemd-journald[1132]: System Journal (/var/log/journal/4fe8ff5a9db549b8b92ddc99900d4b1c) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:34:35.712447 systemd-journald[1132]: Received client request to flush runtime journal. Feb 13 15:34:35.655047 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:34:35.656202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:34:35.665917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:34:35.711029 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:34:35.718294 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
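Once journald has flushed the runtime journal to /var/log/journal as shown above, entries like these can be read back as structured records. A small sketch, assuming journalctl is available and the journal is readable by the caller:

    # Sketch: read this boot's journald messages back out as JSON records.
    import json
    import subprocess

    def journal_entries(identifier: str):
        out = subprocess.run(
            ["journalctl", "-b", "-o", "json", "--no-pager",
             f"SYSLOG_IDENTIFIER={identifier}"],
            capture_output=True, text=True, check=True,
        )
        for line in out.stdout.splitlines():
            yield json.loads(line)

    if __name__ == "__main__":
        for entry in journal_entries("systemd-journald"):
            print(entry.get("MESSAGE", ""))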
Feb 13 15:34:35.722807 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 15:34:35.724435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:34:35.726732 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:34:35.729318 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:34:35.732224 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:34:35.734091 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:34:35.735692 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:34:35.737515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:34:35.746628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:34:35.744608 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Feb 13 15:34:35.744626 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Feb 13 15:34:35.745159 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:34:35.763031 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:34:35.768001 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:34:35.769904 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:34:35.775034 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:34:35.779722 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:34:35.780132 kernel: loop1: detected capacity change from 0 to 211296 Feb 13 15:34:35.791208 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:34:35.791904 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:34:35.817140 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:34:35.855035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:34:35.856948 kernel: loop2: detected capacity change from 0 to 140992 Feb 13 15:34:35.882588 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Feb 13 15:34:35.882608 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Feb 13 15:34:35.888293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:34:35.898819 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 15:34:35.912812 kernel: loop4: detected capacity change from 0 to 211296 Feb 13 15:34:35.919820 kernel: loop5: detected capacity change from 0 to 140992 Feb 13 15:34:35.929182 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:34:35.929848 (sd-merge)[1203]: Merged extensions into '/usr'. Feb 13 15:34:35.933646 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:34:35.933663 systemd[1]: Reloading... Feb 13 15:34:36.035830 zram_generator::config[1229]: No configuration found. Feb 13 15:34:36.169621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
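The "(sd-merge)" entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr. A sketch for inspecting that state on the running system is below; it assumes systemd-sysext is on PATH and lists the standard extension search directories (the kubernetes.raw link written earlier by Ignition lives under /etc/extensions).

    # Sketch: list available sysext images and show what is currently merged.
    import subprocess
    from pathlib import Path

    def list_extension_images():
        for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
            p = Path(d)
            if p.is_dir():
                for entry in sorted(p.iterdir()):
                    print(f"{d}: {entry.name}")

    if __name__ == "__main__":
        list_extension_images()
        # Shows which hierarchies are merged and from which images.
        subprocess.run(["systemd-sysext", "status"], check=False)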
Feb 13 15:34:36.170755 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:34:36.218839 systemd[1]: Reloading finished in 284 ms. Feb 13 15:34:36.250279 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:34:36.251940 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:34:36.265050 systemd[1]: Starting ensure-sysext.service... Feb 13 15:34:36.267365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:34:36.272291 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:34:36.272308 systemd[1]: Reloading... Feb 13 15:34:36.298937 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:34:36.299341 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:34:36.300373 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:34:36.300803 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Feb 13 15:34:36.300890 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Feb 13 15:34:36.305864 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:34:36.305878 systemd-tmpfiles[1267]: Skipping /boot Feb 13 15:34:36.325568 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:34:36.325674 systemd-tmpfiles[1267]: Skipping /boot Feb 13 15:34:36.344811 zram_generator::config[1294]: No configuration found. Feb 13 15:34:36.506778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:34:36.555984 systemd[1]: Reloading finished in 283 ms. Feb 13 15:34:36.577236 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:34:36.591422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:34:36.599054 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:34:36.601405 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:34:36.603822 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:34:36.608043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:34:36.614019 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:34:36.616992 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:34:36.624895 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:34:36.627357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:36.627538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:34:36.632006 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:34:36.635234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:34:36.638004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:34:36.639228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:34:36.639324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:36.644518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:36.644743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:34:36.644967 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:34:36.645109 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:36.647440 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:34:36.649939 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:34:36.650141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:34:36.655008 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Feb 13 15:34:36.661269 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:34:36.665267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:34:36.665456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:34:36.667462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:34:36.667652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:34:36.672680 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:36.673054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:34:36.674722 augenrules[1366]: No rules Feb 13 15:34:36.681069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:34:36.684777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:34:36.686096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:34:36.686208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:34:36.687463 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:34:36.689471 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:34:36.690251 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:34:36.691763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:34:36.694265 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:34:36.694487 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:34:36.696164 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 15:34:36.696392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:34:36.698259 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:34:36.698414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:34:36.705473 systemd[1]: Finished ensure-sysext.service. Feb 13 15:34:36.706853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:34:36.727055 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:34:36.728258 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:34:36.740924 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:34:36.742074 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:34:36.742452 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:34:36.750256 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:34:36.811820 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Feb 13 15:34:36.816093 systemd-resolved[1335]: Positive Trust Anchors: Feb 13 15:34:36.816113 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:34:36.816146 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:34:36.821408 systemd-resolved[1335]: Defaulting to hostname 'linux'. Feb 13 15:34:36.823360 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:34:36.824844 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:34:36.848270 systemd-networkd[1404]: lo: Link UP Feb 13 15:34:36.848284 systemd-networkd[1404]: lo: Gained carrier Feb 13 15:34:36.849722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:34:36.849924 systemd-networkd[1404]: Enumeration completed Feb 13 15:34:36.850439 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:36.850443 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:34:36.852652 systemd-networkd[1404]: eth0: Link UP Feb 13 15:34:36.852661 systemd-networkd[1404]: eth0: Gained carrier Feb 13 15:34:36.852674 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:36.852957 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:34:36.854116 systemd[1]: Reached target network.target - Network. 
Feb 13 15:34:36.861032 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:34:36.861974 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:34:36.865930 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:34:36.867374 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:34:36.868716 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:34:36.869920 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:34:36.870059 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:34:36.871530 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Feb 13 15:34:38.494631 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:34:38.494759 systemd-timesyncd[1405]: Initial clock synchronization to Thu 2025-02-13 15:34:38.494509 UTC. Feb 13 15:34:38.495269 systemd-resolved[1335]: Clock change detected. Flushing caches. Feb 13 15:34:38.497497 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:34:38.499797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:34:38.518509 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:34:38.598309 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 15:34:38.598653 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:34:38.598856 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:34:38.599219 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:34:38.669039 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:34:38.667076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:38.688948 kernel: kvm_amd: TSC scaling supported Feb 13 15:34:38.689076 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:34:38.689138 kernel: kvm_amd: Nested Paging enabled Feb 13 15:34:38.689168 kernel: kvm_amd: LBR virtualization supported Feb 13 15:34:38.689210 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:34:38.689234 kernel: kvm_amd: Virtual GIF supported Feb 13 15:34:38.681537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:34:38.681866 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:38.701910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:34:38.709483 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:34:38.741980 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:34:38.746646 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:34:38.751635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:34:38.757136 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:34:38.794662 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:34:38.796131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
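The timesyncd entries above show the clock being set after contacting 10.0.0.1:123. As a rough illustration of that exchange, a bare SNTP client-mode query against the same server is sketched below; it is not a reimplementation of timesyncd's polling or filtering, just a single request/response.

    # Sketch: minimal SNTP query (client mode) against the server in the log.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = "10.0.0.1", port: int = 123, timeout: float = 2.0) -> float:
        # 48-byte request: LI=0, VN=4, Mode=3 (client) packed into the first byte.
        packet = bytearray(48)
        packet[0] = (4 << 3) | 3
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(48)
        # Transmit timestamp seconds field sits at bytes 40..43 of the reply.
        transmit_secs = struct.unpack("!I", data[40:44])[0]
        return transmit_secs - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        t = sntp_time()
        print("server time:", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(t)))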
Feb 13 15:34:38.797247 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:34:38.798407 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:34:38.799656 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:34:38.801078 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:34:38.802270 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:34:38.803522 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:34:38.804753 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:34:38.804781 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:34:38.805676 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:34:38.807280 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:34:38.810023 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:34:38.819976 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:34:38.822274 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:34:38.823807 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:34:38.828568 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:34:38.829632 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:34:38.830676 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:34:38.831708 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:34:38.831737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:34:38.833016 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:34:38.835105 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:34:38.839533 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:34:38.839647 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:34:38.842626 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:34:38.843807 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:34:38.846182 jq[1443]: false Feb 13 15:34:38.846586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:34:38.849593 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:34:38.855255 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:34:38.859187 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:34:38.863763 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:34:38.866014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:34:38.866435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 15:34:38.867533 extend-filesystems[1444]: Found loop3 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found loop4 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found loop5 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found sr0 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda1 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda2 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda3 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found usr Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda4 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda6 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda7 Feb 13 15:34:38.867533 extend-filesystems[1444]: Found vda9 Feb 13 15:34:38.867533 extend-filesystems[1444]: Checking size of /dev/vda9 Feb 13 15:34:38.869428 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:34:38.879355 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:34:38.884930 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:34:38.889909 dbus-daemon[1442]: [system] SELinux support is enabled Feb 13 15:34:38.893809 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:34:38.897913 update_engine[1457]: I20250213 15:34:38.895689 1457 main.cc:92] Flatcar Update Engine starting Feb 13 15:34:38.897913 update_engine[1457]: I20250213 15:34:38.896952 1457 update_check_scheduler.cc:74] Next update check in 5m20s Feb 13 15:34:38.897415 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:34:38.897714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:34:38.898041 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:34:38.898251 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:34:38.899905 jq[1458]: true Feb 13 15:34:38.901321 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:34:38.901503 extend-filesystems[1444]: Resized partition /dev/vda9 Feb 13 15:34:38.901575 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:34:38.910475 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:34:38.911542 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:34:38.916279 jq[1468]: true Feb 13 15:34:38.922478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Feb 13 15:34:38.930898 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:34:38.930943 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:34:38.950422 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:34:38.950443 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:34:38.953875 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 15:34:38.957466 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:34:38.966671 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:34:38.978889 tar[1466]: linux-amd64/helm Feb 13 15:34:38.993676 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:34:38.993701 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:34:38.995873 systemd-logind[1455]: New seat seat0. Feb 13 15:34:39.018085 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:34:39.027268 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:34:39.027822 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:34:39.028482 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:34:39.054369 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:34:39.054369 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:34:39.054369 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:34:39.058418 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Feb 13 15:34:39.060722 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:34:39.056493 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:34:39.056738 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:34:39.061489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:34:39.063131 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:34:39.074674 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:34:39.081676 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:40696.service - OpenSSH per-connection server daemon (10.0.0.1:40696). Feb 13 15:34:39.083468 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:34:39.084271 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:34:39.085083 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:34:39.119895 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:34:39.137181 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:34:39.144752 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:34:39.149733 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:34:39.151035 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:34:39.175591 sshd[1513]: Accepted publickey for core from 10.0.0.1 port 40696 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:39.177137 sshd-session[1513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:39.185869 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:34:39.246706 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:34:39.251397 systemd-logind[1455]: New session 1 of user core. Feb 13 15:34:39.265503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:34:39.276068 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:34:39.279963 (systemd)[1529]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:34:39.291056 containerd[1469]: time="2025-02-13T15:34:39.290978883Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:34:39.332304 containerd[1469]: time="2025-02-13T15:34:39.332235614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.339530 containerd[1469]: time="2025-02-13T15:34:39.339335159Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:39.339530 containerd[1469]: time="2025-02-13T15:34:39.339387046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:34:39.339530 containerd[1469]: time="2025-02-13T15:34:39.339408837Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:34:39.339673 containerd[1469]: time="2025-02-13T15:34:39.339650791Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:34:39.339673 containerd[1469]: time="2025-02-13T15:34:39.339668734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.339780 containerd[1469]: time="2025-02-13T15:34:39.339757821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:39.339780 containerd[1469]: time="2025-02-13T15:34:39.339777258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340030 containerd[1469]: time="2025-02-13T15:34:39.340003251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340030 containerd[1469]: time="2025-02-13T15:34:39.340022918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340083 containerd[1469]: time="2025-02-13T15:34:39.340036875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340083 containerd[1469]: time="2025-02-13T15:34:39.340047725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340247 containerd[1469]: time="2025-02-13T15:34:39.340152401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340530 containerd[1469]: time="2025-02-13T15:34:39.340431635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340592 containerd[1469]: time="2025-02-13T15:34:39.340573130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:34:39.340592 containerd[1469]: time="2025-02-13T15:34:39.340589781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:34:39.340740 containerd[1469]: time="2025-02-13T15:34:39.340712371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:34:39.340796 containerd[1469]: time="2025-02-13T15:34:39.340780279Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:34:39.351073 containerd[1469]: time="2025-02-13T15:34:39.351029981Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:34:39.351111 containerd[1469]: time="2025-02-13T15:34:39.351078812Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:34:39.351111 containerd[1469]: time="2025-02-13T15:34:39.351095754Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:34:39.351167 containerd[1469]: time="2025-02-13T15:34:39.351126592Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:34:39.351167 containerd[1469]: time="2025-02-13T15:34:39.351144175Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:34:39.351316 containerd[1469]: time="2025-02-13T15:34:39.351296751Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:34:39.351597 containerd[1469]: time="2025-02-13T15:34:39.351551989Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:34:39.351726 containerd[1469]: time="2025-02-13T15:34:39.351679759Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:34:39.351726 containerd[1469]: time="2025-02-13T15:34:39.351699987Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:34:39.351726 containerd[1469]: time="2025-02-13T15:34:39.351712831Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:34:39.351726 containerd[1469]: time="2025-02-13T15:34:39.351724984Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351737367Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351749540Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351762294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351775619Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351787140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351797881Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351822 containerd[1469]: time="2025-02-13T15:34:39.351807899Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351827817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351841322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351853425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351865107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351887308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351900303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351911614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351922925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.351940 containerd[1469]: time="2025-02-13T15:34:39.351935228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.351948553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.351971887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.351984741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.351996894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.352010259Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.352028754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.352041027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.352051897Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:34:39.352100 containerd[1469]: time="2025-02-13T15:34:39.352107141Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352123611Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352133540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352146705Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352155521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352186700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352197911Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:34:39.352291 containerd[1469]: time="2025-02-13T15:34:39.352208781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:34:39.352599 containerd[1469]: time="2025-02-13T15:34:39.352554910Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:34:39.352599 containerd[1469]: time="2025-02-13T15:34:39.352601047Z" level=info msg="Connect containerd service" Feb 13 15:34:39.352769 containerd[1469]: time="2025-02-13T15:34:39.352635491Z" level=info msg="using legacy CRI server" Feb 13 15:34:39.352769 containerd[1469]: time="2025-02-13T15:34:39.352642515Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:34:39.352817 containerd[1469]: time="2025-02-13T15:34:39.352790222Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:34:39.354216 containerd[1469]: time="2025-02-13T15:34:39.354181180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:34:39.355798 containerd[1469]: time="2025-02-13T15:34:39.355663280Z" level=info msg="Start subscribing containerd event" Feb 13 15:34:39.355855 containerd[1469]: time="2025-02-13T15:34:39.355807139Z" level=info msg="Start recovering state" Feb 13 15:34:39.356000 containerd[1469]: time="2025-02-13T15:34:39.355929208Z" level=info msg="Start event monitor" Feb 13 15:34:39.356000 containerd[1469]: time="2025-02-13T15:34:39.355960387Z" level=info msg="Start snapshots syncer" Feb 13 15:34:39.356000 containerd[1469]: time="2025-02-13T15:34:39.355975165Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:34:39.356000 containerd[1469]: time="2025-02-13T15:34:39.355984492Z" level=info msg="Start streaming server" Feb 13 15:34:39.357149 containerd[1469]: time="2025-02-13T15:34:39.357030293Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:34:39.357276 containerd[1469]: time="2025-02-13T15:34:39.357259443Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:34:39.357487 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:34:39.357676 containerd[1469]: time="2025-02-13T15:34:39.357496047Z" level=info msg="containerd successfully booted in 0.068055s" Feb 13 15:34:39.445963 systemd[1529]: Queued start job for default target default.target. Feb 13 15:34:39.460697 systemd[1529]: Created slice app.slice - User Application Slice. Feb 13 15:34:39.460722 systemd[1529]: Reached target paths.target - Paths. Feb 13 15:34:39.460735 systemd[1529]: Reached target timers.target - Timers. Feb 13 15:34:39.462404 systemd[1529]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:34:39.476993 systemd[1529]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Feb 13 15:34:39.477109 systemd[1529]: Reached target sockets.target - Sockets. Feb 13 15:34:39.477129 systemd[1529]: Reached target basic.target - Basic System. Feb 13 15:34:39.477182 systemd[1529]: Reached target default.target - Main User Target. Feb 13 15:34:39.477215 systemd[1529]: Startup finished in 190ms. Feb 13 15:34:39.477301 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:34:39.479793 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:34:39.533701 tar[1466]: linux-amd64/LICENSE Feb 13 15:34:39.534329 tar[1466]: linux-amd64/README.md Feb 13 15:34:39.543463 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:40706.service - OpenSSH per-connection server daemon (10.0.0.1:40706). Feb 13 15:34:39.556214 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:34:39.588551 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 40706 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:39.590010 sshd-session[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:39.594473 systemd-logind[1455]: New session 2 of user core. Feb 13 15:34:39.604576 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:34:39.639644 systemd-networkd[1404]: eth0: Gained IPv6LL Feb 13 15:34:39.643360 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:34:39.645162 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:34:39.657667 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:34:39.660604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:39.662949 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:34:39.668612 sshd[1549]: Connection closed by 10.0.0.1 port 40706 Feb 13 15:34:39.669135 sshd-session[1546]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:39.673916 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:40712.service - OpenSSH per-connection server daemon (10.0.0.1:40712). Feb 13 15:34:39.678430 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:40706.service: Deactivated successfully. Feb 13 15:34:39.682149 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:34:39.684837 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:34:39.685995 systemd-logind[1455]: Removed session 2. Feb 13 15:34:39.690117 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:34:39.690392 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:34:39.692594 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:34:39.694989 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:34:39.719058 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 40712 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:39.720901 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:39.725050 systemd-logind[1455]: New session 3 of user core. Feb 13 15:34:39.735571 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 15:34:39.790948 sshd[1573]: Connection closed by 10.0.0.1 port 40712 Feb 13 15:34:39.791954 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:39.796438 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:40712.service: Deactivated successfully. Feb 13 15:34:39.798266 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:34:39.798866 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:34:39.799722 systemd-logind[1455]: Removed session 3. Feb 13 15:34:40.687998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:34:40.689851 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:34:40.694498 systemd[1]: Startup finished in 750ms (kernel) + 6.112s (initrd) + 4.297s (userspace) = 11.160s. Feb 13 15:34:40.698613 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:34:41.167365 kubelet[1582]: E0213 15:34:41.167204 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:34:41.171835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:34:41.172049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:34:41.172496 systemd[1]: kubelet.service: Consumed 1.393s CPU time. Feb 13 15:34:49.802727 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:49086.service - OpenSSH per-connection server daemon (10.0.0.1:49086). Feb 13 15:34:49.845196 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 49086 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:49.846632 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:49.850494 systemd-logind[1455]: New session 4 of user core. Feb 13 15:34:49.859570 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:34:49.912691 sshd[1599]: Connection closed by 10.0.0.1 port 49086 Feb 13 15:34:49.913054 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:49.920188 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:49086.service: Deactivated successfully. Feb 13 15:34:49.921939 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:34:49.923396 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:34:49.924664 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:49102.service - OpenSSH per-connection server daemon (10.0.0.1:49102). Feb 13 15:34:49.925622 systemd-logind[1455]: Removed session 4. Feb 13 15:34:49.966267 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 49102 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:49.967796 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:49.972118 systemd-logind[1455]: New session 5 of user core. Feb 13 15:34:49.981640 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 15:34:50.030934 sshd[1606]: Connection closed by 10.0.0.1 port 49102 Feb 13 15:34:50.031293 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:50.044327 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:49102.service: Deactivated successfully. Feb 13 15:34:50.046041 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:34:50.047699 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:34:50.057803 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:49114.service - OpenSSH per-connection server daemon (10.0.0.1:49114). Feb 13 15:34:50.058882 systemd-logind[1455]: Removed session 5. Feb 13 15:34:50.095186 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 49114 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:50.096798 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:50.101160 systemd-logind[1455]: New session 6 of user core. Feb 13 15:34:50.110580 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:34:50.165721 sshd[1613]: Connection closed by 10.0.0.1 port 49114 Feb 13 15:34:50.166104 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:50.182068 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:49114.service: Deactivated successfully. Feb 13 15:34:50.184566 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:34:50.186493 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:34:50.195891 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:49128.service - OpenSSH per-connection server daemon (10.0.0.1:49128). Feb 13 15:34:50.197019 systemd-logind[1455]: Removed session 6. Feb 13 15:34:50.235132 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 49128 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:50.236820 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:50.240782 systemd-logind[1455]: New session 7 of user core. Feb 13 15:34:50.255610 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:34:50.314586 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:34:50.314968 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:34:50.339228 sudo[1621]: pam_unix(sudo:session): session closed for user root Feb 13 15:34:50.341152 sshd[1620]: Connection closed by 10.0.0.1 port 49128 Feb 13 15:34:50.341693 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:50.356681 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:49128.service: Deactivated successfully. Feb 13 15:34:50.358523 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:34:50.360334 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:34:50.374060 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:49130.service - OpenSSH per-connection server daemon (10.0.0.1:49130). Feb 13 15:34:50.375316 systemd-logind[1455]: Removed session 7. Feb 13 15:34:50.412259 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 49130 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:50.414001 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:50.417978 systemd-logind[1455]: New session 8 of user core. 
Feb 13 15:34:50.427626 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:34:50.481056 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:34:50.481390 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:34:50.485146 sudo[1630]: pam_unix(sudo:session): session closed for user root Feb 13 15:34:50.491370 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:34:50.491724 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:34:50.506719 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:34:50.537206 augenrules[1652]: No rules Feb 13 15:34:50.539023 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:34:50.539251 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:34:50.540671 sudo[1629]: pam_unix(sudo:session): session closed for user root Feb 13 15:34:50.542128 sshd[1628]: Connection closed by 10.0.0.1 port 49130 Feb 13 15:34:50.542707 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Feb 13 15:34:50.555308 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:49130.service: Deactivated successfully. Feb 13 15:34:50.557085 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:34:50.558679 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:34:50.568765 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:49146.service - OpenSSH per-connection server daemon (10.0.0.1:49146). Feb 13 15:34:50.569869 systemd-logind[1455]: Removed session 8. Feb 13 15:34:50.605912 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 49146 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:34:50.607505 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:34:50.612028 systemd-logind[1455]: New session 9 of user core. Feb 13 15:34:50.621604 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:34:50.674633 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:34:50.674984 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:34:50.954656 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:34:50.954817 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:34:51.206194 dockerd[1683]: time="2025-02-13T15:34:51.206041339Z" level=info msg="Starting up" Feb 13 15:34:51.212141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:34:51.219618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:34:51.482611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:34:51.487000 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:34:51.545058 kubelet[1715]: E0213 15:34:51.544993 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:34:51.549595 dockerd[1683]: time="2025-02-13T15:34:51.549547916Z" level=info msg="Loading containers: start." Feb 13 15:34:51.552816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:34:51.553035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:34:51.715485 kernel: Initializing XFRM netlink socket Feb 13 15:34:51.793207 systemd-networkd[1404]: docker0: Link UP Feb 13 15:34:51.836783 dockerd[1683]: time="2025-02-13T15:34:51.836739933Z" level=info msg="Loading containers: done." Feb 13 15:34:51.851509 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3797684670-merged.mount: Deactivated successfully. Feb 13 15:34:51.857735 dockerd[1683]: time="2025-02-13T15:34:51.857680154Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:34:51.857852 dockerd[1683]: time="2025-02-13T15:34:51.857794218Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:34:51.857960 dockerd[1683]: time="2025-02-13T15:34:51.857936274Z" level=info msg="Daemon has completed initialization" Feb 13 15:34:51.895816 dockerd[1683]: time="2025-02-13T15:34:51.895748355Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:34:51.895956 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:34:52.586284 containerd[1469]: time="2025-02-13T15:34:52.586240384Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:34:53.259576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116399247.mount: Deactivated successfully. 
Feb 13 15:34:54.592166 containerd[1469]: time="2025-02-13T15:34:54.592099857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:54.592893 containerd[1469]: time="2025-02-13T15:34:54.592834975Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:34:54.594158 containerd[1469]: time="2025-02-13T15:34:54.594122270Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:54.596928 containerd[1469]: time="2025-02-13T15:34:54.596861787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:54.598051 containerd[1469]: time="2025-02-13T15:34:54.598008147Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.011720615s" Feb 13 15:34:54.598051 containerd[1469]: time="2025-02-13T15:34:54.598049354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:34:54.632457 containerd[1469]: time="2025-02-13T15:34:54.632404292Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:34:56.489269 containerd[1469]: time="2025-02-13T15:34:56.489203143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:56.490123 containerd[1469]: time="2025-02-13T15:34:56.490043439Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164" Feb 13 15:34:56.491459 containerd[1469]: time="2025-02-13T15:34:56.491420661Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:56.494288 containerd[1469]: time="2025-02-13T15:34:56.494238596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:56.495239 containerd[1469]: time="2025-02-13T15:34:56.495209427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 1.862768005s" Feb 13 15:34:56.495284 containerd[1469]: time="2025-02-13T15:34:56.495239383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"" Feb 13 
15:34:56.519672 containerd[1469]: time="2025-02-13T15:34:56.519637077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:34:57.428354 containerd[1469]: time="2025-02-13T15:34:57.428285177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:57.429180 containerd[1469]: time="2025-02-13T15:34:57.429099824Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056" Feb 13 15:34:57.430299 containerd[1469]: time="2025-02-13T15:34:57.430267574Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:57.433160 containerd[1469]: time="2025-02-13T15:34:57.433115425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:57.434102 containerd[1469]: time="2025-02-13T15:34:57.434070346Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 914.404525ms" Feb 13 15:34:57.434139 containerd[1469]: time="2025-02-13T15:34:57.434101995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"" Feb 13 15:34:57.456540 containerd[1469]: time="2025-02-13T15:34:57.456508906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:34:58.522888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781460907.mount: Deactivated successfully. 
Feb 13 15:34:59.445761 containerd[1469]: time="2025-02-13T15:34:59.445696854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:59.464397 containerd[1469]: time="2025-02-13T15:34:59.464328885Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592" Feb 13 15:34:59.479616 containerd[1469]: time="2025-02-13T15:34:59.479561472Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:59.481819 containerd[1469]: time="2025-02-13T15:34:59.481768661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:34:59.482474 containerd[1469]: time="2025-02-13T15:34:59.482418470Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 2.025878024s" Feb 13 15:34:59.482474 containerd[1469]: time="2025-02-13T15:34:59.482461541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"" Feb 13 15:34:59.578771 containerd[1469]: time="2025-02-13T15:34:59.578720390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:35:00.147618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820713976.mount: Deactivated successfully. 
Feb 13 15:35:00.885827 containerd[1469]: time="2025-02-13T15:35:00.885775115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:00.886774 containerd[1469]: time="2025-02-13T15:35:00.886725637Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:35:00.888089 containerd[1469]: time="2025-02-13T15:35:00.888044651Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:00.891801 containerd[1469]: time="2025-02-13T15:35:00.891761050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:00.892947 containerd[1469]: time="2025-02-13T15:35:00.892879498Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.314111538s" Feb 13 15:35:00.892947 containerd[1469]: time="2025-02-13T15:35:00.892941885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:35:00.914461 containerd[1469]: time="2025-02-13T15:35:00.914421006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:35:01.427941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024718034.mount: Deactivated successfully. 
Feb 13 15:35:01.434127 containerd[1469]: time="2025-02-13T15:35:01.434090331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:01.434874 containerd[1469]: time="2025-02-13T15:35:01.434837182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:35:01.436107 containerd[1469]: time="2025-02-13T15:35:01.436068391Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:01.438295 containerd[1469]: time="2025-02-13T15:35:01.438265171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:01.438916 containerd[1469]: time="2025-02-13T15:35:01.438885925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 524.410256ms" Feb 13 15:35:01.438959 containerd[1469]: time="2025-02-13T15:35:01.438915610Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:35:01.460109 containerd[1469]: time="2025-02-13T15:35:01.460057890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:35:01.782256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:35:01.791610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:01.960878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:01.965517 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:35:02.126179 kubelet[2071]: E0213 15:35:02.125978 2071 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:35:02.130791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:35:02.131148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:35:02.266853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889888425.mount: Deactivated successfully. 
Feb 13 15:35:04.712959 containerd[1469]: time="2025-02-13T15:35:04.712884741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:04.713591 containerd[1469]: time="2025-02-13T15:35:04.713543006Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Feb 13 15:35:04.714850 containerd[1469]: time="2025-02-13T15:35:04.714813839Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:04.717558 containerd[1469]: time="2025-02-13T15:35:04.717519393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:04.718773 containerd[1469]: time="2025-02-13T15:35:04.718743038Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.258645924s" Feb 13 15:35:04.718813 containerd[1469]: time="2025-02-13T15:35:04.718777021Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Feb 13 15:35:06.787696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:06.803780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:06.819069 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-9.scope)... Feb 13 15:35:06.819084 systemd[1]: Reloading... Feb 13 15:35:06.898480 zram_generator::config[2254]: No configuration found. Feb 13 15:35:07.087724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:07.163091 systemd[1]: Reloading finished in 343 ms. Feb 13 15:35:07.227654 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:35:07.227769 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:35:07.228071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:07.230787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:07.374284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:07.378682 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:35:07.438263 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:35:07.438263 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 15:35:07.438263 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:35:07.438687 kubelet[2299]: I0213 15:35:07.438299 2299 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:35:07.731753 kubelet[2299]: I0213 15:35:07.731621 2299 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:35:07.731753 kubelet[2299]: I0213 15:35:07.731651 2299 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:35:07.731899 kubelet[2299]: I0213 15:35:07.731835 2299 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:35:07.830590 kubelet[2299]: E0213 15:35:07.830552 2299 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.833428 kubelet[2299]: I0213 15:35:07.833413 2299 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:35:07.848347 kubelet[2299]: I0213 15:35:07.848295 2299 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:35:07.848659 kubelet[2299]: I0213 15:35:07.848629 2299 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:35:07.848859 kubelet[2299]: I0213 15:35:07.848828 2299 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:35:07.849001 kubelet[2299]: I0213 15:35:07.848862 2299 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:35:07.849001 kubelet[2299]: I0213 15:35:07.848876 2299 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 
15:35:07.849077 kubelet[2299]: I0213 15:35:07.849014 2299 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:35:07.849173 kubelet[2299]: I0213 15:35:07.849145 2299 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:35:07.849173 kubelet[2299]: I0213 15:35:07.849167 2299 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:35:07.849238 kubelet[2299]: I0213 15:35:07.849204 2299 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:35:07.849238 kubelet[2299]: I0213 15:35:07.849226 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:35:07.849846 kubelet[2299]: W0213 15:35:07.849584 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.849846 kubelet[2299]: E0213 15:35:07.849632 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.850142 kubelet[2299]: W0213 15:35:07.850083 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.850142 kubelet[2299]: E0213 15:35:07.850117 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.850553 kubelet[2299]: I0213 15:35:07.850529 2299 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:35:07.852854 kubelet[2299]: I0213 15:35:07.852832 2299 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:35:07.852923 kubelet[2299]: W0213 15:35:07.852903 2299 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:35:07.853531 kubelet[2299]: I0213 15:35:07.853502 2299 server.go:1256] "Started kubelet" Feb 13 15:35:07.853578 kubelet[2299]: I0213 15:35:07.853561 2299 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:35:07.854579 kubelet[2299]: I0213 15:35:07.854552 2299 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:35:07.855224 kubelet[2299]: I0213 15:35:07.854986 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:35:07.856508 kubelet[2299]: I0213 15:35:07.856484 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:35:07.857639 kubelet[2299]: I0213 15:35:07.856676 2299 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:35:07.859264 kubelet[2299]: E0213 15:35:07.858248 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:07.859264 kubelet[2299]: I0213 15:35:07.858291 2299 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:35:07.859264 kubelet[2299]: I0213 15:35:07.858373 2299 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:35:07.859264 kubelet[2299]: I0213 15:35:07.858422 2299 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:35:07.859264 kubelet[2299]: W0213 15:35:07.858796 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.859264 kubelet[2299]: E0213 15:35:07.858843 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.859781 kubelet[2299]: E0213 15:35:07.859763 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms" Feb 13 15:35:07.860355 kubelet[2299]: I0213 15:35:07.860029 2299 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:35:07.860355 kubelet[2299]: I0213 15:35:07.860118 2299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:35:07.860909 kubelet[2299]: E0213 15:35:07.860891 2299 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:35:07.861305 kubelet[2299]: E0213 15:35:07.861275 2299 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce7ac948ce94 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:35:07.853467284 +0000 UTC m=+0.470382336,LastTimestamp:2025-02-13 15:35:07.853467284 +0000 UTC m=+0.470382336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:35:07.861305 kubelet[2299]: I0213 15:35:07.861293 2299 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:35:07.880179 kubelet[2299]: I0213 15:35:07.879959 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:35:07.881378 kubelet[2299]: I0213 15:35:07.881340 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:35:07.881500 kubelet[2299]: I0213 15:35:07.881382 2299 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:35:07.881500 kubelet[2299]: I0213 15:35:07.881404 2299 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:35:07.881598 kubelet[2299]: E0213 15:35:07.881568 2299 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:35:07.883724 kubelet[2299]: W0213 15:35:07.883681 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.884497 kubelet[2299]: E0213 15:35:07.883727 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:07.884497 kubelet[2299]: I0213 15:35:07.884282 2299 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:35:07.884497 kubelet[2299]: I0213 15:35:07.884298 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:35:07.884497 kubelet[2299]: I0213 15:35:07.884317 2299 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:35:07.887786 kubelet[2299]: I0213 15:35:07.887762 2299 policy_none.go:49] "None policy: Start" Feb 13 15:35:07.888277 kubelet[2299]: I0213 15:35:07.888249 2299 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:35:07.888277 kubelet[2299]: I0213 15:35:07.888274 2299 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:35:07.895294 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:35:07.907680 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 15:35:07.910922 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:35:07.931554 kubelet[2299]: I0213 15:35:07.931509 2299 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:35:07.931925 kubelet[2299]: I0213 15:35:07.931902 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:35:07.933277 kubelet[2299]: E0213 15:35:07.933243 2299 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:35:07.960297 kubelet[2299]: I0213 15:35:07.960258 2299 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:07.960714 kubelet[2299]: E0213 15:35:07.960684 2299 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Feb 13 15:35:07.981972 kubelet[2299]: I0213 15:35:07.981857 2299 topology_manager.go:215] "Topology Admit Handler" podUID="17b334ee845a56a8a2315edf9b7658b2" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:35:07.983092 kubelet[2299]: I0213 15:35:07.983048 2299 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:35:07.984257 kubelet[2299]: I0213 15:35:07.984150 2299 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:35:07.990054 systemd[1]: Created slice kubepods-burstable-pod17b334ee845a56a8a2315edf9b7658b2.slice - libcontainer container kubepods-burstable-pod17b334ee845a56a8a2315edf9b7658b2.slice. Feb 13 15:35:08.005162 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice. Feb 13 15:35:08.009251 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice. 
Feb 13 15:35:08.060104 kubelet[2299]: I0213 15:35:08.060058 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17b334ee845a56a8a2315edf9b7658b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"17b334ee845a56a8a2315edf9b7658b2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:08.060104 kubelet[2299]: I0213 15:35:08.060100 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17b334ee845a56a8a2315edf9b7658b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"17b334ee845a56a8a2315edf9b7658b2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:08.060104 kubelet[2299]: I0213 15:35:08.060125 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17b334ee845a56a8a2315edf9b7658b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"17b334ee845a56a8a2315edf9b7658b2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:08.060327 kubelet[2299]: I0213 15:35:08.060144 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:08.060327 kubelet[2299]: I0213 15:35:08.060165 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:08.060327 kubelet[2299]: I0213 15:35:08.060185 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:35:08.060327 kubelet[2299]: I0213 15:35:08.060258 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:08.060327 kubelet[2299]: I0213 15:35:08.060308 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:08.060435 kubelet[2299]: I0213 15:35:08.060329 2299 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:08.060706 kubelet[2299]: E0213 15:35:08.060685 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms" Feb 13 15:35:08.162042 kubelet[2299]: I0213 15:35:08.162024 2299 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:08.162411 kubelet[2299]: E0213 15:35:08.162387 2299 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Feb 13 15:35:08.303185 kubelet[2299]: E0213 15:35:08.303145 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:08.303829 containerd[1469]: time="2025-02-13T15:35:08.303789720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:17b334ee845a56a8a2315edf9b7658b2,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:08.308108 kubelet[2299]: E0213 15:35:08.308082 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:08.308631 containerd[1469]: time="2025-02-13T15:35:08.308586796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:08.311873 kubelet[2299]: E0213 15:35:08.311830 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:08.312243 containerd[1469]: time="2025-02-13T15:35:08.312211544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:08.461532 kubelet[2299]: E0213 15:35:08.461507 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Feb 13 15:35:08.564052 kubelet[2299]: I0213 15:35:08.563904 2299 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:08.564275 kubelet[2299]: E0213 15:35:08.564237 2299 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Feb 13 15:35:08.681178 kubelet[2299]: W0213 15:35:08.681118 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:08.681178 kubelet[2299]: E0213 15:35:08.681176 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: 
connection refused Feb 13 15:35:08.712571 kubelet[2299]: W0213 15:35:08.712531 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:08.712571 kubelet[2299]: E0213 15:35:08.712568 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:08.905555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161628741.mount: Deactivated successfully. Feb 13 15:35:08.912597 containerd[1469]: time="2025-02-13T15:35:08.912555022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:08.915342 containerd[1469]: time="2025-02-13T15:35:08.915271757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:35:08.916245 containerd[1469]: time="2025-02-13T15:35:08.916201260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:08.917979 containerd[1469]: time="2025-02-13T15:35:08.917947715Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:08.918786 containerd[1469]: time="2025-02-13T15:35:08.918718541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:35:08.919651 containerd[1469]: time="2025-02-13T15:35:08.919615944Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:08.920500 containerd[1469]: time="2025-02-13T15:35:08.920470687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:35:08.921499 containerd[1469]: time="2025-02-13T15:35:08.921444373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:35:08.922148 containerd[1469]: time="2025-02-13T15:35:08.922118417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.215254ms" Feb 13 15:35:08.925206 containerd[1469]: time="2025-02-13T15:35:08.925171813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
616.476624ms" Feb 13 15:35:08.925889 containerd[1469]: time="2025-02-13T15:35:08.925856537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.578369ms" Feb 13 15:35:09.069072 kubelet[2299]: W0213 15:35:09.068963 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:09.069072 kubelet[2299]: E0213 15:35:09.069076 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:09.150232 containerd[1469]: time="2025-02-13T15:35:09.150029151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:09.150232 containerd[1469]: time="2025-02-13T15:35:09.150082150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:09.150232 containerd[1469]: time="2025-02-13T15:35:09.150095706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:09.151470 containerd[1469]: time="2025-02-13T15:35:09.150270784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:09.151885 containerd[1469]: time="2025-02-13T15:35:09.149701456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:09.151885 containerd[1469]: time="2025-02-13T15:35:09.151838253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:09.151967 containerd[1469]: time="2025-02-13T15:35:09.151860916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:09.152389 containerd[1469]: time="2025-02-13T15:35:09.152088082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:09.152771 containerd[1469]: time="2025-02-13T15:35:09.152583351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:09.152904 containerd[1469]: time="2025-02-13T15:35:09.152866852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:09.153129 containerd[1469]: time="2025-02-13T15:35:09.152890988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:09.165575 containerd[1469]: time="2025-02-13T15:35:09.163701261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:09.190646 systemd[1]: Started cri-containerd-860a06cc4fc5e1e2d86ae8dc2b1b66faae62d4eb799ba732c9367bf2537fd6ed.scope - libcontainer container 860a06cc4fc5e1e2d86ae8dc2b1b66faae62d4eb799ba732c9367bf2537fd6ed. Feb 13 15:35:09.195226 systemd[1]: Started cri-containerd-49a8c3cb212f27e4605cb88c8ae9408ef6be4cc4b9dd2dac7c39ceb0220f1474.scope - libcontainer container 49a8c3cb212f27e4605cb88c8ae9408ef6be4cc4b9dd2dac7c39ceb0220f1474. Feb 13 15:35:09.197005 systemd[1]: Started cri-containerd-8a38946d235042b04f68e9fccdf7712c4b223378dfe32e90f36b3efe09aa4cd7.scope - libcontainer container 8a38946d235042b04f68e9fccdf7712c4b223378dfe32e90f36b3efe09aa4cd7. Feb 13 15:35:09.240384 containerd[1469]: time="2025-02-13T15:35:09.240295252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"860a06cc4fc5e1e2d86ae8dc2b1b66faae62d4eb799ba732c9367bf2537fd6ed\"" Feb 13 15:35:09.240831 containerd[1469]: time="2025-02-13T15:35:09.240545571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:17b334ee845a56a8a2315edf9b7658b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a38946d235042b04f68e9fccdf7712c4b223378dfe32e90f36b3efe09aa4cd7\"" Feb 13 15:35:09.241874 kubelet[2299]: E0213 15:35:09.241689 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:09.241874 kubelet[2299]: E0213 15:35:09.241696 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:09.245221 containerd[1469]: time="2025-02-13T15:35:09.245181816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"49a8c3cb212f27e4605cb88c8ae9408ef6be4cc4b9dd2dac7c39ceb0220f1474\"" Feb 13 15:35:09.245365 containerd[1469]: time="2025-02-13T15:35:09.245269370Z" level=info msg="CreateContainer within sandbox \"860a06cc4fc5e1e2d86ae8dc2b1b66faae62d4eb799ba732c9367bf2537fd6ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:35:09.245878 containerd[1469]: time="2025-02-13T15:35:09.245729864Z" level=info msg="CreateContainer within sandbox \"8a38946d235042b04f68e9fccdf7712c4b223378dfe32e90f36b3efe09aa4cd7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:35:09.245929 kubelet[2299]: E0213 15:35:09.245784 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:09.247300 containerd[1469]: time="2025-02-13T15:35:09.247268710Z" level=info msg="CreateContainer within sandbox \"49a8c3cb212f27e4605cb88c8ae9408ef6be4cc4b9dd2dac7c39ceb0220f1474\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:35:09.261941 kubelet[2299]: E0213 15:35:09.261919 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Feb 13 15:35:09.267446 
containerd[1469]: time="2025-02-13T15:35:09.267397037Z" level=info msg="CreateContainer within sandbox \"860a06cc4fc5e1e2d86ae8dc2b1b66faae62d4eb799ba732c9367bf2537fd6ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"98e612beac9620761558312ec0d1417d14580e7f39acca40e4bc31ebfb2b7cb9\"" Feb 13 15:35:09.267963 containerd[1469]: time="2025-02-13T15:35:09.267924447Z" level=info msg="StartContainer for \"98e612beac9620761558312ec0d1417d14580e7f39acca40e4bc31ebfb2b7cb9\"" Feb 13 15:35:09.274217 kubelet[2299]: W0213 15:35:09.274164 2299 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:09.274303 kubelet[2299]: E0213 15:35:09.274235 2299 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Feb 13 15:35:09.276356 containerd[1469]: time="2025-02-13T15:35:09.276320202Z" level=info msg="CreateContainer within sandbox \"8a38946d235042b04f68e9fccdf7712c4b223378dfe32e90f36b3efe09aa4cd7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"812b4b2a5ff1aa8ee67d329ce3f03931764095460e149b0a906e97ae02cf87cf\"" Feb 13 15:35:09.277006 containerd[1469]: time="2025-02-13T15:35:09.276800572Z" level=info msg="StartContainer for \"812b4b2a5ff1aa8ee67d329ce3f03931764095460e149b0a906e97ae02cf87cf\"" Feb 13 15:35:09.280199 containerd[1469]: time="2025-02-13T15:35:09.280126750Z" level=info msg="CreateContainer within sandbox \"49a8c3cb212f27e4605cb88c8ae9408ef6be4cc4b9dd2dac7c39ceb0220f1474\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41d7e8d5eda2309dcc372528a362c0b2f4fffb85fc61b16f754025568f2d4445\"" Feb 13 15:35:09.280626 containerd[1469]: time="2025-02-13T15:35:09.280598445Z" level=info msg="StartContainer for \"41d7e8d5eda2309dcc372528a362c0b2f4fffb85fc61b16f754025568f2d4445\"" Feb 13 15:35:09.294637 systemd[1]: Started cri-containerd-98e612beac9620761558312ec0d1417d14580e7f39acca40e4bc31ebfb2b7cb9.scope - libcontainer container 98e612beac9620761558312ec0d1417d14580e7f39acca40e4bc31ebfb2b7cb9. Feb 13 15:35:09.311576 systemd[1]: Started cri-containerd-812b4b2a5ff1aa8ee67d329ce3f03931764095460e149b0a906e97ae02cf87cf.scope - libcontainer container 812b4b2a5ff1aa8ee67d329ce3f03931764095460e149b0a906e97ae02cf87cf. Feb 13 15:35:09.317344 systemd[1]: Started cri-containerd-41d7e8d5eda2309dcc372528a362c0b2f4fffb85fc61b16f754025568f2d4445.scope - libcontainer container 41d7e8d5eda2309dcc372528a362c0b2f4fffb85fc61b16f754025568f2d4445. 
Feb 13 15:35:09.365958 kubelet[2299]: I0213 15:35:09.365918 2299 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:09.366715 kubelet[2299]: E0213 15:35:09.366606 2299 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Feb 13 15:35:09.367027 containerd[1469]: time="2025-02-13T15:35:09.366944234Z" level=info msg="StartContainer for \"98e612beac9620761558312ec0d1417d14580e7f39acca40e4bc31ebfb2b7cb9\" returns successfully" Feb 13 15:35:09.367525 containerd[1469]: time="2025-02-13T15:35:09.367062235Z" level=info msg="StartContainer for \"812b4b2a5ff1aa8ee67d329ce3f03931764095460e149b0a906e97ae02cf87cf\" returns successfully" Feb 13 15:35:09.367525 containerd[1469]: time="2025-02-13T15:35:09.367086220Z" level=info msg="StartContainer for \"41d7e8d5eda2309dcc372528a362c0b2f4fffb85fc61b16f754025568f2d4445\" returns successfully" Feb 13 15:35:09.889883 kubelet[2299]: E0213 15:35:09.889793 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:09.892472 kubelet[2299]: E0213 15:35:09.892388 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:09.894474 kubelet[2299]: E0213 15:35:09.894381 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:10.865506 kubelet[2299]: E0213 15:35:10.865460 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:35:10.898561 kubelet[2299]: E0213 15:35:10.897788 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:10.968639 kubelet[2299]: I0213 15:35:10.968599 2299 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:10.974220 kubelet[2299]: I0213 15:35:10.974194 2299 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:35:10.979776 kubelet[2299]: E0213 15:35:10.979754 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.080233 kubelet[2299]: E0213 15:35:11.080181 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.180422 kubelet[2299]: E0213 15:35:11.180305 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.280801 kubelet[2299]: E0213 15:35:11.280767 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.381548 kubelet[2299]: E0213 15:35:11.381502 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.482115 kubelet[2299]: E0213 15:35:11.482026 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.582585 kubelet[2299]: E0213 15:35:11.582549 2299 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.683084 kubelet[2299]: E0213 15:35:11.683040 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.783573 kubelet[2299]: E0213 15:35:11.783537 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.884292 kubelet[2299]: E0213 15:35:11.884244 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:11.922945 kubelet[2299]: E0213 15:35:11.922909 2299 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:11.985191 kubelet[2299]: E0213 15:35:11.985122 2299 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:35:12.853204 kubelet[2299]: I0213 15:35:12.853161 2299 apiserver.go:52] "Watching apiserver" Feb 13 15:35:12.858524 kubelet[2299]: I0213 15:35:12.858495 2299 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:35:13.511375 systemd[1]: Reloading requested from client PID 2581 ('systemctl') (unit session-9.scope)... Feb 13 15:35:13.511390 systemd[1]: Reloading... Feb 13 15:35:13.586504 zram_generator::config[2620]: No configuration found. Feb 13 15:35:13.692746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:35:13.781442 systemd[1]: Reloading finished in 269 ms. Feb 13 15:35:13.825204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:13.844775 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:35:13.845040 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:13.852831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:35:13.993492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:35:13.999437 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:35:14.049195 kubelet[2665]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:35:14.049195 kubelet[2665]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:35:14.049195 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:35:14.049613 kubelet[2665]: I0213 15:35:14.049272 2665 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:35:14.054042 kubelet[2665]: I0213 15:35:14.053999 2665 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:35:14.054042 kubelet[2665]: I0213 15:35:14.054022 2665 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:35:14.054288 kubelet[2665]: I0213 15:35:14.054259 2665 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:35:14.056029 kubelet[2665]: I0213 15:35:14.056001 2665 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:35:14.058040 kubelet[2665]: I0213 15:35:14.057990 2665 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:35:14.125921 kubelet[2665]: I0213 15:35:14.125868 2665 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:35:14.126148 kubelet[2665]: I0213 15:35:14.126124 2665 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:35:14.126324 kubelet[2665]: I0213 15:35:14.126301 2665 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:35:14.126400 kubelet[2665]: I0213 15:35:14.126331 2665 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:35:14.126400 kubelet[2665]: I0213 15:35:14.126341 2665 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:35:14.126400 kubelet[2665]: I0213 15:35:14.126376 2665 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:35:14.126507 kubelet[2665]: I0213 15:35:14.126501 2665 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:35:14.126532 kubelet[2665]: I0213 15:35:14.126517 2665 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:35:14.126556 kubelet[2665]: I0213 15:35:14.126549 2665 kubelet.go:312] "Adding apiserver pod source" Feb 13 
15:35:14.126580 kubelet[2665]: I0213 15:35:14.126563 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:35:14.127684 kubelet[2665]: I0213 15:35:14.127639 2665 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:35:14.127894 kubelet[2665]: I0213 15:35:14.127879 2665 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:35:14.128496 kubelet[2665]: I0213 15:35:14.128310 2665 server.go:1256] "Started kubelet" Feb 13 15:35:14.131242 kubelet[2665]: I0213 15:35:14.130114 2665 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:35:14.137118 kubelet[2665]: I0213 15:35:14.136414 2665 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:35:14.137167 kubelet[2665]: I0213 15:35:14.137149 2665 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:35:14.138400 kubelet[2665]: I0213 15:35:14.137959 2665 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:35:14.138400 kubelet[2665]: I0213 15:35:14.138169 2665 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:35:14.139819 kubelet[2665]: I0213 15:35:14.139667 2665 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:35:14.139819 kubelet[2665]: I0213 15:35:14.139771 2665 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:35:14.140484 kubelet[2665]: I0213 15:35:14.139946 2665 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:35:14.142081 kubelet[2665]: E0213 15:35:14.141537 2665 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:35:14.142245 kubelet[2665]: I0213 15:35:14.142227 2665 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:35:14.142396 kubelet[2665]: I0213 15:35:14.142378 2665 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:35:14.143436 sudo[2682]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:35:14.143860 sudo[2682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:35:14.146417 kubelet[2665]: I0213 15:35:14.145793 2665 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:35:14.151071 kubelet[2665]: I0213 15:35:14.151054 2665 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:35:14.156255 kubelet[2665]: I0213 15:35:14.156226 2665 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:35:14.156296 kubelet[2665]: I0213 15:35:14.156266 2665 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:35:14.156296 kubelet[2665]: I0213 15:35:14.156290 2665 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:35:14.156368 kubelet[2665]: E0213 15:35:14.156344 2665 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:35:14.181040 kubelet[2665]: I0213 15:35:14.181003 2665 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:35:14.181040 kubelet[2665]: I0213 15:35:14.181027 2665 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:35:14.181040 kubelet[2665]: I0213 15:35:14.181043 2665 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:35:14.181202 kubelet[2665]: I0213 15:35:14.181173 2665 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:35:14.181202 kubelet[2665]: I0213 15:35:14.181193 2665 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:35:14.181202 kubelet[2665]: I0213 15:35:14.181200 2665 policy_none.go:49] "None policy: Start" Feb 13 15:35:14.181842 kubelet[2665]: I0213 15:35:14.181751 2665 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:35:14.181842 kubelet[2665]: I0213 15:35:14.181769 2665 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:35:14.181908 kubelet[2665]: I0213 15:35:14.181899 2665 state_mem.go:75] "Updated machine memory state" Feb 13 15:35:14.186212 kubelet[2665]: I0213 15:35:14.186195 2665 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:35:14.186758 kubelet[2665]: I0213 15:35:14.186710 2665 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:35:14.242663 kubelet[2665]: I0213 15:35:14.242637 2665 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:35:14.249011 kubelet[2665]: I0213 15:35:14.248887 2665 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:35:14.249011 kubelet[2665]: I0213 15:35:14.248938 2665 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:35:14.256916 kubelet[2665]: I0213 15:35:14.256522 2665 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:35:14.256916 kubelet[2665]: I0213 15:35:14.256590 2665 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:35:14.256916 kubelet[2665]: I0213 15:35:14.256617 2665 topology_manager.go:215] "Topology Admit Handler" podUID="17b334ee845a56a8a2315edf9b7658b2" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:35:14.341814 kubelet[2665]: I0213 15:35:14.341688 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:14.341814 kubelet[2665]: I0213 15:35:14.341742 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:14.341814 kubelet[2665]: I0213 15:35:14.341771 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17b334ee845a56a8a2315edf9b7658b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"17b334ee845a56a8a2315edf9b7658b2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:14.341814 kubelet[2665]: I0213 15:35:14.341790 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:35:14.341814 kubelet[2665]: I0213 15:35:14.341808 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17b334ee845a56a8a2315edf9b7658b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"17b334ee845a56a8a2315edf9b7658b2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:14.342017 kubelet[2665]: I0213 15:35:14.341825 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17b334ee845a56a8a2315edf9b7658b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"17b334ee845a56a8a2315edf9b7658b2\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:14.342017 kubelet[2665]: I0213 15:35:14.341843 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:14.342017 kubelet[2665]: I0213 15:35:14.341861 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:14.342017 kubelet[2665]: I0213 15:35:14.341879 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:35:14.571499 kubelet[2665]: E0213 15:35:14.571256 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:14.571499 kubelet[2665]: E0213 15:35:14.571429 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:14.571499 kubelet[2665]: E0213 
15:35:14.571491 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:14.625155 sudo[2682]: pam_unix(sudo:session): session closed for user root Feb 13 15:35:15.127098 kubelet[2665]: I0213 15:35:15.127059 2665 apiserver.go:52] "Watching apiserver" Feb 13 15:35:15.140597 kubelet[2665]: I0213 15:35:15.140545 2665 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:35:15.167757 kubelet[2665]: E0213 15:35:15.167411 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:15.167757 kubelet[2665]: E0213 15:35:15.167676 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:15.171509 kubelet[2665]: E0213 15:35:15.171482 2665 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:35:15.171931 kubelet[2665]: E0213 15:35:15.171899 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:15.199704 kubelet[2665]: I0213 15:35:15.199667 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.199618227 podStartE2EDuration="1.199618227s" podCreationTimestamp="2025-02-13 15:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:15.188700552 +0000 UTC m=+1.184505405" watchObservedRunningTime="2025-02-13 15:35:15.199618227 +0000 UTC m=+1.195423081" Feb 13 15:35:15.209169 kubelet[2665]: I0213 15:35:15.209112 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.209082577 podStartE2EDuration="1.209082577s" podCreationTimestamp="2025-02-13 15:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:15.200279922 +0000 UTC m=+1.196084765" watchObservedRunningTime="2025-02-13 15:35:15.209082577 +0000 UTC m=+1.204887430" Feb 13 15:35:15.216682 kubelet[2665]: I0213 15:35:15.216643 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.216596187 podStartE2EDuration="1.216596187s" podCreationTimestamp="2025-02-13 15:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:15.209541273 +0000 UTC m=+1.205346126" watchObservedRunningTime="2025-02-13 15:35:15.216596187 +0000 UTC m=+1.212401050" Feb 13 15:35:15.929958 sudo[1663]: pam_unix(sudo:session): session closed for user root Feb 13 15:35:15.931289 sshd[1662]: Connection closed by 10.0.0.1 port 49146 Feb 13 15:35:15.931819 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:15.936239 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:49146.service: Deactivated successfully. 
Feb 13 15:35:15.938198 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:35:15.938400 systemd[1]: session-9.scope: Consumed 4.398s CPU time, 190.5M memory peak, 0B memory swap peak. Feb 13 15:35:15.938876 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:35:15.939784 systemd-logind[1455]: Removed session 9. Feb 13 15:35:16.168382 kubelet[2665]: E0213 15:35:16.168338 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:16.168880 kubelet[2665]: E0213 15:35:16.168402 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:19.602255 kubelet[2665]: E0213 15:35:19.602220 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:20.007936 kubelet[2665]: E0213 15:35:20.007813 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:20.172435 kubelet[2665]: E0213 15:35:20.172397 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:20.172435 kubelet[2665]: E0213 15:35:20.172413 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:24.543485 update_engine[1457]: I20250213 15:35:24.543370 1457 update_attempter.cc:509] Updating boot flags... Feb 13 15:35:24.572472 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2748) Feb 13 15:35:24.607484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2749) Feb 13 15:35:24.637489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2749) Feb 13 15:35:25.540299 kubelet[2665]: E0213 15:35:25.540256 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:26.942583 kubelet[2665]: I0213 15:35:26.942547 2665 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:35:26.943005 containerd[1469]: time="2025-02-13T15:35:26.942900893Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:35:26.943234 kubelet[2665]: I0213 15:35:26.943177 2665 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:35:27.492947 kubelet[2665]: I0213 15:35:27.492891 2665 topology_manager.go:215] "Topology Admit Handler" podUID="69a285e1-9f8a-471a-8862-c83b69e6792d" podNamespace="kube-system" podName="kube-proxy-z7cmz" Feb 13 15:35:27.499554 kubelet[2665]: I0213 15:35:27.499392 2665 topology_manager.go:215] "Topology Admit Handler" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" podNamespace="kube-system" podName="cilium-c5rsg" Feb 13 15:35:27.505237 systemd[1]: Created slice kubepods-besteffort-pod69a285e1_9f8a_471a_8862_c83b69e6792d.slice - libcontainer container kubepods-besteffort-pod69a285e1_9f8a_471a_8862_c83b69e6792d.slice. Feb 13 15:35:27.516234 systemd[1]: Created slice kubepods-burstable-podbc1447e6_b0da_4740_a8ca_9db32d5ef8ff.slice - libcontainer container kubepods-burstable-podbc1447e6_b0da_4740_a8ca_9db32d5ef8ff.slice. Feb 13 15:35:27.618930 kubelet[2665]: I0213 15:35:27.618859 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a285e1-9f8a-471a-8862-c83b69e6792d-lib-modules\") pod \"kube-proxy-z7cmz\" (UID: \"69a285e1-9f8a-471a-8862-c83b69e6792d\") " pod="kube-system/kube-proxy-z7cmz" Feb 13 15:35:27.618930 kubelet[2665]: I0213 15:35:27.618911 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-xtables-lock\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.618930 kubelet[2665]: I0213 15:35:27.618947 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-clustermesh-secrets\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619138 kubelet[2665]: I0213 15:35:27.618965 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-net\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619138 kubelet[2665]: I0213 15:35:27.618983 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-etc-cni-netd\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619138 kubelet[2665]: I0213 15:35:27.619002 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-kernel\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619138 kubelet[2665]: I0213 15:35:27.619021 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-cgroup\") pod \"cilium-c5rsg\" 
(UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619138 kubelet[2665]: I0213 15:35:27.619083 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6lv9\" (UniqueName: \"kubernetes.io/projected/69a285e1-9f8a-471a-8862-c83b69e6792d-kube-api-access-l6lv9\") pod \"kube-proxy-z7cmz\" (UID: \"69a285e1-9f8a-471a-8862-c83b69e6792d\") " pod="kube-system/kube-proxy-z7cmz" Feb 13 15:35:27.619262 kubelet[2665]: I0213 15:35:27.619102 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-run\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619262 kubelet[2665]: I0213 15:35:27.619120 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-lib-modules\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619262 kubelet[2665]: I0213 15:35:27.619137 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hubble-tls\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619262 kubelet[2665]: I0213 15:35:27.619155 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-bpf-maps\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619262 kubelet[2665]: I0213 15:35:27.619175 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hostproc\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619262 kubelet[2665]: I0213 15:35:27.619200 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bz6\" (UniqueName: \"kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-kube-api-access-c9bz6\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619398 kubelet[2665]: I0213 15:35:27.619219 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/69a285e1-9f8a-471a-8862-c83b69e6792d-kube-proxy\") pod \"kube-proxy-z7cmz\" (UID: \"69a285e1-9f8a-471a-8862-c83b69e6792d\") " pod="kube-system/kube-proxy-z7cmz" Feb 13 15:35:27.619398 kubelet[2665]: I0213 15:35:27.619237 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cni-path\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619398 kubelet[2665]: I0213 15:35:27.619301 2665 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-config-path\") pod \"cilium-c5rsg\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " pod="kube-system/cilium-c5rsg" Feb 13 15:35:27.619502 kubelet[2665]: I0213 15:35:27.619399 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a285e1-9f8a-471a-8862-c83b69e6792d-xtables-lock\") pod \"kube-proxy-z7cmz\" (UID: \"69a285e1-9f8a-471a-8862-c83b69e6792d\") " pod="kube-system/kube-proxy-z7cmz" Feb 13 15:35:27.728953 kubelet[2665]: E0213 15:35:27.728705 2665 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:35:27.728953 kubelet[2665]: E0213 15:35:27.728730 2665 projected.go:200] Error preparing data for projected volume kube-api-access-l6lv9 for pod kube-system/kube-proxy-z7cmz: configmap "kube-root-ca.crt" not found Feb 13 15:35:27.728953 kubelet[2665]: E0213 15:35:27.728778 2665 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69a285e1-9f8a-471a-8862-c83b69e6792d-kube-api-access-l6lv9 podName:69a285e1-9f8a-471a-8862-c83b69e6792d nodeName:}" failed. No retries permitted until 2025-02-13 15:35:28.22876038 +0000 UTC m=+14.224565233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l6lv9" (UniqueName: "kubernetes.io/projected/69a285e1-9f8a-471a-8862-c83b69e6792d-kube-api-access-l6lv9") pod "kube-proxy-z7cmz" (UID: "69a285e1-9f8a-471a-8862-c83b69e6792d") : configmap "kube-root-ca.crt" not found Feb 13 15:35:27.728953 kubelet[2665]: E0213 15:35:27.728952 2665 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:35:27.729232 kubelet[2665]: E0213 15:35:27.728966 2665 projected.go:200] Error preparing data for projected volume kube-api-access-c9bz6 for pod kube-system/cilium-c5rsg: configmap "kube-root-ca.crt" not found Feb 13 15:35:27.729232 kubelet[2665]: E0213 15:35:27.728995 2665 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-kube-api-access-c9bz6 podName:bc1447e6-b0da-4740-a8ca-9db32d5ef8ff nodeName:}" failed. No retries permitted until 2025-02-13 15:35:28.228985486 +0000 UTC m=+14.224790339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c9bz6" (UniqueName: "kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-kube-api-access-c9bz6") pod "cilium-c5rsg" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff") : configmap "kube-root-ca.crt" not found Feb 13 15:35:27.987139 kubelet[2665]: I0213 15:35:27.987088 2665 topology_manager.go:215] "Topology Admit Handler" podUID="21ca4706-133b-40cd-9e2f-79e49bbc84a7" podNamespace="kube-system" podName="cilium-operator-5cc964979-v4t6b" Feb 13 15:35:27.998224 systemd[1]: Created slice kubepods-besteffort-pod21ca4706_133b_40cd_9e2f_79e49bbc84a7.slice - libcontainer container kubepods-besteffort-pod21ca4706_133b_40cd_9e2f_79e49bbc84a7.slice. 
Feb 13 15:35:28.121857 kubelet[2665]: I0213 15:35:28.121809 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974bg\" (UniqueName: \"kubernetes.io/projected/21ca4706-133b-40cd-9e2f-79e49bbc84a7-kube-api-access-974bg\") pod \"cilium-operator-5cc964979-v4t6b\" (UID: \"21ca4706-133b-40cd-9e2f-79e49bbc84a7\") " pod="kube-system/cilium-operator-5cc964979-v4t6b" Feb 13 15:35:28.121857 kubelet[2665]: I0213 15:35:28.121849 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21ca4706-133b-40cd-9e2f-79e49bbc84a7-cilium-config-path\") pod \"cilium-operator-5cc964979-v4t6b\" (UID: \"21ca4706-133b-40cd-9e2f-79e49bbc84a7\") " pod="kube-system/cilium-operator-5cc964979-v4t6b" Feb 13 15:35:28.301935 kubelet[2665]: E0213 15:35:28.301910 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:28.302388 containerd[1469]: time="2025-02-13T15:35:28.302348173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v4t6b,Uid:21ca4706-133b-40cd-9e2f-79e49bbc84a7,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:28.329089 containerd[1469]: time="2025-02-13T15:35:28.328946406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:28.329089 containerd[1469]: time="2025-02-13T15:35:28.329039232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:28.329489 containerd[1469]: time="2025-02-13T15:35:28.329060452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:28.329489 containerd[1469]: time="2025-02-13T15:35:28.329401497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:28.348611 systemd[1]: Started cri-containerd-2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700.scope - libcontainer container 2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700. 
Feb 13 15:35:28.381037 containerd[1469]: time="2025-02-13T15:35:28.380980496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v4t6b,Uid:21ca4706-133b-40cd-9e2f-79e49bbc84a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700\"" Feb 13 15:35:28.381681 kubelet[2665]: E0213 15:35:28.381662 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:28.382776 containerd[1469]: time="2025-02-13T15:35:28.382721107Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:35:28.416875 kubelet[2665]: E0213 15:35:28.416847 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:28.417293 containerd[1469]: time="2025-02-13T15:35:28.417266188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7cmz,Uid:69a285e1-9f8a-471a-8862-c83b69e6792d,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:28.419531 kubelet[2665]: E0213 15:35:28.419512 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:28.419774 containerd[1469]: time="2025-02-13T15:35:28.419756577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5rsg,Uid:bc1447e6-b0da-4740-a8ca-9db32d5ef8ff,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:28.446638 containerd[1469]: time="2025-02-13T15:35:28.446512980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:28.446638 containerd[1469]: time="2025-02-13T15:35:28.446612888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:28.446883 containerd[1469]: time="2025-02-13T15:35:28.446633448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:28.446883 containerd[1469]: time="2025-02-13T15:35:28.446789282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:28.448364 containerd[1469]: time="2025-02-13T15:35:28.447948875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:28.448364 containerd[1469]: time="2025-02-13T15:35:28.448014579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:28.448364 containerd[1469]: time="2025-02-13T15:35:28.448033384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:28.448364 containerd[1469]: time="2025-02-13T15:35:28.448141189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:28.473583 systemd[1]: Started cri-containerd-ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8.scope - libcontainer container ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8. Feb 13 15:35:28.475073 systemd[1]: Started cri-containerd-de8247f820567613f0696dcbb24842ed6cbd4232cc1c534cbbc57466a9273f75.scope - libcontainer container de8247f820567613f0696dcbb24842ed6cbd4232cc1c534cbbc57466a9273f75. Feb 13 15:35:28.497164 containerd[1469]: time="2025-02-13T15:35:28.497125041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5rsg,Uid:bc1447e6-b0da-4740-a8ca-9db32d5ef8ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\"" Feb 13 15:35:28.497723 kubelet[2665]: E0213 15:35:28.497702 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:28.502993 containerd[1469]: time="2025-02-13T15:35:28.502931500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7cmz,Uid:69a285e1-9f8a-471a-8862-c83b69e6792d,Namespace:kube-system,Attempt:0,} returns sandbox id \"de8247f820567613f0696dcbb24842ed6cbd4232cc1c534cbbc57466a9273f75\"" Feb 13 15:35:28.503570 kubelet[2665]: E0213 15:35:28.503547 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:28.505331 containerd[1469]: time="2025-02-13T15:35:28.505304037Z" level=info msg="CreateContainer within sandbox \"de8247f820567613f0696dcbb24842ed6cbd4232cc1c534cbbc57466a9273f75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:35:28.523064 containerd[1469]: time="2025-02-13T15:35:28.523018272Z" level=info msg="CreateContainer within sandbox \"de8247f820567613f0696dcbb24842ed6cbd4232cc1c534cbbc57466a9273f75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81d56bfb42373264ed8e8ef493ea8d575168b6ee1744b312d9436db985d5efea\"" Feb 13 15:35:28.523602 containerd[1469]: time="2025-02-13T15:35:28.523576528Z" level=info msg="StartContainer for \"81d56bfb42373264ed8e8ef493ea8d575168b6ee1744b312d9436db985d5efea\"" Feb 13 15:35:28.550582 systemd[1]: Started cri-containerd-81d56bfb42373264ed8e8ef493ea8d575168b6ee1744b312d9436db985d5efea.scope - libcontainer container 81d56bfb42373264ed8e8ef493ea8d575168b6ee1744b312d9436db985d5efea. 
Feb 13 15:35:28.580404 containerd[1469]: time="2025-02-13T15:35:28.579771414Z" level=info msg="StartContainer for \"81d56bfb42373264ed8e8ef493ea8d575168b6ee1744b312d9436db985d5efea\" returns successfully" Feb 13 15:35:29.188158 kubelet[2665]: E0213 15:35:29.188126 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:29.195745 kubelet[2665]: I0213 15:35:29.195715 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z7cmz" podStartSLOduration=2.1955987869999998 podStartE2EDuration="2.195598787s" podCreationTimestamp="2025-02-13 15:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:35:29.195529566 +0000 UTC m=+15.191334419" watchObservedRunningTime="2025-02-13 15:35:29.195598787 +0000 UTC m=+15.191403640" Feb 13 15:35:29.625745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346528131.mount: Deactivated successfully. Feb 13 15:35:34.415359 containerd[1469]: time="2025-02-13T15:35:34.415293907Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:34.416010 containerd[1469]: time="2025-02-13T15:35:34.415960224Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:35:34.417094 containerd[1469]: time="2025-02-13T15:35:34.417061651Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:34.418345 containerd[1469]: time="2025-02-13T15:35:34.418294747Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.035546016s" Feb 13 15:35:34.418345 containerd[1469]: time="2025-02-13T15:35:34.418334601Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:35:34.421579 containerd[1469]: time="2025-02-13T15:35:34.421546839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:35:34.428714 containerd[1469]: time="2025-02-13T15:35:34.428676947Z" level=info msg="CreateContainer within sandbox \"2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:35:34.444095 containerd[1469]: time="2025-02-13T15:35:34.444049678Z" level=info msg="CreateContainer within sandbox \"2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\"" 
Feb 13 15:35:34.444582 containerd[1469]: time="2025-02-13T15:35:34.444549540Z" level=info msg="StartContainer for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\"" Feb 13 15:35:34.468030 systemd[1]: run-containerd-runc-k8s.io-ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75-runc.eLZeBq.mount: Deactivated successfully. Feb 13 15:35:34.484619 systemd[1]: Started cri-containerd-ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75.scope - libcontainer container ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75. Feb 13 15:35:34.512687 containerd[1469]: time="2025-02-13T15:35:34.512362819Z" level=info msg="StartContainer for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" returns successfully" Feb 13 15:35:35.200253 kubelet[2665]: E0213 15:35:35.200219 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:35.209868 kubelet[2665]: I0213 15:35:35.209830 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-v4t6b" podStartSLOduration=2.173464824 podStartE2EDuration="8.209787447s" podCreationTimestamp="2025-02-13 15:35:27 +0000 UTC" firstStartedPulling="2025-02-13 15:35:28.382366407 +0000 UTC m=+14.378171260" lastFinishedPulling="2025-02-13 15:35:34.41868903 +0000 UTC m=+20.414493883" observedRunningTime="2025-02-13 15:35:35.209506729 +0000 UTC m=+21.205311582" watchObservedRunningTime="2025-02-13 15:35:35.209787447 +0000 UTC m=+21.205592301" Feb 13 15:35:36.201994 kubelet[2665]: E0213 15:35:36.201952 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:43.724965 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:53322.service - OpenSSH per-connection server daemon (10.0.0.1:53322). Feb 13 15:35:43.788889 sshd[3097]: Accepted publickey for core from 10.0.0.1 port 53322 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:35:43.790587 sshd-session[3097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:43.794822 systemd-logind[1455]: New session 10 of user core. Feb 13 15:35:43.800582 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:35:43.929248 sshd[3099]: Connection closed by 10.0.0.1 port 53322 Feb 13 15:35:43.929602 sshd-session[3097]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:43.933483 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:53322.service: Deactivated successfully. Feb 13 15:35:43.935168 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:35:43.935838 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:35:43.936758 systemd-logind[1455]: Removed session 10. Feb 13 15:35:46.511325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559727867.mount: Deactivated successfully. 
Feb 13 15:35:48.598410 containerd[1469]: time="2025-02-13T15:35:48.598336477Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:48.599130 containerd[1469]: time="2025-02-13T15:35:48.599076629Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:35:48.600407 containerd[1469]: time="2025-02-13T15:35:48.600373988Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:35:48.602670 containerd[1469]: time="2025-02-13T15:35:48.602600863Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.181012364s" Feb 13 15:35:48.602670 containerd[1469]: time="2025-02-13T15:35:48.602658471Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:35:48.605024 containerd[1469]: time="2025-02-13T15:35:48.604994181Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:35:48.616629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615764608.mount: Deactivated successfully. Feb 13 15:35:48.617875 containerd[1469]: time="2025-02-13T15:35:48.617836229Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\"" Feb 13 15:35:48.618527 containerd[1469]: time="2025-02-13T15:35:48.618302546Z" level=info msg="StartContainer for \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\"" Feb 13 15:35:48.659618 systemd[1]: Started cri-containerd-bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb.scope - libcontainer container bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb. Feb 13 15:35:48.735151 systemd[1]: cri-containerd-bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb.scope: Deactivated successfully. Feb 13 15:35:48.797865 containerd[1469]: time="2025-02-13T15:35:48.797818118Z" level=info msg="StartContainer for \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\" returns successfully" Feb 13 15:35:48.942953 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734). Feb 13 15:35:49.140856 sshd[3193]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:35:49.143019 sshd-session[3193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:49.148414 systemd-logind[1455]: New session 11 of user core. 
Feb 13 15:35:49.157710 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:35:49.278959 containerd[1469]: time="2025-02-13T15:35:49.278763881Z" level=info msg="shim disconnected" id=bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb namespace=k8s.io Feb 13 15:35:49.278959 containerd[1469]: time="2025-02-13T15:35:49.278865211Z" level=warning msg="cleaning up after shim disconnected" id=bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb namespace=k8s.io Feb 13 15:35:49.278959 containerd[1469]: time="2025-02-13T15:35:49.278876402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:35:49.327576 sshd[3195]: Connection closed by 10.0.0.1 port 47734 Feb 13 15:35:49.328032 sshd-session[3193]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:49.332770 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:47734.service: Deactivated successfully. Feb 13 15:35:49.335086 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:35:49.335909 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:35:49.336897 systemd-logind[1455]: Removed session 11. Feb 13 15:35:49.583758 kubelet[2665]: E0213 15:35:49.583232 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:49.587341 containerd[1469]: time="2025-02-13T15:35:49.587189333Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:35:49.603094 containerd[1469]: time="2025-02-13T15:35:49.603027790Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\"" Feb 13 15:35:49.603710 containerd[1469]: time="2025-02-13T15:35:49.603657262Z" level=info msg="StartContainer for \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\"" Feb 13 15:35:49.613952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb-rootfs.mount: Deactivated successfully. Feb 13 15:35:49.637577 systemd[1]: Started cri-containerd-a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432.scope - libcontainer container a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432. Feb 13 15:35:49.663169 containerd[1469]: time="2025-02-13T15:35:49.663112700Z" level=info msg="StartContainer for \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\" returns successfully" Feb 13 15:35:49.674670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:35:49.675097 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:35:49.675238 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:35:49.683372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:35:49.683766 systemd[1]: cri-containerd-a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432.scope: Deactivated successfully. Feb 13 15:35:49.697560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432-rootfs.mount: Deactivated successfully. 
Feb 13 15:35:49.698615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:35:49.713179 containerd[1469]: time="2025-02-13T15:35:49.713095867Z" level=info msg="shim disconnected" id=a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432 namespace=k8s.io Feb 13 15:35:49.713179 containerd[1469]: time="2025-02-13T15:35:49.713168142Z" level=warning msg="cleaning up after shim disconnected" id=a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432 namespace=k8s.io Feb 13 15:35:49.713179 containerd[1469]: time="2025-02-13T15:35:49.713177179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:35:50.586278 kubelet[2665]: E0213 15:35:50.586234 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:50.589011 containerd[1469]: time="2025-02-13T15:35:50.588975430Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:35:50.605614 containerd[1469]: time="2025-02-13T15:35:50.605556327Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\"" Feb 13 15:35:50.606084 containerd[1469]: time="2025-02-13T15:35:50.606028715Z" level=info msg="StartContainer for \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\"" Feb 13 15:35:50.629791 systemd[1]: run-containerd-runc-k8s.io-990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8-runc.aSEkUW.mount: Deactivated successfully. Feb 13 15:35:50.639573 systemd[1]: Started cri-containerd-990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8.scope - libcontainer container 990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8. Feb 13 15:35:50.669585 containerd[1469]: time="2025-02-13T15:35:50.669537115Z" level=info msg="StartContainer for \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\" returns successfully" Feb 13 15:35:50.671651 systemd[1]: cri-containerd-990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8.scope: Deactivated successfully. Feb 13 15:35:50.689661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8-rootfs.mount: Deactivated successfully. 
Feb 13 15:35:50.695617 containerd[1469]: time="2025-02-13T15:35:50.695551047Z" level=info msg="shim disconnected" id=990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8 namespace=k8s.io Feb 13 15:35:50.695617 containerd[1469]: time="2025-02-13T15:35:50.695617142Z" level=warning msg="cleaning up after shim disconnected" id=990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8 namespace=k8s.io Feb 13 15:35:50.695760 containerd[1469]: time="2025-02-13T15:35:50.695626119Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:35:51.590559 kubelet[2665]: E0213 15:35:51.590504 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:51.592801 containerd[1469]: time="2025-02-13T15:35:51.592753078Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:35:51.805430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211892567.mount: Deactivated successfully. Feb 13 15:35:51.889889 containerd[1469]: time="2025-02-13T15:35:51.889651582Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\"" Feb 13 15:35:51.890786 containerd[1469]: time="2025-02-13T15:35:51.890732904Z" level=info msg="StartContainer for \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\"" Feb 13 15:35:51.941830 systemd[1]: Started cri-containerd-80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d.scope - libcontainer container 80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d. Feb 13 15:35:51.969158 systemd[1]: cri-containerd-80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d.scope: Deactivated successfully. Feb 13 15:35:52.040747 containerd[1469]: time="2025-02-13T15:35:52.040678855Z" level=info msg="StartContainer for \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\" returns successfully" Feb 13 15:35:52.071188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d-rootfs.mount: Deactivated successfully. 
Feb 13 15:35:52.208653 containerd[1469]: time="2025-02-13T15:35:52.208436373Z" level=info msg="shim disconnected" id=80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d namespace=k8s.io Feb 13 15:35:52.208653 containerd[1469]: time="2025-02-13T15:35:52.208539277Z" level=warning msg="cleaning up after shim disconnected" id=80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d namespace=k8s.io Feb 13 15:35:52.208653 containerd[1469]: time="2025-02-13T15:35:52.208554485Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:35:52.594854 kubelet[2665]: E0213 15:35:52.594821 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:52.597026 containerd[1469]: time="2025-02-13T15:35:52.596974972Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:35:52.709725 containerd[1469]: time="2025-02-13T15:35:52.709665037Z" level=info msg="CreateContainer within sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\"" Feb 13 15:35:52.710282 containerd[1469]: time="2025-02-13T15:35:52.710233525Z" level=info msg="StartContainer for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\"" Feb 13 15:35:52.739612 systemd[1]: Started cri-containerd-23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40.scope - libcontainer container 23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40. Feb 13 15:35:52.793229 containerd[1469]: time="2025-02-13T15:35:52.793081058Z" level=info msg="StartContainer for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" returns successfully" Feb 13 15:35:52.947535 kubelet[2665]: I0213 15:35:52.947406 2665 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:35:53.098032 kubelet[2665]: I0213 15:35:53.097980 2665 topology_manager.go:215] "Topology Admit Handler" podUID="d79beadc-52fc-424a-9716-af15e5eece7c" podNamespace="kube-system" podName="coredns-76f75df574-nlgkb" Feb 13 15:35:53.099978 kubelet[2665]: I0213 15:35:53.099726 2665 topology_manager.go:215] "Topology Admit Handler" podUID="e5d0c4e9-bb18-41a2-8630-f622112c3f8c" podNamespace="kube-system" podName="coredns-76f75df574-kglzs" Feb 13 15:35:53.109193 systemd[1]: Created slice kubepods-burstable-podd79beadc_52fc_424a_9716_af15e5eece7c.slice - libcontainer container kubepods-burstable-podd79beadc_52fc_424a_9716_af15e5eece7c.slice. Feb 13 15:35:53.115376 systemd[1]: Created slice kubepods-burstable-pode5d0c4e9_bb18_41a2_8630_f622112c3f8c.slice - libcontainer container kubepods-burstable-pode5d0c4e9_bb18_41a2_8630_f622112c3f8c.slice. 
Feb 13 15:35:53.190211 kubelet[2665]: I0213 15:35:53.190169 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpt8\" (UniqueName: \"kubernetes.io/projected/e5d0c4e9-bb18-41a2-8630-f622112c3f8c-kube-api-access-hnpt8\") pod \"coredns-76f75df574-kglzs\" (UID: \"e5d0c4e9-bb18-41a2-8630-f622112c3f8c\") " pod="kube-system/coredns-76f75df574-kglzs" Feb 13 15:35:53.190211 kubelet[2665]: I0213 15:35:53.190214 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d79beadc-52fc-424a-9716-af15e5eece7c-config-volume\") pod \"coredns-76f75df574-nlgkb\" (UID: \"d79beadc-52fc-424a-9716-af15e5eece7c\") " pod="kube-system/coredns-76f75df574-nlgkb" Feb 13 15:35:53.190211 kubelet[2665]: I0213 15:35:53.190235 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5d0c4e9-bb18-41a2-8630-f622112c3f8c-config-volume\") pod \"coredns-76f75df574-kglzs\" (UID: \"e5d0c4e9-bb18-41a2-8630-f622112c3f8c\") " pod="kube-system/coredns-76f75df574-kglzs" Feb 13 15:35:53.190211 kubelet[2665]: I0213 15:35:53.190255 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkr5\" (UniqueName: \"kubernetes.io/projected/d79beadc-52fc-424a-9716-af15e5eece7c-kube-api-access-9vkr5\") pod \"coredns-76f75df574-nlgkb\" (UID: \"d79beadc-52fc-424a-9716-af15e5eece7c\") " pod="kube-system/coredns-76f75df574-nlgkb" Feb 13 15:35:53.413385 kubelet[2665]: E0213 15:35:53.413330 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:53.414286 containerd[1469]: time="2025-02-13T15:35:53.414241277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nlgkb,Uid:d79beadc-52fc-424a-9716-af15e5eece7c,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:53.417911 kubelet[2665]: E0213 15:35:53.417878 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:53.418324 containerd[1469]: time="2025-02-13T15:35:53.418284813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kglzs,Uid:e5d0c4e9-bb18-41a2-8630-f622112c3f8c,Namespace:kube-system,Attempt:0,}" Feb 13 15:35:53.603120 kubelet[2665]: E0213 15:35:53.603078 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:53.644781 kubelet[2665]: I0213 15:35:53.644493 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-c5rsg" podStartSLOduration=6.539613493 podStartE2EDuration="26.644433962s" podCreationTimestamp="2025-02-13 15:35:27 +0000 UTC" firstStartedPulling="2025-02-13 15:35:28.49812395 +0000 UTC m=+14.493928793" lastFinishedPulling="2025-02-13 15:35:48.602944409 +0000 UTC m=+34.598749262" observedRunningTime="2025-02-13 15:35:53.644349974 +0000 UTC m=+39.640154827" watchObservedRunningTime="2025-02-13 15:35:53.644433962 +0000 UTC m=+39.640238815" Feb 13 15:35:54.340004 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:47736.service - OpenSSH per-connection server daemon (10.0.0.1:47736). 
Feb 13 15:35:54.388685 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 47736 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:35:54.390405 sshd-session[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:54.394557 systemd-logind[1455]: New session 12 of user core. Feb 13 15:35:54.402613 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:35:54.526730 sshd[3537]: Connection closed by 10.0.0.1 port 47736 Feb 13 15:35:54.527244 sshd-session[3535]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:54.531477 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:47736.service: Deactivated successfully. Feb 13 15:35:54.533634 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:35:54.534324 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:35:54.535506 systemd-logind[1455]: Removed session 12. Feb 13 15:35:54.605225 kubelet[2665]: E0213 15:35:54.605116 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:54.957875 systemd-networkd[1404]: cilium_host: Link UP Feb 13 15:35:54.958087 systemd-networkd[1404]: cilium_net: Link UP Feb 13 15:35:54.958092 systemd-networkd[1404]: cilium_net: Gained carrier Feb 13 15:35:54.958383 systemd-networkd[1404]: cilium_host: Gained carrier Feb 13 15:35:54.969466 systemd-networkd[1404]: cilium_net: Gained IPv6LL Feb 13 15:35:55.063699 systemd-networkd[1404]: cilium_vxlan: Link UP Feb 13 15:35:55.063713 systemd-networkd[1404]: cilium_vxlan: Gained carrier Feb 13 15:35:55.271605 systemd-networkd[1404]: cilium_host: Gained IPv6LL Feb 13 15:35:55.273478 kernel: NET: Registered PF_ALG protocol family Feb 13 15:35:55.606383 kubelet[2665]: E0213 15:35:55.606347 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:55.934231 systemd-networkd[1404]: lxc_health: Link UP Feb 13 15:35:55.947173 systemd-networkd[1404]: lxc_health: Gained carrier Feb 13 15:35:56.495749 systemd-networkd[1404]: lxc47462adf0afc: Link UP Feb 13 15:35:56.499119 systemd-networkd[1404]: lxc6a25e57c0a09: Link UP Feb 13 15:35:56.506545 kernel: eth0: renamed from tmp210bb Feb 13 15:35:56.515479 kernel: eth0: renamed from tmp22ef5 Feb 13 15:35:56.519808 systemd-networkd[1404]: lxc6a25e57c0a09: Gained carrier Feb 13 15:35:56.520980 systemd-networkd[1404]: lxc47462adf0afc: Gained carrier Feb 13 15:35:56.608879 kubelet[2665]: E0213 15:35:56.608650 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:56.824877 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL Feb 13 15:35:57.143634 systemd-networkd[1404]: lxc_health: Gained IPv6LL Feb 13 15:35:57.610585 kubelet[2665]: E0213 15:35:57.610552 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:58.039639 systemd-networkd[1404]: lxc6a25e57c0a09: Gained IPv6LL Feb 13 15:35:58.167679 systemd-networkd[1404]: lxc47462adf0afc: Gained IPv6LL Feb 13 15:35:58.612572 kubelet[2665]: E0213 15:35:58.612533 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:59.546477 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:56236.service - OpenSSH per-connection server daemon (10.0.0.1:56236). Feb 13 15:35:59.593320 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 56236 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:35:59.595058 sshd-session[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:59.603307 systemd-logind[1455]: New session 13 of user core. Feb 13 15:35:59.621046 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:35:59.759711 sshd[3942]: Connection closed by 10.0.0.1 port 56236 Feb 13 15:35:59.763395 sshd-session[3935]: pam_unix(sshd:session): session closed for user core Feb 13 15:35:59.770248 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:56236.service: Deactivated successfully. Feb 13 15:35:59.773953 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:35:59.777507 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:35:59.786821 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:56252.service - OpenSSH per-connection server daemon (10.0.0.1:56252). Feb 13 15:35:59.788662 systemd-logind[1455]: Removed session 13. Feb 13 15:35:59.827692 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 56252 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:35:59.828770 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:35:59.833800 systemd-logind[1455]: New session 14 of user core. Feb 13 15:35:59.840724 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:35:59.853201 containerd[1469]: time="2025-02-13T15:35:59.852667771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:59.853521 containerd[1469]: time="2025-02-13T15:35:59.853246949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:59.853521 containerd[1469]: time="2025-02-13T15:35:59.853265364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:59.853521 containerd[1469]: time="2025-02-13T15:35:59.853365141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:35:59.853521 containerd[1469]: time="2025-02-13T15:35:59.853429672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:35:59.853521 containerd[1469]: time="2025-02-13T15:35:59.853442095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:59.853521 containerd[1469]: time="2025-02-13T15:35:59.853362957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:59.853706 containerd[1469]: time="2025-02-13T15:35:59.853643563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:35:59.868489 systemd[1]: run-containerd-runc-k8s.io-22ef534a9921f769f01333e9603b2a3f3f3a10cedd0e2a01fe8bd896e4ba0e7d-runc.RVhFNQ.mount: Deactivated successfully. Feb 13 15:35:59.881572 systemd[1]: Started cri-containerd-210bb40fe234c5c4ed44ffe7c5b2aecea2031816f8b8b60c332e2a5e97b8ec43.scope - libcontainer container 210bb40fe234c5c4ed44ffe7c5b2aecea2031816f8b8b60c332e2a5e97b8ec43. Feb 13 15:35:59.883063 systemd[1]: Started cri-containerd-22ef534a9921f769f01333e9603b2a3f3f3a10cedd0e2a01fe8bd896e4ba0e7d.scope - libcontainer container 22ef534a9921f769f01333e9603b2a3f3f3a10cedd0e2a01fe8bd896e4ba0e7d. Feb 13 15:35:59.896308 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:59.898242 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:35:59.925402 containerd[1469]: time="2025-02-13T15:35:59.925357413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nlgkb,Uid:d79beadc-52fc-424a-9716-af15e5eece7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ef534a9921f769f01333e9603b2a3f3f3a10cedd0e2a01fe8bd896e4ba0e7d\"" Feb 13 15:35:59.926099 kubelet[2665]: E0213 15:35:59.926070 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:59.933384 containerd[1469]: time="2025-02-13T15:35:59.933342334Z" level=info msg="CreateContainer within sandbox \"22ef534a9921f769f01333e9603b2a3f3f3a10cedd0e2a01fe8bd896e4ba0e7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:35:59.941001 containerd[1469]: time="2025-02-13T15:35:59.940966949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kglzs,Uid:e5d0c4e9-bb18-41a2-8630-f622112c3f8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"210bb40fe234c5c4ed44ffe7c5b2aecea2031816f8b8b60c332e2a5e97b8ec43\"" Feb 13 15:35:59.941776 kubelet[2665]: E0213 15:35:59.941615 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:35:59.943164 containerd[1469]: time="2025-02-13T15:35:59.943140820Z" level=info msg="CreateContainer within sandbox \"210bb40fe234c5c4ed44ffe7c5b2aecea2031816f8b8b60c332e2a5e97b8ec43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:35:59.969108 containerd[1469]: time="2025-02-13T15:35:59.969057879Z" level=info msg="CreateContainer within sandbox \"22ef534a9921f769f01333e9603b2a3f3f3a10cedd0e2a01fe8bd896e4ba0e7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d6cc0cec7f5882091949671c2d53c7c4c81b57f0a448763fbe592d0c7bba86a\"" Feb 13 15:35:59.969792 containerd[1469]: time="2025-02-13T15:35:59.969758784Z" level=info msg="StartContainer for \"2d6cc0cec7f5882091949671c2d53c7c4c81b57f0a448763fbe592d0c7bba86a\"" Feb 13 15:35:59.983952 containerd[1469]: time="2025-02-13T15:35:59.983902739Z" level=info msg="CreateContainer within sandbox \"210bb40fe234c5c4ed44ffe7c5b2aecea2031816f8b8b60c332e2a5e97b8ec43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71249bf604f86862899f4ad6e1f1761a021162760e22252ba665365ddd408fe2\"" Feb 13 15:35:59.985334 containerd[1469]: time="2025-02-13T15:35:59.984769116Z" level=info 
msg="StartContainer for \"71249bf604f86862899f4ad6e1f1761a021162760e22252ba665365ddd408fe2\"" Feb 13 15:36:00.000595 systemd[1]: Started cri-containerd-2d6cc0cec7f5882091949671c2d53c7c4c81b57f0a448763fbe592d0c7bba86a.scope - libcontainer container 2d6cc0cec7f5882091949671c2d53c7c4c81b57f0a448763fbe592d0c7bba86a. Feb 13 15:36:00.018911 sshd[3977]: Connection closed by 10.0.0.1 port 56252 Feb 13 15:36:00.022169 sshd-session[3959]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:00.027072 systemd[1]: Started cri-containerd-71249bf604f86862899f4ad6e1f1761a021162760e22252ba665365ddd408fe2.scope - libcontainer container 71249bf604f86862899f4ad6e1f1761a021162760e22252ba665365ddd408fe2. Feb 13 15:36:00.028693 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:56252.service: Deactivated successfully. Feb 13 15:36:00.031864 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:36:00.043154 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:36:00.050383 containerd[1469]: time="2025-02-13T15:36:00.050329806Z" level=info msg="StartContainer for \"2d6cc0cec7f5882091949671c2d53c7c4c81b57f0a448763fbe592d0c7bba86a\" returns successfully" Feb 13 15:36:00.051783 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:56258.service - OpenSSH per-connection server daemon (10.0.0.1:56258). Feb 13 15:36:00.052548 systemd-logind[1455]: Removed session 14. Feb 13 15:36:00.075260 containerd[1469]: time="2025-02-13T15:36:00.075146415Z" level=info msg="StartContainer for \"71249bf604f86862899f4ad6e1f1761a021162760e22252ba665365ddd408fe2\" returns successfully" Feb 13 15:36:00.100094 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 56258 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:00.101752 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:00.107277 systemd-logind[1455]: New session 15 of user core. Feb 13 15:36:00.117664 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:36:00.242860 sshd[4128]: Connection closed by 10.0.0.1 port 56258 Feb 13 15:36:00.243277 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:00.247738 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:56258.service: Deactivated successfully. Feb 13 15:36:00.249775 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:36:00.250364 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:36:00.251338 systemd-logind[1455]: Removed session 15. 
Feb 13 15:36:00.616528 kubelet[2665]: E0213 15:36:00.616430 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:00.618960 kubelet[2665]: E0213 15:36:00.618937 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:00.627781 kubelet[2665]: I0213 15:36:00.627745 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kglzs" podStartSLOduration=33.627708123 podStartE2EDuration="33.627708123s" podCreationTimestamp="2025-02-13 15:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:00.627160024 +0000 UTC m=+46.622964877" watchObservedRunningTime="2025-02-13 15:36:00.627708123 +0000 UTC m=+46.623513046" Feb 13 15:36:00.649479 kubelet[2665]: I0213 15:36:00.649409 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nlgkb" podStartSLOduration=33.649366662 podStartE2EDuration="33.649366662s" podCreationTimestamp="2025-02-13 15:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:00.648239877 +0000 UTC m=+46.644044730" watchObservedRunningTime="2025-02-13 15:36:00.649366662 +0000 UTC m=+46.645171505" Feb 13 15:36:00.858616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239363913.mount: Deactivated successfully. Feb 13 15:36:01.621869 kubelet[2665]: E0213 15:36:01.621408 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:01.622526 kubelet[2665]: E0213 15:36:01.622513 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:02.623165 kubelet[2665]: E0213 15:36:02.623124 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:02.623601 kubelet[2665]: E0213 15:36:02.623256 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:05.255593 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:33838.service - OpenSSH per-connection server daemon (10.0.0.1:33838). Feb 13 15:36:05.298145 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 33838 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:05.299548 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:05.303904 systemd-logind[1455]: New session 16 of user core. Feb 13 15:36:05.320625 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:36:05.437880 sshd[4158]: Connection closed by 10.0.0.1 port 33838 Feb 13 15:36:05.438286 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:05.442947 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:33838.service: Deactivated successfully. 
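The two pod_startup_latency_tracker entries above can be checked from their own fields: both CoreDNS pods have podCreationTimestamp 2025-02-13 15:35:27, and because firstStartedPulling/lastFinishedPulling are the zero time (no image pull was charged), the reported podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. For coredns-76f75df574-nlgkb that is 15:36:00.649366662 - 15:35:27.000000000 = 33.649366662 s, matching the logged value exactly; the kglzs pod works out the same way to 33.627708123 s.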
Feb 13 15:36:05.445009 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:36:05.445789 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:36:05.446900 systemd-logind[1455]: Removed session 16. Feb 13 15:36:10.451310 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:33854.service - OpenSSH per-connection server daemon (10.0.0.1:33854). Feb 13 15:36:10.499393 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 33854 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:10.501021 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:10.505341 systemd-logind[1455]: New session 17 of user core. Feb 13 15:36:10.513647 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:36:10.621838 sshd[4172]: Connection closed by 10.0.0.1 port 33854 Feb 13 15:36:10.622386 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:10.630326 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:33854.service: Deactivated successfully. Feb 13 15:36:10.632182 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:36:10.633652 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:36:10.644732 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:33866.service - OpenSSH per-connection server daemon (10.0.0.1:33866). Feb 13 15:36:10.645743 systemd-logind[1455]: Removed session 17. Feb 13 15:36:10.682869 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 33866 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:10.684341 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:10.688084 systemd-logind[1455]: New session 18 of user core. Feb 13 15:36:10.707570 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:36:10.933918 sshd[4186]: Connection closed by 10.0.0.1 port 33866 Feb 13 15:36:10.934392 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:10.942024 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:33866.service: Deactivated successfully. Feb 13 15:36:10.943763 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:36:10.945305 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:36:10.951765 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:33876.service - OpenSSH per-connection server daemon (10.0.0.1:33876). Feb 13 15:36:10.952775 systemd-logind[1455]: Removed session 18. Feb 13 15:36:10.994329 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:10.995959 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:11.000004 systemd-logind[1455]: New session 19 of user core. Feb 13 15:36:11.010597 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:36:12.251618 sshd[4198]: Connection closed by 10.0.0.1 port 33876 Feb 13 15:36:12.252098 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:12.264535 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:33876.service: Deactivated successfully. Feb 13 15:36:12.266435 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:36:12.267539 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. 
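The sshd@N-10.0.0.142:22-10.0.0.1:PORT.service units that keep starting and stopping above are per-connection instances created by systemd socket activation, not children of one long-running listener: a socket unit accepts each TCP connection and launches a templated service that runs sshd in inetd mode for that single connection, which is why every closed session is followed by a "Deactivated successfully" line for its own uniquely named unit. A minimal unit pair of roughly the shape Flatcar ships looks like this (options shown are illustrative, not copied from this host):

    # sshd.socket
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket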
Feb 13 15:36:12.278901 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:33888.service - OpenSSH per-connection server daemon (10.0.0.1:33888). Feb 13 15:36:12.279886 systemd-logind[1455]: Removed session 19. Feb 13 15:36:12.315241 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 33888 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:12.316776 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:12.320814 systemd-logind[1455]: New session 20 of user core. Feb 13 15:36:12.333581 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:36:12.549490 sshd[4219]: Connection closed by 10.0.0.1 port 33888 Feb 13 15:36:12.550304 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:12.560534 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:33888.service: Deactivated successfully. Feb 13 15:36:12.562219 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:36:12.563529 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:36:12.572908 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:33892.service - OpenSSH per-connection server daemon (10.0.0.1:33892). Feb 13 15:36:12.573919 systemd-logind[1455]: Removed session 20. Feb 13 15:36:12.609439 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 33892 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:12.610861 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:12.614495 systemd-logind[1455]: New session 21 of user core. Feb 13 15:36:12.624546 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:36:12.733104 sshd[4231]: Connection closed by 10.0.0.1 port 33892 Feb 13 15:36:12.733499 sshd-session[4229]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:12.737741 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:33892.service: Deactivated successfully. Feb 13 15:36:12.739860 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:36:12.740531 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:36:12.741515 systemd-logind[1455]: Removed session 21. Feb 13 15:36:17.745492 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:48710.service - OpenSSH per-connection server daemon (10.0.0.1:48710). Feb 13 15:36:17.786880 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 48710 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:17.788346 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:17.792315 systemd-logind[1455]: New session 22 of user core. Feb 13 15:36:17.800579 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:36:17.909572 sshd[4247]: Connection closed by 10.0.0.1 port 48710 Feb 13 15:36:17.909949 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:17.913286 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:48710.service: Deactivated successfully. Feb 13 15:36:17.916408 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:36:17.917244 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:36:17.918134 systemd-logind[1455]: Removed session 22. Feb 13 15:36:22.921241 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:48726.service - OpenSSH per-connection server daemon (10.0.0.1:48726). 
Feb 13 15:36:22.963376 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 48726 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:22.964760 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:22.968851 systemd-logind[1455]: New session 23 of user core. Feb 13 15:36:22.981585 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:36:23.086976 sshd[4264]: Connection closed by 10.0.0.1 port 48726 Feb 13 15:36:23.087311 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:23.091353 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:48726.service: Deactivated successfully. Feb 13 15:36:23.093469 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:36:23.094139 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:36:23.095026 systemd-logind[1455]: Removed session 23. Feb 13 15:36:23.157317 kubelet[2665]: E0213 15:36:23.157257 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:23.157317 kubelet[2665]: E0213 15:36:23.157289 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:28.099539 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:38962.service - OpenSSH per-connection server daemon (10.0.0.1:38962). Feb 13 15:36:28.141514 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 38962 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:28.143144 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:28.147089 systemd-logind[1455]: New session 24 of user core. Feb 13 15:36:28.161849 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:36:28.267484 sshd[4278]: Connection closed by 10.0.0.1 port 38962 Feb 13 15:36:28.267865 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:28.271855 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:38962.service: Deactivated successfully. Feb 13 15:36:28.273816 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:36:28.274486 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:36:28.275369 systemd-logind[1455]: Removed session 24. Feb 13 15:36:33.287486 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:38970.service - OpenSSH per-connection server daemon (10.0.0.1:38970). Feb 13 15:36:33.330625 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 38970 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:33.332114 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:33.336323 systemd-logind[1455]: New session 25 of user core. Feb 13 15:36:33.343578 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:36:33.453347 sshd[4295]: Connection closed by 10.0.0.1 port 38970 Feb 13 15:36:33.453729 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:33.464288 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:38970.service: Deactivated successfully. Feb 13 15:36:33.466098 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:36:33.467726 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. 
Feb 13 15:36:33.477669 systemd[1]: Started sshd@25-10.0.0.142:22-10.0.0.1:38982.service - OpenSSH per-connection server daemon (10.0.0.1:38982). Feb 13 15:36:33.478652 systemd-logind[1455]: Removed session 25. Feb 13 15:36:33.516140 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 38982 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:33.517724 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:33.521563 systemd-logind[1455]: New session 26 of user core. Feb 13 15:36:33.538572 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:36:34.157463 kubelet[2665]: E0213 15:36:34.157404 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:35.273490 containerd[1469]: time="2025-02-13T15:36:35.272433107Z" level=info msg="StopContainer for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" with timeout 30 (s)" Feb 13 15:36:35.279576 containerd[1469]: time="2025-02-13T15:36:35.279540069Z" level=info msg="Stop container \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" with signal terminated" Feb 13 15:36:35.294442 systemd[1]: cri-containerd-ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75.scope: Deactivated successfully. Feb 13 15:36:35.311005 containerd[1469]: time="2025-02-13T15:36:35.310572272Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:36:35.315441 containerd[1469]: time="2025-02-13T15:36:35.315397905Z" level=info msg="StopContainer for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" with timeout 2 (s)" Feb 13 15:36:35.316507 containerd[1469]: time="2025-02-13T15:36:35.316031743Z" level=info msg="Stop container \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" with signal terminated" Feb 13 15:36:35.317920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75-rootfs.mount: Deactivated successfully. Feb 13 15:36:35.326908 systemd-networkd[1404]: lxc_health: Link DOWN Feb 13 15:36:35.326918 systemd-networkd[1404]: lxc_health: Lost carrier Feb 13 15:36:35.344603 containerd[1469]: time="2025-02-13T15:36:35.344535014Z" level=info msg="shim disconnected" id=ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75 namespace=k8s.io Feb 13 15:36:35.344932 containerd[1469]: time="2025-02-13T15:36:35.344754072Z" level=warning msg="cleaning up after shim disconnected" id=ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75 namespace=k8s.io Feb 13 15:36:35.344932 containerd[1469]: time="2025-02-13T15:36:35.344776023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:35.354018 systemd[1]: cri-containerd-23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40.scope: Deactivated successfully. Feb 13 15:36:35.354295 systemd[1]: cri-containerd-23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40.scope: Consumed 6.698s CPU time. 
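This block is the CRI shutdown path for the two Cilium-related containers: containerd delivers the stop signal ("with signal terminated", i.e. SIGTERM), waits out the per-call timeout (30 s for one container, 2 s for the other) before escalating, and once a container exits its libcontainer scope is deactivated and systemd accounts the CPU it consumed. The "failed to reload cni configuration" error is part of the same teardown, since removing /etc/cni/net.d/05-cilium.conf leaves containerd with no CNI config until a replacement agent writes one, and the lxc_health "Link DOWN / Lost carrier" lines are consistent with Cilium's health-check veth disappearing along with the endpoint. The equivalent manual flow against the CRI socket would look roughly like this (container ID shortened for illustration):

    crictl ps -a                             # list containers and their states
    crictl stop --timeout 2 23a0e0c428b8     # SIGTERM, escalate after 2 s
    crictl inspect 23a0e0c428b8              # exit status once stopped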
Feb 13 15:36:35.367337 containerd[1469]: time="2025-02-13T15:36:35.367282019Z" level=info msg="StopContainer for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" returns successfully" Feb 13 15:36:35.371006 containerd[1469]: time="2025-02-13T15:36:35.370946547Z" level=info msg="StopPodSandbox for \"2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700\"" Feb 13 15:36:35.376538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40-rootfs.mount: Deactivated successfully. Feb 13 15:36:35.382694 containerd[1469]: time="2025-02-13T15:36:35.382634805Z" level=info msg="shim disconnected" id=23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40 namespace=k8s.io Feb 13 15:36:35.382694 containerd[1469]: time="2025-02-13T15:36:35.382679780Z" level=warning msg="cleaning up after shim disconnected" id=23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40 namespace=k8s.io Feb 13 15:36:35.382694 containerd[1469]: time="2025-02-13T15:36:35.382692555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:35.384996 containerd[1469]: time="2025-02-13T15:36:35.371005710Z" level=info msg="Container to stop \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:36:35.386926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700-shm.mount: Deactivated successfully. Feb 13 15:36:35.392106 systemd[1]: cri-containerd-2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700.scope: Deactivated successfully. Feb 13 15:36:35.405227 containerd[1469]: time="2025-02-13T15:36:35.405175056Z" level=info msg="StopContainer for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" returns successfully" Feb 13 15:36:35.405824 containerd[1469]: time="2025-02-13T15:36:35.405791160Z" level=info msg="StopPodSandbox for \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\"" Feb 13 15:36:35.405963 containerd[1469]: time="2025-02-13T15:36:35.405908734Z" level=info msg="Container to stop \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:36:35.406007 containerd[1469]: time="2025-02-13T15:36:35.405972396Z" level=info msg="Container to stop \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:36:35.406007 containerd[1469]: time="2025-02-13T15:36:35.405981633Z" level=info msg="Container to stop \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:36:35.406007 containerd[1469]: time="2025-02-13T15:36:35.405990381Z" level=info msg="Container to stop \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:36:35.406007 containerd[1469]: time="2025-02-13T15:36:35.405999047Z" level=info msg="Container to stop \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:36:35.408308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8-shm.mount: 
Deactivated successfully. Feb 13 15:36:35.414842 systemd[1]: cri-containerd-ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8.scope: Deactivated successfully. Feb 13 15:36:35.418233 containerd[1469]: time="2025-02-13T15:36:35.418063151Z" level=info msg="shim disconnected" id=2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700 namespace=k8s.io Feb 13 15:36:35.418542 containerd[1469]: time="2025-02-13T15:36:35.418500695Z" level=warning msg="cleaning up after shim disconnected" id=2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700 namespace=k8s.io Feb 13 15:36:35.418542 containerd[1469]: time="2025-02-13T15:36:35.418523268Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:35.437754 containerd[1469]: time="2025-02-13T15:36:35.437687922Z" level=info msg="shim disconnected" id=ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8 namespace=k8s.io Feb 13 15:36:35.437754 containerd[1469]: time="2025-02-13T15:36:35.437753097Z" level=warning msg="cleaning up after shim disconnected" id=ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8 namespace=k8s.io Feb 13 15:36:35.437754 containerd[1469]: time="2025-02-13T15:36:35.437761954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:35.441186 containerd[1469]: time="2025-02-13T15:36:35.440175666Z" level=info msg="TearDown network for sandbox \"2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700\" successfully" Feb 13 15:36:35.441186 containerd[1469]: time="2025-02-13T15:36:35.440195755Z" level=info msg="StopPodSandbox for \"2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700\" returns successfully" Feb 13 15:36:35.454727 containerd[1469]: time="2025-02-13T15:36:35.454682819Z" level=info msg="TearDown network for sandbox \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" successfully" Feb 13 15:36:35.454940 containerd[1469]: time="2025-02-13T15:36:35.454907909Z" level=info msg="StopPodSandbox for \"ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8\" returns successfully" Feb 13 15:36:35.539076 kubelet[2665]: I0213 15:36:35.539017 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cni-path\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539076 kubelet[2665]: I0213 15:36:35.539072 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-lib-modules\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539559 kubelet[2665]: I0213 15:36:35.539097 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-xtables-lock\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539559 kubelet[2665]: I0213 15:36:35.539123 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-kernel\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539559 kubelet[2665]: I0213 
15:36:35.539162 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hubble-tls\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539559 kubelet[2665]: I0213 15:36:35.539150 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cni-path" (OuterVolumeSpecName: "cni-path") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539559 kubelet[2665]: I0213 15:36:35.539168 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539559 kubelet[2665]: I0213 15:36:35.539186 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-etc-cni-netd\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539809 kubelet[2665]: I0213 15:36:35.539212 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-net\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539809 kubelet[2665]: I0213 15:36:35.539218 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539809 kubelet[2665]: I0213 15:36:35.539211 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539809 kubelet[2665]: I0213 15:36:35.539239 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bz6\" (UniqueName: \"kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-kube-api-access-c9bz6\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539809 kubelet[2665]: I0213 15:36:35.539244 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539926 kubelet[2665]: I0213 15:36:35.539266 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-run\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539926 kubelet[2665]: I0213 15:36:35.539264 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539926 kubelet[2665]: I0213 15:36:35.539290 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-bpf-maps\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.539926 kubelet[2665]: I0213 15:36:35.539295 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.539926 kubelet[2665]: I0213 15:36:35.539319 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21ca4706-133b-40cd-9e2f-79e49bbc84a7-cilium-config-path\") pod \"21ca4706-133b-40cd-9e2f-79e49bbc84a7\" (UID: \"21ca4706-133b-40cd-9e2f-79e49bbc84a7\") " Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539355 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-clustermesh-secrets\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539378 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-cgroup\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539404 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-974bg\" (UniqueName: \"kubernetes.io/projected/21ca4706-133b-40cd-9e2f-79e49bbc84a7-kube-api-access-974bg\") pod \"21ca4706-133b-40cd-9e2f-79e49bbc84a7\" (UID: \"21ca4706-133b-40cd-9e2f-79e49bbc84a7\") " Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539423 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hostproc\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539464 2665 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-config-path\") pod \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\" (UID: \"bc1447e6-b0da-4740-a8ca-9db32d5ef8ff\") " Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539512 2665 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.540054 kubelet[2665]: I0213 15:36:35.539528 2665 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.540201 kubelet[2665]: I0213 15:36:35.539541 2665 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.540201 kubelet[2665]: I0213 15:36:35.539559 2665 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.540201 kubelet[2665]: I0213 15:36:35.539571 2665 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.540201 kubelet[2665]: I0213 15:36:35.539583 2665 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.541815 kubelet[2665]: I0213 15:36:35.541796 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.543057 kubelet[2665]: I0213 15:36:35.543023 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:36:35.543936 kubelet[2665]: I0213 15:36:35.543913 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hostproc" (OuterVolumeSpecName: "hostproc") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.544077 kubelet[2665]: I0213 15:36:35.544051 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:36:35.544230 kubelet[2665]: I0213 15:36:35.544088 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:36:35.544335 kubelet[2665]: I0213 15:36:35.544113 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:36:35.544647 kubelet[2665]: I0213 15:36:35.544623 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-kube-api-access-c9bz6" (OuterVolumeSpecName: "kube-api-access-c9bz6") pod "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" (UID: "bc1447e6-b0da-4740-a8ca-9db32d5ef8ff"). InnerVolumeSpecName "kube-api-access-c9bz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:36:35.545662 kubelet[2665]: I0213 15:36:35.545630 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21ca4706-133b-40cd-9e2f-79e49bbc84a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21ca4706-133b-40cd-9e2f-79e49bbc84a7" (UID: "21ca4706-133b-40cd-9e2f-79e49bbc84a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:36:35.546308 kubelet[2665]: I0213 15:36:35.546280 2665 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ca4706-133b-40cd-9e2f-79e49bbc84a7-kube-api-access-974bg" (OuterVolumeSpecName: "kube-api-access-974bg") pod "21ca4706-133b-40cd-9e2f-79e49bbc84a7" (UID: "21ca4706-133b-40cd-9e2f-79e49bbc84a7"). InnerVolumeSpecName "kube-api-access-974bg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:36:35.640063 kubelet[2665]: I0213 15:36:35.640009 2665 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640063 kubelet[2665]: I0213 15:36:35.640047 2665 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640063 kubelet[2665]: I0213 15:36:35.640064 2665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c9bz6\" (UniqueName: \"kubernetes.io/projected/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-kube-api-access-c9bz6\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640063 kubelet[2665]: I0213 15:36:35.640076 2665 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640296 kubelet[2665]: I0213 15:36:35.640086 2665 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640296 kubelet[2665]: I0213 15:36:35.640097 2665 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640296 kubelet[2665]: I0213 15:36:35.640107 2665 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21ca4706-133b-40cd-9e2f-79e49bbc84a7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640296 kubelet[2665]: I0213 15:36:35.640115 2665 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640296 kubelet[2665]: I0213 15:36:35.640125 2665 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.640296 kubelet[2665]: I0213 15:36:35.640134 2665 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-974bg\" (UniqueName: \"kubernetes.io/projected/21ca4706-133b-40cd-9e2f-79e49bbc84a7-kube-api-access-974bg\") on node \"localhost\" DevicePath \"\"" Feb 13 15:36:35.698240 kubelet[2665]: I0213 15:36:35.698211 2665 scope.go:117] "RemoveContainer" containerID="23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40" Feb 13 15:36:35.704491 systemd[1]: Removed slice kubepods-burstable-podbc1447e6_b0da_4740_a8ca_9db32d5ef8ff.slice - libcontainer container kubepods-burstable-podbc1447e6_b0da_4740_a8ca_9db32d5ef8ff.slice. Feb 13 15:36:35.704605 systemd[1]: kubepods-burstable-podbc1447e6_b0da_4740_a8ca_9db32d5ef8ff.slice: Consumed 6.800s CPU time. 
Feb 13 15:36:35.706465 containerd[1469]: time="2025-02-13T15:36:35.706420113Z" level=info msg="RemoveContainer for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\"" Feb 13 15:36:35.710479 systemd[1]: Removed slice kubepods-besteffort-pod21ca4706_133b_40cd_9e2f_79e49bbc84a7.slice - libcontainer container kubepods-besteffort-pod21ca4706_133b_40cd_9e2f_79e49bbc84a7.slice. Feb 13 15:36:35.715255 containerd[1469]: time="2025-02-13T15:36:35.715211947Z" level=info msg="RemoveContainer for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" returns successfully" Feb 13 15:36:35.715610 kubelet[2665]: I0213 15:36:35.715564 2665 scope.go:117] "RemoveContainer" containerID="80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d" Feb 13 15:36:35.716776 containerd[1469]: time="2025-02-13T15:36:35.716741244Z" level=info msg="RemoveContainer for \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\"" Feb 13 15:36:35.726351 containerd[1469]: time="2025-02-13T15:36:35.726296805Z" level=info msg="RemoveContainer for \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\" returns successfully" Feb 13 15:36:35.726559 kubelet[2665]: I0213 15:36:35.726528 2665 scope.go:117] "RemoveContainer" containerID="990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8" Feb 13 15:36:35.727479 containerd[1469]: time="2025-02-13T15:36:35.727423393Z" level=info msg="RemoveContainer for \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\"" Feb 13 15:36:35.744052 containerd[1469]: time="2025-02-13T15:36:35.744019319Z" level=info msg="RemoveContainer for \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\" returns successfully" Feb 13 15:36:35.744210 kubelet[2665]: I0213 15:36:35.744174 2665 scope.go:117] "RemoveContainer" containerID="a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432" Feb 13 15:36:35.745232 containerd[1469]: time="2025-02-13T15:36:35.745198687Z" level=info msg="RemoveContainer for \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\"" Feb 13 15:36:35.805366 containerd[1469]: time="2025-02-13T15:36:35.805194630Z" level=info msg="RemoveContainer for \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\" returns successfully" Feb 13 15:36:35.805558 kubelet[2665]: I0213 15:36:35.805392 2665 scope.go:117] "RemoveContainer" containerID="bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb" Feb 13 15:36:35.806624 containerd[1469]: time="2025-02-13T15:36:35.806590692Z" level=info msg="RemoveContainer for \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\"" Feb 13 15:36:35.873749 containerd[1469]: time="2025-02-13T15:36:35.873704748Z" level=info msg="RemoveContainer for \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\" returns successfully" Feb 13 15:36:35.874007 kubelet[2665]: I0213 15:36:35.873982 2665 scope.go:117] "RemoveContainer" containerID="23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40" Feb 13 15:36:35.874332 containerd[1469]: time="2025-02-13T15:36:35.874283974Z" level=error msg="ContainerStatus for \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\": not found" Feb 13 15:36:35.880895 kubelet[2665]: E0213 15:36:35.880861 2665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\": not found" containerID="23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40" Feb 13 15:36:35.881011 kubelet[2665]: I0213 15:36:35.880990 2665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40"} err="failed to get container status \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\": rpc error: code = NotFound desc = an error occurred when try to find container \"23a0e0c428b80949afd04e005b73ef6e67fc132ffe28852e141bad8087fcbf40\": not found" Feb 13 15:36:35.881038 kubelet[2665]: I0213 15:36:35.881011 2665 scope.go:117] "RemoveContainer" containerID="80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d" Feb 13 15:36:35.881303 containerd[1469]: time="2025-02-13T15:36:35.881252430Z" level=error msg="ContainerStatus for \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\": not found" Feb 13 15:36:35.881471 kubelet[2665]: E0213 15:36:35.881372 2665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\": not found" containerID="80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d" Feb 13 15:36:35.881471 kubelet[2665]: I0213 15:36:35.881393 2665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d"} err="failed to get container status \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"80c1dcfa2035591e7d0b23489e7df734ce5b1d3a5b7a6558ebe3c7107ccd6c7d\": not found" Feb 13 15:36:35.881471 kubelet[2665]: I0213 15:36:35.881401 2665 scope.go:117] "RemoveContainer" containerID="990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8" Feb 13 15:36:35.881586 containerd[1469]: time="2025-02-13T15:36:35.881538115Z" level=error msg="ContainerStatus for \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\": not found" Feb 13 15:36:35.881655 kubelet[2665]: E0213 15:36:35.881628 2665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\": not found" containerID="990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8" Feb 13 15:36:35.881655 kubelet[2665]: I0213 15:36:35.881653 2665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8"} err="failed to get container status \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"990a0698d80f1d3a2dfd65b171bdd4dd3ed8c08fa538efb95934f4f55550fdf8\": not found" 
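The ContainerStatus NotFound errors interleaved with the RemoveContainer calls above and below are expected during this cleanup rather than a fault: the kubelet re-queries the runtime for container IDs it has just had containerd delete, and once the metadata is gone the CRI call can only answer NotFound, which the kubelet records as "DeleteContainer returned error" and then moves on.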
Feb 13 15:36:35.881705 kubelet[2665]: I0213 15:36:35.881666 2665 scope.go:117] "RemoveContainer" containerID="a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432" Feb 13 15:36:35.881840 containerd[1469]: time="2025-02-13T15:36:35.881789384Z" level=error msg="ContainerStatus for \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\": not found" Feb 13 15:36:35.881973 kubelet[2665]: E0213 15:36:35.881948 2665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\": not found" containerID="a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432" Feb 13 15:36:35.882008 kubelet[2665]: I0213 15:36:35.881994 2665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432"} err="failed to get container status \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2666f03dca5d5c7f43d5e96c442e8a7c743460ec329bab0bf3fe33b8b950432\": not found" Feb 13 15:36:35.882008 kubelet[2665]: I0213 15:36:35.882005 2665 scope.go:117] "RemoveContainer" containerID="bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb" Feb 13 15:36:35.882167 containerd[1469]: time="2025-02-13T15:36:35.882142838Z" level=error msg="ContainerStatus for \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\": not found" Feb 13 15:36:35.882281 kubelet[2665]: E0213 15:36:35.882262 2665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\": not found" containerID="bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb" Feb 13 15:36:35.882336 kubelet[2665]: I0213 15:36:35.882295 2665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb"} err="failed to get container status \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb027d5710fd46892b936d8ee33d01d6da4c5e37f4c8a42ec892277f07287ceb\": not found" Feb 13 15:36:35.882336 kubelet[2665]: I0213 15:36:35.882309 2665 scope.go:117] "RemoveContainer" containerID="ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75" Feb 13 15:36:35.883334 containerd[1469]: time="2025-02-13T15:36:35.883287852Z" level=info msg="RemoveContainer for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\"" Feb 13 15:36:35.941402 containerd[1469]: time="2025-02-13T15:36:35.941353935Z" level=info msg="RemoveContainer for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" returns successfully" Feb 13 15:36:35.941684 kubelet[2665]: I0213 15:36:35.941638 2665 scope.go:117] "RemoveContainer" 
containerID="ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75" Feb 13 15:36:35.942052 containerd[1469]: time="2025-02-13T15:36:35.942001751Z" level=error msg="ContainerStatus for \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\": not found" Feb 13 15:36:35.942241 kubelet[2665]: E0213 15:36:35.942221 2665 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\": not found" containerID="ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75" Feb 13 15:36:35.942285 kubelet[2665]: I0213 15:36:35.942268 2665 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75"} err="failed to get container status \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac9105abf02bb745e8b1b3096c2589558cb46c574abfc5edf1f60ae66bd39a75\": not found" Feb 13 15:36:36.160407 kubelet[2665]: I0213 15:36:36.160295 2665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="21ca4706-133b-40cd-9e2f-79e49bbc84a7" path="/var/lib/kubelet/pods/21ca4706-133b-40cd-9e2f-79e49bbc84a7/volumes" Feb 13 15:36:36.161040 kubelet[2665]: I0213 15:36:36.161022 2665 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" path="/var/lib/kubelet/pods/bc1447e6-b0da-4740-a8ca-9db32d5ef8ff/volumes" Feb 13 15:36:36.283621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab525fcdc0c1c8d2d5626d26ae849bedd280cd530c4d0c5cd05336e511038da8-rootfs.mount: Deactivated successfully. Feb 13 15:36:36.283721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2159c5390e2ba575f4812113e5beb372a8c72451a9a348f92b157df694ffd700-rootfs.mount: Deactivated successfully. Feb 13 15:36:36.283796 systemd[1]: var-lib-kubelet-pods-bc1447e6\x2db0da\x2d4740\x2da8ca\x2d9db32d5ef8ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc9bz6.mount: Deactivated successfully. Feb 13 15:36:36.283873 systemd[1]: var-lib-kubelet-pods-21ca4706\x2d133b\x2d40cd\x2d9e2f\x2d79e49bbc84a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d974bg.mount: Deactivated successfully. Feb 13 15:36:36.283949 systemd[1]: var-lib-kubelet-pods-bc1447e6\x2db0da\x2d4740\x2da8ca\x2d9db32d5ef8ff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:36:36.284035 systemd[1]: var-lib-kubelet-pods-bc1447e6\x2db0da\x2d4740\x2da8ca\x2d9db32d5ef8ff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:36:37.259739 sshd[4309]: Connection closed by 10.0.0.1 port 38982 Feb 13 15:36:37.260209 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:37.271543 systemd[1]: sshd@25-10.0.0.142:22-10.0.0.1:38982.service: Deactivated successfully. Feb 13 15:36:37.273761 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:36:37.273937 systemd[1]: session-26.scope: Consumed 1.113s CPU time. Feb 13 15:36:37.275299 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. 
Feb 13 15:36:37.284708 systemd[1]: Started sshd@26-10.0.0.142:22-10.0.0.1:59246.service - OpenSSH per-connection server daemon (10.0.0.1:59246). Feb 13 15:36:37.285645 systemd-logind[1455]: Removed session 26. Feb 13 15:36:37.324577 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 59246 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:37.326141 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:37.330119 systemd-logind[1455]: New session 27 of user core. Feb 13 15:36:37.339557 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:36:37.727022 sshd[4471]: Connection closed by 10.0.0.1 port 59246 Feb 13 15:36:37.727661 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:37.738730 kubelet[2665]: I0213 15:36:37.738055 2665 topology_manager.go:215] "Topology Admit Handler" podUID="45f61c07-c168-4383-85b7-ea6be42ef26b" podNamespace="kube-system" podName="cilium-h8g2c" Feb 13 15:36:37.738730 kubelet[2665]: E0213 15:36:37.738127 2665 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" containerName="clean-cilium-state" Feb 13 15:36:37.738730 kubelet[2665]: E0213 15:36:37.738138 2665 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" containerName="cilium-agent" Feb 13 15:36:37.738730 kubelet[2665]: E0213 15:36:37.738147 2665 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21ca4706-133b-40cd-9e2f-79e49bbc84a7" containerName="cilium-operator" Feb 13 15:36:37.738730 kubelet[2665]: E0213 15:36:37.738156 2665 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" containerName="mount-cgroup" Feb 13 15:36:37.738730 kubelet[2665]: E0213 15:36:37.738165 2665 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" containerName="mount-bpf-fs" Feb 13 15:36:37.738730 kubelet[2665]: E0213 15:36:37.738175 2665 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" containerName="apply-sysctl-overwrites" Feb 13 15:36:37.738730 kubelet[2665]: I0213 15:36:37.738203 2665 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ca4706-133b-40cd-9e2f-79e49bbc84a7" containerName="cilium-operator" Feb 13 15:36:37.738730 kubelet[2665]: I0213 15:36:37.738214 2665 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1447e6-b0da-4740-a8ca-9db32d5ef8ff" containerName="cilium-agent" Feb 13 15:36:37.739729 systemd[1]: sshd@26-10.0.0.142:22-10.0.0.1:59246.service: Deactivated successfully. Feb 13 15:36:37.742054 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:36:37.745393 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:36:37.752947 systemd[1]: Started sshd@27-10.0.0.142:22-10.0.0.1:59260.service - OpenSSH per-connection server daemon (10.0.0.1:59260). Feb 13 15:36:37.760249 systemd-logind[1455]: Removed session 27. Feb 13 15:36:37.768542 systemd[1]: Created slice kubepods-burstable-pod45f61c07_c168_4383_85b7_ea6be42ef26b.slice - libcontainer container kubepods-burstable-pod45f61c07_c168_4383_85b7_ea6be42ef26b.slice. 
Feb 13 15:36:37.795287 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 59260 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:37.796854 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:37.800623 systemd-logind[1455]: New session 28 of user core. Feb 13 15:36:37.810583 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:36:37.852293 kubelet[2665]: I0213 15:36:37.852265 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-xtables-lock\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852380 kubelet[2665]: I0213 15:36:37.852304 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-hostproc\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852380 kubelet[2665]: I0213 15:36:37.852322 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-cni-path\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852380 kubelet[2665]: I0213 15:36:37.852344 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p9k7\" (UniqueName: \"kubernetes.io/projected/45f61c07-c168-4383-85b7-ea6be42ef26b-kube-api-access-7p9k7\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852481 kubelet[2665]: I0213 15:36:37.852425 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-cilium-run\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852504 kubelet[2665]: I0213 15:36:37.852491 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-cilium-cgroup\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852526 kubelet[2665]: I0213 15:36:37.852512 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-etc-cni-netd\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852578 kubelet[2665]: I0213 15:36:37.852559 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-bpf-maps\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852635 kubelet[2665]: I0213 15:36:37.852614 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-lib-modules\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852635 kubelet[2665]: I0213 15:36:37.852635 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45f61c07-c168-4383-85b7-ea6be42ef26b-clustermesh-secrets\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852721 kubelet[2665]: I0213 15:36:37.852700 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-host-proc-sys-net\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852745 kubelet[2665]: I0213 15:36:37.852731 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45f61c07-c168-4383-85b7-ea6be42ef26b-host-proc-sys-kernel\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852796 kubelet[2665]: I0213 15:36:37.852782 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45f61c07-c168-4383-85b7-ea6be42ef26b-cilium-config-path\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852820 kubelet[2665]: I0213 15:36:37.852807 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45f61c07-c168-4383-85b7-ea6be42ef26b-cilium-ipsec-secrets\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.852841 kubelet[2665]: I0213 15:36:37.852826 2665 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45f61c07-c168-4383-85b7-ea6be42ef26b-hubble-tls\") pod \"cilium-h8g2c\" (UID: \"45f61c07-c168-4383-85b7-ea6be42ef26b\") " pod="kube-system/cilium-h8g2c" Feb 13 15:36:37.859965 sshd[4484]: Connection closed by 10.0.0.1 port 59260 Feb 13 15:36:37.860285 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:37.873093 systemd[1]: sshd@27-10.0.0.142:22-10.0.0.1:59260.service: Deactivated successfully. Feb 13 15:36:37.874721 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:36:37.876128 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:36:37.880696 systemd[1]: Started sshd@28-10.0.0.142:22-10.0.0.1:59270.service - OpenSSH per-connection server daemon (10.0.0.1:59270). Feb 13 15:36:37.881519 systemd-logind[1455]: Removed session 28. Feb 13 15:36:37.919461 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 59270 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:36:37.920761 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:36:37.924626 systemd-logind[1455]: New session 29 of user core. 
Feb 13 15:36:37.934576 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 15:36:38.072048 kubelet[2665]: E0213 15:36:38.072008 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:38.072577 containerd[1469]: time="2025-02-13T15:36:38.072542350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h8g2c,Uid:45f61c07-c168-4383-85b7-ea6be42ef26b,Namespace:kube-system,Attempt:0,}" Feb 13 15:36:38.092822 containerd[1469]: time="2025-02-13T15:36:38.092707074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:36:38.093610 containerd[1469]: time="2025-02-13T15:36:38.093549980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:36:38.093610 containerd[1469]: time="2025-02-13T15:36:38.093594685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:38.093769 containerd[1469]: time="2025-02-13T15:36:38.093725865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:36:38.120582 systemd[1]: Started cri-containerd-c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494.scope - libcontainer container c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494. Feb 13 15:36:38.143344 containerd[1469]: time="2025-02-13T15:36:38.143305921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h8g2c,Uid:45f61c07-c168-4383-85b7-ea6be42ef26b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\"" Feb 13 15:36:38.143977 kubelet[2665]: E0213 15:36:38.143952 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:38.145483 containerd[1469]: time="2025-02-13T15:36:38.145461527Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:36:38.167763 containerd[1469]: time="2025-02-13T15:36:38.167717354Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a\"" Feb 13 15:36:38.168139 containerd[1469]: time="2025-02-13T15:36:38.168094262Z" level=info msg="StartContainer for \"82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a\"" Feb 13 15:36:38.195583 systemd[1]: Started cri-containerd-82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a.scope - libcontainer container 82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a. Feb 13 15:36:38.224163 containerd[1469]: time="2025-02-13T15:36:38.224123912Z" level=info msg="StartContainer for \"82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a\" returns successfully" Feb 13 15:36:38.235716 systemd[1]: cri-containerd-82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a.scope: Deactivated successfully. 
Feb 13 15:36:38.273113 containerd[1469]: time="2025-02-13T15:36:38.273041878Z" level=info msg="shim disconnected" id=82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a namespace=k8s.io Feb 13 15:36:38.273113 containerd[1469]: time="2025-02-13T15:36:38.273100189Z" level=warning msg="cleaning up after shim disconnected" id=82f5dd0107f550dd747c6b2083e3fd00558d5a2aa4c80e51a1973f62a27a8a2a namespace=k8s.io Feb 13 15:36:38.273113 containerd[1469]: time="2025-02-13T15:36:38.273108695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:38.714283 kubelet[2665]: E0213 15:36:38.714239 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:38.716498 containerd[1469]: time="2025-02-13T15:36:38.716463597Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:36:38.728717 containerd[1469]: time="2025-02-13T15:36:38.728677349Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712\"" Feb 13 15:36:38.729255 containerd[1469]: time="2025-02-13T15:36:38.729215584Z" level=info msg="StartContainer for \"3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712\"" Feb 13 15:36:38.757584 systemd[1]: Started cri-containerd-3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712.scope - libcontainer container 3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712. Feb 13 15:36:38.782566 containerd[1469]: time="2025-02-13T15:36:38.782485629Z" level=info msg="StartContainer for \"3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712\" returns successfully" Feb 13 15:36:38.789580 systemd[1]: cri-containerd-3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712.scope: Deactivated successfully. 
Feb 13 15:36:38.811285 containerd[1469]: time="2025-02-13T15:36:38.811215074Z" level=info msg="shim disconnected" id=3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712 namespace=k8s.io Feb 13 15:36:38.811285 containerd[1469]: time="2025-02-13T15:36:38.811279056Z" level=warning msg="cleaning up after shim disconnected" id=3b73724e51f5d18f8aa4494fe9426b43e758a90c2f3fc68bf3678b19b5c0d712 namespace=k8s.io Feb 13 15:36:38.811285 containerd[1469]: time="2025-02-13T15:36:38.811288594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:38.823300 containerd[1469]: time="2025-02-13T15:36:38.823249264Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:36:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:36:39.205334 kubelet[2665]: E0213 15:36:39.205306 2665 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:36:39.717099 kubelet[2665]: E0213 15:36:39.717063 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:39.718618 containerd[1469]: time="2025-02-13T15:36:39.718585909Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:36:39.734641 containerd[1469]: time="2025-02-13T15:36:39.734596370Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544\"" Feb 13 15:36:39.735278 containerd[1469]: time="2025-02-13T15:36:39.735065614Z" level=info msg="StartContainer for \"b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544\"" Feb 13 15:36:39.764578 systemd[1]: Started cri-containerd-b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544.scope - libcontainer container b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544. Feb 13 15:36:39.793854 containerd[1469]: time="2025-02-13T15:36:39.793809661Z" level=info msg="StartContainer for \"b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544\" returns successfully" Feb 13 15:36:39.793955 systemd[1]: cri-containerd-b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544.scope: Deactivated successfully. Feb 13 15:36:39.817512 containerd[1469]: time="2025-02-13T15:36:39.817436182Z" level=info msg="shim disconnected" id=b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544 namespace=k8s.io Feb 13 15:36:39.817512 containerd[1469]: time="2025-02-13T15:36:39.817507477Z" level=warning msg="cleaning up after shim disconnected" id=b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544 namespace=k8s.io Feb 13 15:36:39.817512 containerd[1469]: time="2025-02-13T15:36:39.817516083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:39.958567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b605e8654d1745b818b575cdf0da48bfeae430866ffc728b548c18c73f2dc544-rootfs.mount: Deactivated successfully. 
Feb 13 15:36:40.720906 kubelet[2665]: E0213 15:36:40.720871 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:40.722542 containerd[1469]: time="2025-02-13T15:36:40.722367468Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:36:40.736377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784385857.mount: Deactivated successfully. Feb 13 15:36:40.738739 containerd[1469]: time="2025-02-13T15:36:40.738706284Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631\"" Feb 13 15:36:40.739364 containerd[1469]: time="2025-02-13T15:36:40.739329289Z" level=info msg="StartContainer for \"79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631\"" Feb 13 15:36:40.769592 systemd[1]: Started cri-containerd-79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631.scope - libcontainer container 79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631. Feb 13 15:36:40.794068 systemd[1]: cri-containerd-79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631.scope: Deactivated successfully. Feb 13 15:36:40.796741 containerd[1469]: time="2025-02-13T15:36:40.796708047Z" level=info msg="StartContainer for \"79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631\" returns successfully" Feb 13 15:36:40.818844 containerd[1469]: time="2025-02-13T15:36:40.818776906Z" level=info msg="shim disconnected" id=79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631 namespace=k8s.io Feb 13 15:36:40.818844 containerd[1469]: time="2025-02-13T15:36:40.818835868Z" level=warning msg="cleaning up after shim disconnected" id=79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631 namespace=k8s.io Feb 13 15:36:40.818844 containerd[1469]: time="2025-02-13T15:36:40.818845396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:36:40.959264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79249464af1ce0e61cf121816dd2fb0abd097a175c9200daf47fa56322ee7631-rootfs.mount: Deactivated successfully. 
Feb 13 15:36:41.728351 kubelet[2665]: E0213 15:36:41.728321 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:41.730810 containerd[1469]: time="2025-02-13T15:36:41.730755161Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:36:41.747509 containerd[1469]: time="2025-02-13T15:36:41.747443129Z" level=info msg="CreateContainer within sandbox \"c91855e49237acc41243b3fe15e757242bf05de7388d17c2b818bce45ef41494\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3\"" Feb 13 15:36:41.751337 containerd[1469]: time="2025-02-13T15:36:41.750569607Z" level=info msg="StartContainer for \"cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3\"" Feb 13 15:36:41.781599 systemd[1]: Started cri-containerd-cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3.scope - libcontainer container cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3. Feb 13 15:36:41.811023 containerd[1469]: time="2025-02-13T15:36:41.810968231Z" level=info msg="StartContainer for \"cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3\" returns successfully" Feb 13 15:36:42.235486 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 15:36:42.733762 kubelet[2665]: E0213 15:36:42.733723 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:42.747173 kubelet[2665]: I0213 15:36:42.747118 2665 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h8g2c" podStartSLOduration=5.747073319 podStartE2EDuration="5.747073319s" podCreationTimestamp="2025-02-13 15:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:36:42.746417191 +0000 UTC m=+88.742222054" watchObservedRunningTime="2025-02-13 15:36:42.747073319 +0000 UTC m=+88.742878172" Feb 13 15:36:44.073351 kubelet[2665]: E0213 15:36:44.073302 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:44.255683 systemd[1]: run-containerd-runc-k8s.io-cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3-runc.5TsvbP.mount: Deactivated successfully. 
Feb 13 15:36:45.336323 systemd-networkd[1404]: lxc_health: Link UP Feb 13 15:36:45.336711 systemd-networkd[1404]: lxc_health: Gained carrier Feb 13 15:36:46.075070 kubelet[2665]: E0213 15:36:46.074811 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:46.741286 kubelet[2665]: E0213 15:36:46.741205 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:47.063770 systemd-networkd[1404]: lxc_health: Gained IPv6LL Feb 13 15:36:47.743587 kubelet[2665]: E0213 15:36:47.743548 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:49.157817 kubelet[2665]: E0213 15:36:49.157771 2665 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:36:50.571134 systemd[1]: run-containerd-runc-k8s.io-cb95b79b5faab4949eead5fbf3e729e457ffc41b71b9eff75323866940b45ee3-runc.Mgk3q9.mount: Deactivated successfully. Feb 13 15:36:52.723511 sshd[4494]: Connection closed by 10.0.0.1 port 59270 Feb 13 15:36:52.724039 sshd-session[4490]: pam_unix(sshd:session): session closed for user core Feb 13 15:36:52.728647 systemd[1]: sshd@28-10.0.0.142:22-10.0.0.1:59270.service: Deactivated successfully. Feb 13 15:36:52.730747 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:36:52.731670 systemd-logind[1455]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:36:52.732643 systemd-logind[1455]: Removed session 29.