Feb 13 15:52:33.892041 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025
Feb 13 15:52:33.892088 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:52:33.892105 kernel: BIOS-provided physical RAM map:
Feb 13 15:52:33.892114 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:52:33.892123 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:52:33.892131 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:52:33.892142 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:52:33.892152 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:52:33.892161 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:52:33.892170 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:52:33.892179 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:52:33.892191 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:52:33.892200 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:52:33.892209 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:52:33.892221 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:52:33.892231 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:52:33.892244 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:52:33.892254 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:52:33.892263 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:52:33.892273 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:52:33.892283 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:52:33.892292 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:52:33.892302 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:52:33.892311 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:52:33.892322 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:52:33.892331 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:52:33.892341 kernel: NX (Execute Disable) protection: active
Feb 13 15:52:33.892354 kernel: APIC: Static calls initialized
Feb 13 15:52:33.892363 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:52:33.892372 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:52:33.892381 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:52:33.892390 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:52:33.892399 kernel: extended physical RAM map:
Feb 13 15:52:33.892409 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:52:33.892418 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:52:33.892427 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:52:33.892437 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:52:33.892446 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:52:33.892455 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:52:33.892468 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:52:33.892481 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:52:33.892491 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:52:33.892500 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:52:33.892510 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:52:33.892520 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:52:33.892533 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:52:33.892542 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:52:33.892552 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:52:33.892562 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:52:33.892572 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:52:33.892582 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:52:33.892592 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:52:33.892602 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:52:33.892612 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:52:33.892625 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:52:33.892635 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:52:33.892644 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:52:33.892654 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:52:33.892663 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:52:33.892682 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:52:33.892691 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:52:33.892701 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:52:33.892711 kernel: random: crng init done
Feb 13 15:52:33.892721 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:52:33.892731 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:52:33.892740 kernel: secureboot: Secure boot disabled
Feb 13 15:52:33.892754 kernel: SMBIOS 2.8 present.
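The BIOS-e820 table above is the firmware's map of physical memory; everything the kernel can manage must come out of the "usable" ranges. As a worked check, here is a small sketch that totals those ranges from dmesg-style text (the regex and function name are illustrative, not part of any kernel tool; it assumes the line format shown above):

    import re

    # Matches dmesg lines like:
    #   BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
    E820_RE = re.compile(
        r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] "
        r"(usable|reserved|ACPI data|ACPI NVS)"
    )

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all e820 ranges the firmware marked 'usable'."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total

    # The usable ranges above sum to about 2,627,579,904 bytes (~2.45 GiB),
    # roughly matching the "Memory: 2387720K/2565800K available" line printed
    # later in this boot (the kernel reserves some memory before that point).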
Feb 13 15:52:33.892764 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:52:33.892774 kernel: Hypervisor detected: KVM
Feb 13 15:52:33.892783 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:52:33.892793 kernel: kvm-clock: using sched offset of 2732327882 cycles
Feb 13 15:52:33.892804 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:52:33.892814 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 15:52:33.892824 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:52:33.892835 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:52:33.892846 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:52:33.892860 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:52:33.892870 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:52:33.892880 kernel: Using GB pages for direct mapping
Feb 13 15:52:33.892890 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:52:33.892901 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:52:33.892911 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:52:33.892922 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:52:33.892932 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:52:33.892942 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:52:33.892956 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:52:33.892966 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:52:33.892976 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:52:33.892987 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:52:33.892997 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:52:33.893007 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:52:33.893017 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:52:33.893028 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:52:33.893038 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:52:33.893052 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:52:33.893062 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:52:33.893098 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:52:33.893109 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:52:33.893119 kernel: No NUMA configuration found
Feb 13 15:52:33.893130 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:52:33.893141 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:52:33.893151 kernel: Zone ranges:
Feb 13 15:52:33.893162 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:52:33.893177 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:52:33.893187 kernel: Normal empty
Feb 13 15:52:33.893197 kernel: Movable zone start for each node
Feb 13 15:52:33.893207 kernel: Early memory node ranges
Feb 13 15:52:33.893218 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:52:33.893228 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:52:33.893239 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:52:33.893249 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:52:33.893260 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:52:33.893270 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:52:33.893284 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:52:33.893294 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:52:33.893305 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:52:33.893315 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:52:33.893325 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:52:33.893345 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:52:33.893359 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:52:33.893371 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:52:33.893383 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:52:33.893395 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:52:33.893406 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:52:33.893416 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:52:33.893430 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:52:33.893441 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:52:33.893451 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:52:33.893462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:52:33.893472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:52:33.893486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:52:33.893497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:52:33.893507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:52:33.893518 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:52:33.893528 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:52:33.893539 kernel: TSC deadline timer available
Feb 13 15:52:33.893549 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:52:33.893560 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:52:33.893570 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:52:33.893584 kernel: kvm-guest: setup PV sched yield
Feb 13 15:52:33.893595 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:52:33.893605 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:52:33.893616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:52:33.893627 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:52:33.893638 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:52:33.893648 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:52:33.893658 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:52:33.893679 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:52:33.893694 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:52:33.893706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:52:33.893718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:52:33.893728 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:52:33.893739 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:52:33.893750 kernel: Fallback order for Node 0: 0
Feb 13 15:52:33.893760 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:52:33.893771 kernel: Policy zone: DMA32
Feb 13 15:52:33.893785 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:52:33.893796 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved)
Feb 13 15:52:33.893807 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:52:33.893817 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 15:52:33.893828 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:52:33.893838 kernel: Dynamic Preempt: voluntary
Feb 13 15:52:33.893849 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:52:33.893861 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:52:33.893872 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:52:33.893886 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:52:33.893897 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:52:33.893907 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:52:33.893918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:52:33.893928 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:52:33.893938 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:52:33.893949 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:52:33.893959 kernel: Console: colour dummy device 80x25
Feb 13 15:52:33.893970 kernel: printk: console [ttyS0] enabled
Feb 13 15:52:33.893984 kernel: ACPI: Core revision 20230628
Feb 13 15:52:33.893994 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:52:33.894005 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:52:33.894015 kernel: x2apic enabled
Feb 13 15:52:33.894026 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:52:33.894036 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:52:33.894047 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:52:33.894057 kernel: kvm-guest: setup PV IPIs
Feb 13 15:52:33.894122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:52:33.894138 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:52:33.894148 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 15:52:33.894159 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:52:33.894169 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:52:33.894180 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:52:33.894190 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:52:33.894200 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:52:33.894211 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:52:33.894221 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:52:33.894234 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:52:33.894245 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:52:33.894255 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:52:33.894266 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:52:33.894276 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:52:33.894288 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:52:33.894299 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:52:33.894310 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:52:33.894323 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:52:33.894333 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:52:33.894344 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:52:33.894355 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:52:33.894365 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:52:33.894376 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:52:33.894387 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:52:33.894397 kernel: landlock: Up and running.
Feb 13 15:52:33.894408 kernel: SELinux: Initializing.
Feb 13 15:52:33.894423 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:52:33.894434 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:52:33.894445 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:52:33.894455 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:52:33.894466 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:52:33.894477 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:52:33.894488 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:52:33.894498 kernel: ... version: 0
Feb 13 15:52:33.894509 kernel: ... bit width: 48
Feb 13 15:52:33.894524 kernel: ... generic registers: 6
Feb 13 15:52:33.894534 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:52:33.894545 kernel: ... max period: 00007fffffffffff
Feb 13 15:52:33.894556 kernel: ... fixed-purpose events: 0
Feb 13 15:52:33.894567 kernel: ... event mask: 000000000000003f
Feb 13 15:52:33.894577 kernel: signal: max sigframe size: 1776
Feb 13 15:52:33.894587 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:52:33.894598 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:52:33.894609 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:52:33.894623 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:52:33.894634 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:52:33.894645 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:52:33.894656 kernel: smpboot: Max logical packages: 1
Feb 13 15:52:33.894675 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 15:52:33.894686 kernel: devtmpfs: initialized
Feb 13 15:52:33.894697 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:52:33.894708 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:52:33.894719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:52:33.894729 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:52:33.894744 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:52:33.894755 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:52:33.894766 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:52:33.894776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:52:33.894787 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:52:33.894798 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:52:33.894809 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:52:33.894819 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:52:33.894833 kernel: audit: type=2000 audit(1739461953.662:1): state=initialized audit_enabled=0 res=1
Feb 13 15:52:33.894844 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:52:33.894854 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:52:33.894865 kernel: cpuidle: using governor menu
Feb 13 15:52:33.894876 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:52:33.894886 kernel: dca service started, version 1.12.1
Feb 13 15:52:33.894897 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:52:33.894908 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:52:33.894919 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
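The BogoMIPS figures above can be sanity-checked from lpj alone. Using the kernel's classic formula BogoMIPS = lpj / (500000 / HZ), and assuming CONFIG_HZ=1000 (an inference from the arithmetic; HZ itself is not logged), lpj=2794750 gives 5589.50 per CPU, and four CPUs give the 22358.00 total reported at the end of SMP bringup:

    lpj = 2_794_750                    # loops_per_jiffy, from the calibration line above
    hz = 1000                          # assumed CONFIG_HZ; chosen because it reproduces the log
    bogomips = lpj / (500_000 / hz)    # classic kernel formula
    assert round(bogomips, 2) == 5589.50
    assert round(4 * bogomips, 2) == 22358.00  # "Total of 4 processors activated"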
Feb 13 15:52:33.894933 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:52:33.894943 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:52:33.894954 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:52:33.894965 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:52:33.894976 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:52:33.894987 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:52:33.894997 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:52:33.895008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:52:33.895018 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:52:33.895032 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:52:33.895043 kernel: ACPI: Interpreter enabled
Feb 13 15:52:33.895054 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:52:33.895064 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:52:33.895089 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:52:33.895109 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:52:33.895120 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:52:33.895131 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:52:33.895350 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:52:33.895522 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:52:33.895711 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:52:33.895728 kernel: PCI host bridge to bus 0000:00
Feb 13 15:52:33.895893 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:52:33.896046 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:52:33.896228 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:52:33.896387 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:52:33.896534 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:52:33.896691 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:52:33.896840 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:52:33.897017 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:52:33.897216 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:52:33.897376 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:52:33.897544 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:52:33.897719 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:52:33.897882 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:52:33.898042 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:52:33.898246 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:52:33.898410 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:52:33.898577 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:52:33.898752 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:52:33.898926 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:52:33.899143 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:52:33.899291 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:52:33.899438 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:52:33.900156 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:52:33.900293 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:52:33.900416 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:52:33.900538 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:52:33.900660 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:52:33.900803 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:52:33.900928 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:52:33.901122 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:52:33.901261 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:52:33.901383 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:52:33.901511 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:52:33.901634 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:52:33.901645 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:52:33.901654 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:52:33.901662 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:52:33.901682 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:52:33.901691 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:52:33.901699 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:52:33.901707 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:52:33.901715 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:52:33.901722 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:52:33.901730 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:52:33.901738 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:52:33.901746 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:52:33.901756 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:52:33.901764 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:52:33.901772 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:52:33.901780 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:52:33.901788 kernel: iommu: Default domain type: Translated
Feb 13 15:52:33.901796 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:52:33.901804 kernel: efivars: Registered efivars operations
Feb 13 15:52:33.901812 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:52:33.901820 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:52:33.901830 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:52:33.901839 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:52:33.901846 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:52:33.901854 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:52:33.901862 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:52:33.901870 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:52:33.901878 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:52:33.901885 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:52:33.902010 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:52:33.902156 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:52:33.902280 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:52:33.902291 kernel: vgaarb: loaded
Feb 13 15:52:33.902299 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:52:33.902307 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:52:33.902315 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:52:33.902323 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:52:33.902332 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:52:33.902340 kernel: pnp: PnP ACPI init
Feb 13 15:52:33.902482 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:52:33.902494 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:52:33.902503 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:52:33.902511 kernel: NET: Registered PF_INET protocol family
Feb 13 15:52:33.902537 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:52:33.902547 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:52:33.902558 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:52:33.902566 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:52:33.902576 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:52:33.902585 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:52:33.902593 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:52:33.902601 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:52:33.902609 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:52:33.902618 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:52:33.902755 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:52:33.902880 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:52:33.903001 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:52:33.903154 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:52:33.903272 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:52:33.903387 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:52:33.903526 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:52:33.903642 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:52:33.903654 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:52:33.903662 kernel: Initialise system trusted keyrings
Feb 13 15:52:33.903686 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:52:33.903694 kernel: Key type asymmetric registered
Feb 13 15:52:33.903703 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:52:33.903711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:52:33.903719 kernel: io scheduler mq-deadline registered
Feb 13 15:52:33.903727 kernel: io scheduler kyber registered
Feb 13 15:52:33.903735 kernel: io scheduler bfq registered
Feb 13 15:52:33.903743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:52:33.903752 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:52:33.903763 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:52:33.903774 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:52:33.903782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:52:33.903791 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:52:33.903799 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:52:33.903807 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:52:33.903818 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:52:33.903948 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:52:33.903960 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:52:33.904095 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:52:33.904219 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:52:33 UTC (1739461953)
Feb 13 15:52:33.904339 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:52:33.904350 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:52:33.904359 kernel: efifb: probing for efifb
Feb 13 15:52:33.904372 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:52:33.904380 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:52:33.904389 kernel: efifb: scrolling: redraw
Feb 13 15:52:33.904397 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:52:33.904405 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:52:33.904413 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:52:33.904421 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:52:33.904430 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:52:33.904438 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:52:33.904449 kernel: Segment Routing with IPv6
Feb 13 15:52:33.904457 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:52:33.904466 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:52:33.904474 kernel: Key type dns_resolver registered
Feb 13 15:52:33.904482 kernel: IPI shorthand broadcast: enabled
Feb 13 15:52:33.904490 kernel: sched_clock: Marking stable (588002174, 184137557)->(888906178, -116766447)
Feb 13 15:52:33.904499 kernel: registered taskstats version 1
Feb 13 15:52:33.904507 kernel: Loading compiled-in X.509 certificates
Feb 13 15:52:33.904515 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21'
Feb 13 15:52:33.904528 kernel: Key type .fscrypt registered
Feb 13 15:52:33.904536 kernel: Key type fscrypt-provisioning registered
Feb 13 15:52:33.904544 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:52:33.904553 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:52:33.904561 kernel: ima: No architecture policies found
Feb 13 15:52:33.904569 kernel: clk: Disabling unused clocks
Feb 13 15:52:33.904577 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 15:52:33.904585 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:52:33.904594 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 15:52:33.904604 kernel: Run /init as init process
Feb 13 15:52:33.904613 kernel: with arguments:
Feb 13 15:52:33.904621 kernel: /init
Feb 13 15:52:33.904629 kernel: with environment:
Feb 13 15:52:33.904637 kernel: HOME=/
Feb 13 15:52:33.904645 kernel: TERM=linux
Feb 13 15:52:33.904653 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:52:33.904663 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:52:33.904684 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:52:33.904695 systemd[1]: Detected virtualization kvm.
Feb 13 15:52:33.904703 systemd[1]: Detected architecture x86-64.
Feb 13 15:52:33.904712 systemd[1]: Running in initrd.
Feb 13 15:52:33.904721 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:52:33.904730 systemd[1]: Hostname set to .
Feb 13 15:52:33.904738 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:52:33.904747 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:52:33.904759 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:52:33.904768 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:52:33.904777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:52:33.904787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:52:33.904796 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:52:33.904805 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:52:33.904816 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:52:33.904827 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:52:33.904836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:52:33.904845 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:52:33.904853 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:52:33.904863 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:52:33.904871 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:52:33.904880 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:52:33.904889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:52:33.904900 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:52:33.904909 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:52:33.904918 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:52:33.904927 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:52:33.904935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:52:33.904944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:52:33.904953 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:52:33.904962 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:52:33.904971 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:52:33.904982 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:52:33.904991 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:52:33.905000 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:52:33.905009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:52:33.905018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:33.905027 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:52:33.905035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:52:33.905047 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:52:33.905057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:52:33.905079 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:52:33.905092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:52:33.905101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:33.905146 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 15:52:33.905196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:52:33.905224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:33.905244 systemd-journald[194]: Journal started
Feb 13 15:52:33.905287 systemd-journald[194]: Runtime Journal (/run/log/journal/4799b999fc9c426d95ca56dd5ef93529) is 6M, max 48.2M, 42.2M free.
Feb 13 15:52:33.880381 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 15:52:33.907700 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:52:33.912098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:52:33.912509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:52:33.915992 kernel: Bridge firewalling registered
Feb 13 15:52:33.916054 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 15:52:33.917417 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:52:33.919461 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:33.924479 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:52:33.927843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:52:33.930715 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:33.940028 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:52:33.942868 dracut-cmdline[224]: dracut-dracut-053
Feb 13 15:52:33.946364 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:52:33.952216 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:33.990333 systemd-resolved[239]: Positive Trust Anchors:
Feb 13 15:52:33.990347 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:52:33.990377 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:52:33.992856 systemd-resolved[239]: Defaulting to hostname 'linux'.
Feb 13 15:52:33.993929 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:52:34.001096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:52:34.041100 kernel: SCSI subsystem initialized
Feb 13 15:52:34.050087 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:52:34.060089 kernel: iscsi: registered transport (tcp)
Feb 13 15:52:34.085451 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:52:34.085494 kernel: QLogic iSCSI HBA Driver
Feb 13 15:52:34.129271 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:52:34.136235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:52:34.160127 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:52:34.160185 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:52:34.160201 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:52:34.199089 kernel: raid6: avx2x4 gen() 30297 MB/s
Feb 13 15:52:34.216088 kernel: raid6: avx2x2 gen() 31262 MB/s
Feb 13 15:52:34.233295 kernel: raid6: avx2x1 gen() 25888 MB/s
Feb 13 15:52:34.233318 kernel: raid6: using algorithm avx2x2 gen() 31262 MB/s
Feb 13 15:52:34.251331 kernel: raid6: .... xor() 19918 MB/s, rmw enabled
Feb 13 15:52:34.251373 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:52:34.271089 kernel: xor: automatically using best checksumming function avx
Feb 13 15:52:34.414094 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:52:34.425118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:52:34.437186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:34.452008 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Feb 13 15:52:34.457340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
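The raid6 lines above are the kernel benchmarking its Galois-field gen() implementations at boot and keeping the fastest one (avx2x2 at 31262 MB/s here), together with the matching recovery routine. A toy restatement of that selection, using the throughputs measured in this boot:

    # Measured gen() throughputs (MB/s), taken from the raid6 lines above.
    gen_speeds = {"avx2x4": 30297, "avx2x2": 31262, "avx2x1": 25888}

    best = max(gen_speeds, key=gen_speeds.get)  # keep the fastest implementation
    assert best == "avx2x2"  # matches "raid6: using algorithm avx2x2 gen() 31262 MB/s"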
Feb 13 15:52:34.464255 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:52:34.476722 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Feb 13 15:52:34.505983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:52:34.519301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:52:34.581534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:52:34.592464 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:52:34.603874 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:52:34.607126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:52:34.610195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:52:34.613087 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 15:52:34.652196 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:52:34.652379 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:52:34.652396 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:52:34.652411 kernel: GPT:9289727 != 19775487
Feb 13 15:52:34.652425 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:52:34.652439 kernel: GPT:9289727 != 19775487
Feb 13 15:52:34.652451 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:52:34.652471 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:34.652485 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:52:34.652499 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:52:34.652512 kernel: libata version 3.00 loaded.
Feb 13 15:52:34.613710 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:52:34.621307 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:52:34.631154 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:52:34.654769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:52:34.655156 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
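The GPT complaints above are expected on a first boot: the backup GPT header belongs on the last LBA of the disk, but the image was built for a smaller device, so the header still sits where the image ended. The numbers in the log line up exactly (assuming 512-byte sectors, as the virtio_blk line states):

    disk_sectors = 19_775_488               # "virtio1: [vda] 19775488 512-byte logical blocks"
    backup_lba_found = 9_289_727            # where the image left the backup header
    backup_lba_expected = disk_sectors - 1  # the backup header belongs on the last LBA

    assert backup_lba_expected == 19_775_487  # hence "GPT:9289727 != 19775487"
    # Tools such as sgdisk -e (or parted's fix prompt) relocate the backup GPT
    # structures to the end of the disk; the disk-uuid.service messages later in
    # this log show the headers being rewritten during this boot.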
Feb 13 15:52:34.672289 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 15:52:34.692142 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 15:52:34.692161 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 15:52:34.692321 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 15:52:34.692466 kernel: scsi host0: ahci
Feb 13 15:52:34.692621 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (459)
Feb 13 15:52:34.692634 kernel: scsi host1: ahci
Feb 13 15:52:34.692806 kernel: scsi host2: ahci
Feb 13 15:52:34.692958 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470)
Feb 13 15:52:34.692970 kernel: scsi host3: ahci
Feb 13 15:52:34.693132 kernel: scsi host4: ahci
Feb 13 15:52:34.693282 kernel: scsi host5: ahci
Feb 13 15:52:34.693430 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Feb 13 15:52:34.693442 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Feb 13 15:52:34.693456 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Feb 13 15:52:34.693467 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Feb 13 15:52:34.693478 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Feb 13 15:52:34.693488 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Feb 13 15:52:34.657226 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:52:34.661246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:34.661396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:34.666188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:34.677380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:34.701022 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:52:34.702775 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:34.720551 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:52:34.741570 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:52:34.743019 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:52:34.754606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:52:34.767190 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:52:34.768342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:34.768394 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:34.770847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:34.772596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:52:34.787021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:34.788147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:52:34.809651 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:34.902089 disk-uuid[553]: Primary Header is updated.
Feb 13 15:52:34.902089 disk-uuid[553]: Secondary Entries is updated.
Feb 13 15:52:34.902089 disk-uuid[553]: Secondary Header is updated.
Feb 13 15:52:34.905897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:34.910094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:35.003093 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:52:35.003153 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:52:35.004101 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:52:35.005091 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:52:35.006092 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:52:35.007100 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:52:35.008092 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:52:35.008107 kernel: ata3.00: applying bridge limits
Feb 13 15:52:35.009155 kernel: ata3.00: configured for UDMA/100
Feb 13 15:52:35.010095 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:52:35.052090 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:52:35.066815 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:52:35.066829 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:52:35.919775 disk-uuid[568]: The operation has completed successfully.
Feb 13 15:52:35.921026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:52:35.946155 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:52:35.946285 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:52:35.997179 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:52:36.000196 sh[596]: Success
Feb 13 15:52:36.012095 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:52:36.046255 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:52:36.056461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:52:36.058988 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:52:36.073093 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85
Feb 13 15:52:36.073125 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:36.075166 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:52:36.075180 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:52:36.076677 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:52:36.080484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:52:36.081121 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:52:36.093196 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:52:36.095714 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:52:36.106166 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:52:36.106225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:36.106237 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:52:36.109330 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:52:36.117869 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:52:36.120286 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:52:36.203969 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:52:36.225235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:52:36.251351 systemd-networkd[775]: lo: Link UP
Feb 13 15:52:36.251361 systemd-networkd[775]: lo: Gained carrier
Feb 13 15:52:36.253140 systemd-networkd[775]: Enumeration completed
Feb 13 15:52:36.253238 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:52:36.253489 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:52:36.253494 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:52:36.254594 systemd-networkd[775]: eth0: Link UP
Feb 13 15:52:36.254598 systemd-networkd[775]: eth0: Gained carrier
Feb 13 15:52:36.254618 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:52:36.255859 systemd[1]: Reached target network.target - Network.
Feb 13 15:52:36.291118 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:52:36.297815 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:52:36.308215 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:52:36.356235 ignition[782]: Ignition 2.20.0
Feb 13 15:52:36.356247 ignition[782]: Stage: fetch-offline
Feb 13 15:52:36.356280 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:36.356290 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:52:36.356374 ignition[782]: parsed url from cmdline: ""
Feb 13 15:52:36.356378 ignition[782]: no config URL provided
Feb 13 15:52:36.356383 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:52:36.356392 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:52:36.356417 ignition[782]: op(1): [started] loading QEMU firmware config module
Feb 13 15:52:36.356423 ignition[782]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:52:36.364270 ignition[782]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:52:36.403741 ignition[782]: parsing config with SHA512: af84ce8a21c79f0d9cc8ea45aef26aca824ecc6dae9c53ed11afd0685e320ed1b686597cf65e5fa51b3205e1d4f2a67ec85aafe7b50957bc434fa6a6fed31a5c
Feb 13 15:52:36.407216 unknown[782]: fetched base config from "system"
Feb 13 15:52:36.407435 unknown[782]: fetched user config from "qemu"
Feb 13 15:52:36.407784 ignition[782]: fetch-offline: fetch-offline passed
Feb 13 15:52:36.407848 ignition[782]: Ignition finished successfully
Feb 13 15:52:36.413129 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
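Ignition logs the SHA512 digest of whatever config it ends up parsing (here one delivered via QEMU's fw_cfg, since no config URL was provided on the command line). To check a given config against such a log line, hashing it the same way is enough; a minimal sketch with a hypothetical file path:

    import hashlib

    def config_sha512(path: str) -> str:
        """Hex SHA512 of an Ignition config, comparable to the digest Ignition logs."""
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    # Hypothetical usage; the path is illustrative. In this boot the user config
    # came from fw_cfg rather than a file on disk.
    # print(config_sha512("/usr/lib/ignition/user.ign"))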
Feb 13 15:52:36.413437 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:52:36.425201 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:52:36.441330 ignition[792]: Ignition 2.20.0
Feb 13 15:52:36.441339 ignition[792]: Stage: kargs
Feb 13 15:52:36.441477 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:36.441487 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:52:36.442301 ignition[792]: kargs: kargs passed
Feb 13 15:52:36.442337 ignition[792]: Ignition finished successfully
Feb 13 15:52:36.449216 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:52:36.463227 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:52:36.475878 ignition[801]: Ignition 2.20.0
Feb 13 15:52:36.475888 ignition[801]: Stage: disks
Feb 13 15:52:36.476030 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:36.476041 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:52:36.476837 ignition[801]: disks: disks passed
Feb 13 15:52:36.476875 ignition[801]: Ignition finished successfully
Feb 13 15:52:36.482826 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:52:36.484307 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:52:36.486559 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:52:36.488057 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:52:36.490535 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:52:36.490619 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:52:36.500190 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:52:36.512165 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:52:36.675312 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:52:37.083138 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:52:37.169104 kernel: EXT4-fs (vda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none.
Feb 13 15:52:37.169866 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:52:37.171516 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:52:37.183139 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:52:37.185182 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:52:37.186747 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:52:37.192469 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820)
Feb 13 15:52:37.186799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:52:37.186828 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:52:37.203134 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:52:37.203162 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:37.203174 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:52:37.203185 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:52:37.193617 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:52:37.201082 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:52:37.207843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:52:37.236049 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:52:37.240098 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:52:37.244082 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:52:37.248172 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:52:37.333924 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:52:37.348146 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:52:37.352326 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:52:37.357132 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:52:37.375580 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:52:37.384167 ignition[934]: INFO : Ignition 2.20.0
Feb 13 15:52:37.384167 ignition[934]: INFO : Stage: mount
Feb 13 15:52:37.386044 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:37.386044 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:52:37.386044 ignition[934]: INFO : mount: mount passed
Feb 13 15:52:37.386044 ignition[934]: INFO : Ignition finished successfully
Feb 13 15:52:37.392315 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:52:37.405142 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:52:38.073021 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:52:38.089203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:52:38.098099 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (947)
Feb 13 15:52:38.101691 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:52:38.101721 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:52:38.101733 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:52:38.105107 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:52:38.106268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:52:38.127438 ignition[964]: INFO : Ignition 2.20.0
Feb 13 15:52:38.127438 ignition[964]: INFO : Stage: files
Feb 13 15:52:38.129265 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:38.129265 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:52:38.129265 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:52:38.129265 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:52:38.129265 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:52:38.135921 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:52:38.135921 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:52:38.135921 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:52:38.133777 unknown[964]: wrote ssh authorized keys file for user: core
Feb 13 15:52:38.141466 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:52:38.141466 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:52:38.215648 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:52:38.315182 systemd-networkd[775]: eth0: Gained IPv6LL
Feb 13 15:52:38.651985 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:52:38.657311 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:52:38.657311 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:52:39.034393 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:52:39.117041 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:52:39.119105 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 15:52:39.437778 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:52:39.704153 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:52:39.704153 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:52:39.707649 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:52:39.709616 ignition[964]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:52:39.728590 ignition[964]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:52:39.734691 ignition[964]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:52:39.736414 ignition[964]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:52:39.736414 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:52:39.736414 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:52:39.736414 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:52:39.736414 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:52:39.736414 ignition[964]: INFO : files: files passed
Feb 13 15:52:39.736414 ignition[964]: INFO : Ignition finished successfully
Feb 13 15:52:39.748116 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:52:39.760184 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:52:39.762021 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:52:39.768593 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:52:39.768715 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:52:39.772279 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:52:39.773805 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:52:39.773805 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:52:39.778525 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:52:39.776336 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:52:39.779113 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:52:39.791195 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:52:39.813235 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:52:39.813357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:52:39.814561 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:52:39.817027 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:52:39.819143 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:52:39.828229 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:52:39.843448 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:52:39.855183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:52:39.864273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:52:39.865658 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:52:39.868327 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:52:39.870569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:52:39.870692 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:52:39.873293 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:52:39.875415 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:52:39.877673 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:52:39.879949 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:52:39.882192 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:52:39.884576 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:52:39.886926 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:52:39.889456 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:52:39.891678 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:52:39.894095 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:52:39.896057 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:52:39.896216 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:52:39.898760 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:52:39.900351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:52:39.902855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:52:39.902963 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:52:39.905231 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:52:39.905354 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:52:39.907952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:52:39.908097 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:52:39.910089 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:52:39.911981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:52:39.912125 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:52:39.914898 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:52:39.916927 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:52:39.919052 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:52:39.919158 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:52:39.921077 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:52:39.921161 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:52:39.923425 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:52:39.923545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:52:39.925698 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:52:39.925804 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:52:39.935202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:52:39.937143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:52:39.938744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:52:39.948303 ignition[1020]: INFO : Ignition 2.20.0
Feb 13 15:52:39.948303 ignition[1020]: INFO : Stage: umount
Feb 13 15:52:39.948303 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:52:39.948303 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:52:39.948303 ignition[1020]: INFO : umount: umount passed
Feb 13 15:52:39.948303 ignition[1020]: INFO : Ignition finished successfully
Feb 13 15:52:39.938934 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:52:39.941401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:52:39.941514 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:52:39.949468 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:52:39.949584 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:52:39.951898 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:52:39.951999 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:52:39.955803 systemd[1]: Stopped target network.target - Network.
Feb 13 15:52:39.957018 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:52:39.957118 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:52:39.959159 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:52:39.959205 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:52:39.961513 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:52:39.961559 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:52:39.963730 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:52:39.963776 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:52:39.966382 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:52:39.968557 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:39.971881 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:52:39.973738 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:52:39.973858 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:52:39.977690 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:52:39.978336 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:52:39.978424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:39.981834 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:52:39.985793 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:52:39.985945 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:52:39.990144 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:52:39.990343 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:52:39.990382 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:52:40.000209 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:52:40.001462 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:52:40.001528 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:52:40.003762 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:52:40.003811 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:52:40.006086 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:52:40.006134 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:52:40.008150 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:40.017461 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:52:40.030687 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:52:40.031737 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:52:40.034575 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:52:40.035563 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:52:40.038064 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:52:40.039091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:52:40.041355 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:52:40.041396 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:52:40.044335 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:52:40.045265 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:52:40.047431 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:52:40.048352 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:52:40.050423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:52:40.050479 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:52:40.073211 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:52:40.073274 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:52:40.073327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:40.121945 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:52:40.121993 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:52:40.123201 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:52:40.123247 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:52:40.125420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:52:40.125464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:52:40.136933 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:52:40.137049 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:52:40.269129 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:52:40.270214 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:52:40.272432 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:52:40.274734 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:52:40.275807 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:52:40.288182 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:52:40.296940 systemd[1]: Switching root.
Feb 13 15:52:40.334343 systemd-journald[194]: Journal stopped
Feb 13 15:52:41.830707 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:52:41.830782 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:52:41.830798 kernel: SELinux: policy capability open_perms=1
Feb 13 15:52:41.830809 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:52:41.830821 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:52:41.830832 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:52:41.830844 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:52:41.830856 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:52:41.830867 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:52:41.830879 kernel: audit: type=1403 audit(1739461961.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:52:41.830896 systemd[1]: Successfully loaded SELinux policy in 45.365ms.
Feb 13 15:52:41.830938 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.971ms.
Feb 13 15:52:41.830951 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:52:41.830964 systemd[1]: Detected virtualization kvm.
Feb 13 15:52:41.830977 systemd[1]: Detected architecture x86-64.
Feb 13 15:52:41.830995 systemd[1]: Detected first boot.
Feb 13 15:52:41.831007 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:52:41.831020 zram_generator::config[1067]: No configuration found.
Feb 13 15:52:41.831033 kernel: Guest personality initialized and is inactive
Feb 13 15:52:41.831046 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 15:52:41.831058 kernel: Initialized host personality
Feb 13 15:52:41.834153 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:52:41.834172 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:52:41.834187 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:52:41.834199 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:52:41.834211 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:52:41.834224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:52:41.834240 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:52:41.834253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:52:41.834266 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:52:41.834278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:52:41.834290 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:52:41.834303 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:52:41.834315 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:52:41.834327 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:52:41.834341 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:52:41.834356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:52:41.834368 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:52:41.834381 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:52:41.834393 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:52:41.834406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:52:41.834419 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:52:41.834431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:52:41.834444 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:52:41.834479 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:52:41.834492 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:52:41.834504 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:52:41.834516 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:52:41.834529 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:52:41.834541 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:52:41.834553 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:52:41.834565 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:52:41.834577 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:52:41.834593 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:52:41.834606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:52:41.834618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:52:41.834632 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:52:41.834644 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:52:41.834656 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:52:41.834668 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:52:41.834686 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:52:41.834699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:41.834714 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:52:41.834726 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:52:41.834739 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:52:41.834751 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:52:41.834764 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:52:41.834776 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:52:41.834788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:41.834801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:52:41.834816 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:52:41.834829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:41.834841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:52:41.834853 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:41.834865 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:52:41.834878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:41.834890 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:52:41.834904 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:52:41.834919 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:52:41.834931 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:52:41.834943 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:52:41.834956 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:52:41.834969 kernel: loop: module loaded
Feb 13 15:52:41.834981 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:52:41.834993 kernel: fuse: init (API version 7.39)
Feb 13 15:52:41.835005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:52:41.835017 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:52:41.835033 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:52:41.835232 systemd-journald[1131]: Collecting audit messages is disabled.
Feb 13 15:52:41.835261 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:52:41.835277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:52:41.835290 systemd-journald[1131]: Journal started
Feb 13 15:52:41.835313 systemd-journald[1131]: Runtime Journal (/run/log/journal/4799b999fc9c426d95ca56dd5ef93529) is 6M, max 48.2M, 42.2M free.
Feb 13 15:52:41.622181 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:52:41.635956 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:52:41.636427 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:52:41.838454 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:52:41.838484 systemd[1]: Stopped verity-setup.service.
Feb 13 15:52:41.838499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:41.845349 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:52:41.846401 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:52:41.847798 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:52:41.849255 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:52:41.850572 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:52:41.852257 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:52:41.854112 kernel: ACPI: bus type drm_connector registered
Feb 13 15:52:41.855575 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:52:41.857083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:52:41.858779 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:52:41.859001 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:52:41.861769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:41.862001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:41.863491 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:52:41.863717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:52:41.865215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:41.865430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:41.867301 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:52:41.867525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:52:41.869036 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:41.869269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:41.870841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:52:41.872462 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:52:41.874275 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:52:41.877901 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:52:41.891142 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:52:41.897160 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:52:41.899421 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:52:41.900679 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:52:41.900762 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:52:41.902951 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:52:41.905385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:52:41.907729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:52:41.908969 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:41.911431 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:52:41.913817 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:52:41.915128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:52:41.918206 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:52:41.920088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:52:41.921793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:52:41.926377 systemd-journald[1131]: Time spent on flushing to /var/log/journal/4799b999fc9c426d95ca56dd5ef93529 is 13.496ms for 1059 entries.
Feb 13 15:52:41.926377 systemd-journald[1131]: System Journal (/var/log/journal/4799b999fc9c426d95ca56dd5ef93529) is 8M, max 195.6M, 187.6M free.
Feb 13 15:52:42.033028 systemd-journald[1131]: Received client request to flush runtime journal.
Feb 13 15:52:42.033199 kernel: loop0: detected capacity change from 0 to 147912
Feb 13 15:52:42.033249 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:52:41.935213 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:52:41.942080 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:52:41.945353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:52:41.946678 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:52:41.948218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:52:41.966519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:52:41.978314 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:52:41.979978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:52:41.989990 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:52:42.001958 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:52:42.001971 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:52:42.008203 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:52:42.019189 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:52:42.022275 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:52:42.026614 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:52:42.034273 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:52:42.037496 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:52:42.044986 kernel: loop1: detected capacity change from 0 to 205544
Feb 13 15:52:42.040218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:52:42.106181 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:52:42.118354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:52:42.133044 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Feb 13 15:52:42.133077 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Feb 13 15:52:42.137796 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:42.142475 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 15:52:42.182095 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 15:52:42.200174 kernel: loop4: detected capacity change from 0 to 205544
Feb 13 15:52:42.208111 kernel: loop5: detected capacity change from 0 to 138176
Feb 13 15:52:42.218157 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:52:42.218783 (sd-merge)[1213]: Merged extensions into '/usr'.
Feb 13 15:52:42.223103 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:52:42.223121 systemd[1]: Reloading...
Feb 13 15:52:42.279094 zram_generator::config[1240]: No configuration found.
Feb 13 15:52:42.404005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:42.418658 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:52:42.469364 systemd[1]: Reloading finished in 245 ms.
Feb 13 15:52:42.489080 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:52:42.490743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:52:42.492479 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:52:42.511539 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:52:42.513560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:52:42.523470 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:52:42.523485 systemd[1]: Reloading...
Feb 13 15:52:42.537948 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:52:42.538648 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:52:42.539858 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:52:42.540343 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Feb 13 15:52:42.540537 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Feb 13 15:52:42.545435 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:52:42.545515 systemd-tmpfiles[1281]: Skipping /boot
Feb 13 15:52:42.561590 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:52:42.561661 systemd-tmpfiles[1281]: Skipping /boot
Feb 13 15:52:42.584192 zram_generator::config[1308]: No configuration found.
Feb 13 15:52:42.703581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:42.769354 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:52:42.769527 systemd[1]: Reloading finished in 245 ms.
Feb 13 15:52:42.799836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:42.808648 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:52:42.811078 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:52:42.817641 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:52:42.820592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:42.826296 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:52:42.831458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:42.831682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:42.833151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:42.838847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:42.845162 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:42.846498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:42.846623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:52:42.853150 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:52:42.854293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:42.855857 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:52:42.857956 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:52:42.863441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:42.863685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:42.865492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:42.865698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:42.867489 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:42.867699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:42.876024 augenrules[1378]: No rules
Feb 13 15:52:42.876708 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:52:42.877032 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:52:42.880579 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:52:42.884723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:42.885035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:42.892282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:42.894378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:42.896582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:42.897667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:42.897771 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:52:42.902979 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:42.906097 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:52:42.907228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:42.908603 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:52:42.910413 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:52:42.916225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:42.916488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:42.918227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:42.918493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:42.921810 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:42.922032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:42.924549 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:52:42.933059 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:42.938278 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:52:42.939512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:42.942190 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Feb 13 15:52:42.942705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:42.948347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:52:42.951539 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:42.959614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:52:42.960859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:52:42.960984 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:52:42.961119 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:52:42.961202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:42.962184 augenrules[1403]: /sbin/augenrules: No change
Feb 13 15:52:42.962896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:52:42.963264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:52:42.964977 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:52:42.965269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:52:42.966963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:52:42.967198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:52:42.968982 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:52:42.969197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:52:42.971002 augenrules[1424]: No rules
Feb 13 15:52:42.972629 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:52:42.972877 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:52:42.974713 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:52:42.977478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:52:42.985656 systemd-resolved[1351]: Positive Trust Anchors:
Feb 13 15:52:42.985671 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:52:42.985702 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:52:42.989974 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Feb 13 15:52:42.996299 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:52:42.997542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:52:42.997654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:52:43.002478 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:52:43.004288 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:52:43.009804 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:52:43.030490 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:52:43.036853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1445)
Feb 13 15:52:43.075103 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:52:43.076996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:52:43.084136 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:52:43.084268 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:52:43.097799 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 15:52:43.098885 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:52:43.099083 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:52:43.099281 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:52:43.105682 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:52:43.112087 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:52:43.144153 systemd-networkd[1449]: lo: Link UP Feb 13 15:52:43.144167 systemd-networkd[1449]: lo: Gained carrier Feb 13 15:52:43.148133 systemd-networkd[1449]: Enumeration completed Feb 13 15:52:43.148249 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:52:43.149619 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:52:43.150905 systemd[1]: Reached target network.target - Network. Feb 13 15:52:43.151833 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:52:43.153005 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:52:43.153014 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:52:43.153592 systemd-networkd[1449]: eth0: Link UP Feb 13 15:52:43.153596 systemd-networkd[1449]: eth0: Gained carrier Feb 13 15:52:43.153609 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:52:43.199599 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:52:43.203794 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:52:43.207288 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:52:43.209007 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection. Feb 13 15:52:44.122678 kernel: kvm_amd: TSC scaling supported Feb 13 15:52:44.122710 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:52:44.122730 kernel: kvm_amd: Nested Paging enabled Feb 13 15:52:44.122743 kernel: kvm_amd: LBR virtualization supported Feb 13 15:52:44.122759 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:52:44.122775 kernel: kvm_amd: Virtual GIF supported Feb 13 15:52:44.118840 systemd-resolved[1351]: Clock change detected. Flushing caches. Feb 13 15:52:44.118918 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:52:44.118971 systemd-timesyncd[1455]: Initial clock synchronization to Thu 2025-02-13 15:52:44.118808 UTC. Feb 13 15:52:44.121970 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:52:44.134411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:52:44.144867 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:52:44.150660 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:52:44.178974 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:52:44.185724 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
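Above, eth0 is matched against the catch-all /usr/lib/systemd/network/zz-default.network and then acquires 10.0.0.150/16 over DHCP. A minimal .network file of that shape looks like the sketch below; the shipped zz-default.network carries additional Match and DHCP settings not reproduced here:

    # Sketch of a lowest-priority catch-all DHCP network file; not the
    # verbatim Flatcar zz-default.network.
    [Match]
    Name=*

    [Network]
    DHCP=yes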
Feb 13 15:52:44.187475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:52:44.193852 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:52:44.229752 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:52:44.231817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:52:44.233004 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:52:44.234213 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:52:44.235508 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:52:44.236982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:52:44.238341 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:52:44.239643 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:52:44.240927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:52:44.240967 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:52:44.241912 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:52:44.243755 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:52:44.246477 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:52:44.249943 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:52:44.251514 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:52:44.252816 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:52:44.258089 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:52:44.264809 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:52:44.267251 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:52:44.269001 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:52:44.270209 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:52:44.271205 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:52:44.272233 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:52:44.272265 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:52:44.273252 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:52:44.275350 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:52:44.277675 lvm[1489]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:52:44.279673 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:52:44.282789 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:52:44.284156 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
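The recurring "skipped because of an unmet condition check" entries come from Condition*= directives in each unit's [Unit] section: a failed condition skips the unit without marking it failed, which is why these lines are informational rather than errors. An illustrative unit fragment of that kind (not the verbatim Flatcar unit files):

    # Illustrative [Unit] conditions of the kind logged above.
    [Unit]
    Description=Modifies /etc/environment for CoreOS
    ConditionPathExists=/oem/bin/flatcar-setup-environment
    # A virtualization gate like the XenServer unit's would read:
    # ConditionVirtualization=xen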
Feb 13 15:52:44.286461 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:52:44.291521 jq[1492]: false Feb 13 15:52:44.291698 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:52:44.296728 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:52:44.302146 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:52:44.302436 extend-filesystems[1493]: Found loop3 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found loop4 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found loop5 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found sr0 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda1 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda2 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda3 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found usr Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda4 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda6 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda7 Feb 13 15:52:44.305657 extend-filesystems[1493]: Found vda9 Feb 13 15:52:44.305657 extend-filesystems[1493]: Checking size of /dev/vda9 Feb 13 15:52:44.308457 dbus-daemon[1491]: [system] SELinux support is enabled Feb 13 15:52:44.322733 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:52:44.324701 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:52:44.325207 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:52:44.326536 extend-filesystems[1493]: Resized partition /dev/vda9 Feb 13 15:52:44.327746 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:52:44.328355 extend-filesystems[1513]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:52:44.330840 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:52:44.335582 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:52:44.339398 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:52:44.340673 jq[1514]: true Feb 13 15:52:44.348042 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:52:44.348299 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:52:44.348664 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:52:44.348973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:52:44.352186 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:52:44.352491 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
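extend-filesystems enumerates the block devices, finds the /dev/vda9 filesystem smaller than its partition, and hands the growth to resize2fs, which performs an online ext4 resize (553472 to 1864699 4k blocks, completed further down). The manual equivalent, assuming the partition itself has already been enlarged, is roughly:

    # Approximate manual equivalent of the extend-filesystems flow;
    # resize2fs grows a mounted ext4 filesystem online.
    resize2fs /dev/vda9   # grow the filesystem to fill the partition
    df -h /               # confirm the new size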
Feb 13 15:52:44.359626 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:52:44.361648 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1433) Feb 13 15:52:44.363188 jq[1517]: true Feb 13 15:52:44.363767 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:52:44.379794 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:52:44.379828 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:52:44.382429 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:52:44.382456 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:52:44.398992 update_engine[1512]: I20250213 15:52:44.398905 1512 main.cc:92] Flatcar Update Engine starting Feb 13 15:52:44.400098 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:52:44.400180 update_engine[1512]: I20250213 15:52:44.400110 1512 update_check_scheduler.cc:74] Next update check in 7m39s Feb 13 15:52:44.408740 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:52:44.435805 tar[1516]: linux-amd64/helm Feb 13 15:52:44.444847 systemd-logind[1509]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:52:44.444872 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:52:44.447317 systemd-logind[1509]: New seat seat0. Feb 13 15:52:44.451109 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:52:44.461993 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:52:44.478104 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:52:44.486382 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:52:44.497864 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:52:44.505971 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:52:44.506256 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:52:44.509141 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:52:44.522637 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:52:44.529342 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:52:44.536911 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:52:44.539778 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:52:44.541189 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:52:44.768468 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:52:44.768468 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:52:44.768468 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:52:44.771477 extend-filesystems[1493]: Resized filesystem in /dev/vda9 Feb 13 15:52:44.770410 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 13 15:52:44.770712 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:52:44.795710 containerd[1518]: time="2025-02-13T15:52:44.795590041Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:52:44.818483 containerd[1518]: time="2025-02-13T15:52:44.818433353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820264 containerd[1518]: time="2025-02-13T15:52:44.820215083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820264 containerd[1518]: time="2025-02-13T15:52:44.820246041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:52:44.820264 containerd[1518]: time="2025-02-13T15:52:44.820261981Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:52:44.820465 containerd[1518]: time="2025-02-13T15:52:44.820434124Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:52:44.820465 containerd[1518]: time="2025-02-13T15:52:44.820458279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820546 containerd[1518]: time="2025-02-13T15:52:44.820527379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820546 containerd[1518]: time="2025-02-13T15:52:44.820543599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820879 containerd[1518]: time="2025-02-13T15:52:44.820846567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820879 containerd[1518]: time="2025-02-13T15:52:44.820868659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820956 containerd[1518]: time="2025-02-13T15:52:44.820882535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:52:44.820956 containerd[1518]: time="2025-02-13T15:52:44.820893926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.821042 containerd[1518]: time="2025-02-13T15:52:44.821011647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.821303 containerd[1518]: time="2025-02-13T15:52:44.821272516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:52:44.821495 containerd[1518]: time="2025-02-13T15:52:44.821464396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:52:44.821495 containerd[1518]: time="2025-02-13T15:52:44.821485205Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:52:44.821625 containerd[1518]: time="2025-02-13T15:52:44.821585934Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:52:44.821702 containerd[1518]: time="2025-02-13T15:52:44.821671895Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:52:44.835698 bash[1545]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:52:44.837254 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:52:44.840580 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:52:44.936435 containerd[1518]: time="2025-02-13T15:52:44.936347812Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:52:44.936435 containerd[1518]: time="2025-02-13T15:52:44.936432421Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:52:44.936580 containerd[1518]: time="2025-02-13T15:52:44.936455564Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:52:44.936580 containerd[1518]: time="2025-02-13T15:52:44.936481232Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:52:44.936580 containerd[1518]: time="2025-02-13T15:52:44.936502142Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:52:44.936793 containerd[1518]: time="2025-02-13T15:52:44.936756809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:52:44.937147 containerd[1518]: time="2025-02-13T15:52:44.937115872Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:52:44.937280 containerd[1518]: time="2025-02-13T15:52:44.937251517Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:52:44.937317 containerd[1518]: time="2025-02-13T15:52:44.937280210Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:52:44.937317 containerd[1518]: time="2025-02-13T15:52:44.937300218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:52:44.937395 containerd[1518]: time="2025-02-13T15:52:44.937317430Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937395 containerd[1518]: time="2025-02-13T15:52:44.937334152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937395 containerd[1518]: time="2025-02-13T15:52:44.937349831Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937395 containerd[1518]: time="2025-02-13T15:52:44.937368827Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:52:44.937395 containerd[1518]: time="2025-02-13T15:52:44.937389646Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937407158Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937422357Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937436413Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937459827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937479204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937497698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937520 containerd[1518]: time="2025-02-13T15:52:44.937514460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937530299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937549185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937566848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937583439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937623765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937649733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937665994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937683 containerd[1518]: time="2025-02-13T15:52:44.937681713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937834 containerd[1518]: time="2025-02-13T15:52:44.937698916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937834 containerd[1518]: time="2025-02-13T15:52:44.937718262Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:52:44.937834 containerd[1518]: time="2025-02-13T15:52:44.937744832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 15:52:44.937834 containerd[1518]: time="2025-02-13T15:52:44.937762956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.937834 containerd[1518]: time="2025-02-13T15:52:44.937784676Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:52:44.937937 containerd[1518]: time="2025-02-13T15:52:44.937849388Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:52:44.937937 containerd[1518]: time="2025-02-13T15:52:44.937874685Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:52:44.937937 containerd[1518]: time="2025-02-13T15:52:44.937888020Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:52:44.937937 containerd[1518]: time="2025-02-13T15:52:44.937903599Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:52:44.937937 containerd[1518]: time="2025-02-13T15:52:44.937928145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:52:44.938043 containerd[1518]: time="2025-02-13T15:52:44.937944195Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:52:44.938043 containerd[1518]: time="2025-02-13T15:52:44.937959153Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:52:44.938043 containerd[1518]: time="2025-02-13T15:52:44.937973200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:52:44.938373 containerd[1518]: time="2025-02-13T15:52:44.938294783Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:52:44.938373 containerd[1518]: time="2025-02-13T15:52:44.938359865Z" level=info msg="Connect containerd service" Feb 13 15:52:44.938565 containerd[1518]: time="2025-02-13T15:52:44.938401503Z" level=info msg="using legacy CRI server" Feb 13 15:52:44.938565 containerd[1518]: time="2025-02-13T15:52:44.938411952Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:52:44.938565 containerd[1518]: time="2025-02-13T15:52:44.938538710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:52:44.939339 containerd[1518]: time="2025-02-13T15:52:44.939311028Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:52:44.940222 
containerd[1518]: time="2025-02-13T15:52:44.939514630Z" level=info msg="Start subscribing containerd event" Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939653490Z" level=info msg="Start recovering state" Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939653029Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939787912Z" level=info msg="Start event monitor" Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939814312Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939815845Z" level=info msg="Start snapshots syncer" Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939855539Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:52:44.940222 containerd[1518]: time="2025-02-13T15:52:44.939867892Z" level=info msg="Start streaming server" Feb 13 15:52:44.940098 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:52:44.941479 containerd[1518]: time="2025-02-13T15:52:44.941439319Z" level=info msg="containerd successfully booted in 0.147346s" Feb 13 15:52:45.029747 tar[1516]: linux-amd64/LICENSE Feb 13 15:52:45.029850 tar[1516]: linux-amd64/README.md Feb 13 15:52:45.045230 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:52:45.943739 systemd-networkd[1449]: eth0: Gained IPv6LL Feb 13 15:52:45.946960 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:52:45.948866 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:52:45.961851 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:52:45.964514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:52:45.966713 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:52:45.984582 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:52:45.984931 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:52:45.986576 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:52:45.988340 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:52:46.568304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:52:46.569930 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:52:46.571194 systemd[1]: Startup finished in 724ms (kernel) + 7.322s (initrd) + 4.672s (userspace) = 12.719s. Feb 13 15:52:46.571962 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:52:46.968770 kubelet[1605]: E0213 15:52:46.968580 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:52:46.972661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:52:46.972867 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:52:46.973247 systemd[1]: kubelet.service: Consumed 903ms CPU time, 238.4M memory peak. Feb 13 15:52:50.795395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:52:50.796798 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:39078.service - OpenSSH per-connection server daemon (10.0.0.1:39078). Feb 13 15:52:50.848506 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 39078 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:50.850541 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:50.856952 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:52:50.866823 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:52:50.872997 systemd-logind[1509]: New session 1 of user core. Feb 13 15:52:50.877563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:52:50.880980 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:52:50.888274 (systemd)[1622]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:52:50.890611 systemd-logind[1509]: New session c1 of user core. Feb 13 15:52:51.030738 systemd[1622]: Queued start job for default target default.target. Feb 13 15:52:51.042933 systemd[1622]: Created slice app.slice - User Application Slice. Feb 13 15:52:51.042961 systemd[1622]: Reached target paths.target - Paths. Feb 13 15:52:51.043001 systemd[1622]: Reached target timers.target - Timers. Feb 13 15:52:51.044475 systemd[1622]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:52:51.055351 systemd[1622]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:52:51.055471 systemd[1622]: Reached target sockets.target - Sockets. Feb 13 15:52:51.055515 systemd[1622]: Reached target basic.target - Basic System. Feb 13 15:52:51.055557 systemd[1622]: Reached target default.target - Main User Target. Feb 13 15:52:51.055620 systemd[1622]: Startup finished in 157ms. Feb 13 15:52:51.056104 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:52:51.057797 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:52:51.118693 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:39080.service - OpenSSH per-connection server daemon (10.0.0.1:39080). Feb 13 15:52:51.160851 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 39080 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:51.162101 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:51.165878 systemd-logind[1509]: New session 2 of user core. Feb 13 15:52:51.175711 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:52:51.227476 sshd[1635]: Connection closed by 10.0.0.1 port 39080 Feb 13 15:52:51.227834 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:51.243451 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:39080.service: Deactivated successfully. Feb 13 15:52:51.245320 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:52:51.247100 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:52:51.255853 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:39092.service - OpenSSH per-connection server daemon (10.0.0.1:39092). Feb 13 15:52:51.256901 systemd-logind[1509]: Removed session 2. 
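The kubelet exit above is the stock not-yet-bootstrapped state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so the unit keeps failing until that happens. A minimal hand-written KubeletConfiguration, purely for illustration (kubeadm generates a far fuller file):

    # /var/lib/kubelet/config.yaml -- minimal sketch, not the
    # kubeadm-generated original.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    authentication:
      anonymous:
        enabled: false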
Feb 13 15:52:51.292287 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 39092 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:51.293613 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:51.298026 systemd-logind[1509]: New session 3 of user core. Feb 13 15:52:51.307714 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:52:51.356258 sshd[1643]: Connection closed by 10.0.0.1 port 39092 Feb 13 15:52:51.356714 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:51.370272 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:39092.service: Deactivated successfully. Feb 13 15:52:51.371965 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:52:51.373652 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:52:51.380950 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:39102.service - OpenSSH per-connection server daemon (10.0.0.1:39102). Feb 13 15:52:51.382035 systemd-logind[1509]: Removed session 3. Feb 13 15:52:51.414622 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 39102 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:51.415856 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:51.419908 systemd-logind[1509]: New session 4 of user core. Feb 13 15:52:51.429709 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:52:51.481567 sshd[1651]: Connection closed by 10.0.0.1 port 39102 Feb 13 15:52:51.482010 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:51.491163 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:39102.service: Deactivated successfully. Feb 13 15:52:51.492815 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:52:51.494420 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:52:51.504848 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:39104.service - OpenSSH per-connection server daemon (10.0.0.1:39104). Feb 13 15:52:51.505716 systemd-logind[1509]: Removed session 4. Feb 13 15:52:51.539966 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 39104 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:51.541194 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:51.545095 systemd-logind[1509]: New session 5 of user core. Feb 13 15:52:51.554747 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:52:51.611700 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:52:51.612050 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:52:51.629578 sudo[1660]: pam_unix(sudo:session): session closed for user root Feb 13 15:52:51.631073 sshd[1659]: Connection closed by 10.0.0.1 port 39104 Feb 13 15:52:51.631417 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:51.655339 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:39104.service: Deactivated successfully. Feb 13 15:52:51.657139 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:52:51.658444 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:52:51.659756 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:39120.service - OpenSSH per-connection server daemon (10.0.0.1:39120). 
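Each sshd@N-10.0.0.150:22-10.0.0.1:PORT.service above is a per-connection instance: sshd.socket listens with Accept=yes, so systemd spawns one templated service per client, and every login and logout shows up as its own unit. The pattern, sketched rather than quoted from the shipped units:

    # Socket-activated per-connection sshd -- a sketch of the pattern,
    # not the verbatim Flatcar unit files.
    # sshd.socket:
    [Socket]
    ListenStream=22
    Accept=yes                      # one sshd@.service instance per connection

    # sshd@.service:
    [Service]
    ExecStart=-/usr/sbin/sshd -i    # -i: inetd mode, single connection on stdio
    StandardInput=socket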
Feb 13 15:52:51.660369 systemd-logind[1509]: Removed session 5. Feb 13 15:52:51.699229 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 39120 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:51.700614 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:51.704482 systemd-logind[1509]: New session 6 of user core. Feb 13 15:52:51.713703 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:52:51.766070 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:52:51.766373 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:52:51.769821 sudo[1670]: pam_unix(sudo:session): session closed for user root Feb 13 15:52:51.775784 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:52:51.776087 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:52:51.795848 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:52:51.822991 augenrules[1692]: No rules Feb 13 15:52:51.823828 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:52:51.824087 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:52:51.825135 sudo[1669]: pam_unix(sudo:session): session closed for user root Feb 13 15:52:51.826555 sshd[1668]: Connection closed by 10.0.0.1 port 39120 Feb 13 15:52:51.826939 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Feb 13 15:52:51.838195 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:39120.service: Deactivated successfully. Feb 13 15:52:51.839998 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:52:51.841313 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:52:51.855938 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136). Feb 13 15:52:51.856772 systemd-logind[1509]: Removed session 6. Feb 13 15:52:51.890063 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:52:51.891542 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:52:51.895305 systemd-logind[1509]: New session 7 of user core. Feb 13 15:52:51.906701 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:52:51.958734 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:52:51.959041 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:52:52.228834 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:52:52.228933 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:52:52.479959 dockerd[1723]: time="2025-02-13T15:52:52.479838475Z" level=info msg="Starting up" Feb 13 15:52:53.589226 dockerd[1723]: time="2025-02-13T15:52:53.589168950Z" level=info msg="Loading containers: start." 
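The sudo session above removes /etc/audit/rules.d/80-selinux.rules and 99-default.rules and restarts audit-rules.service, after which augenrules reports "No rules": augenrules concatenates /etc/audit/rules.d/*.rules into the active ruleset, so an emptied directory yields an empty ruleset. A hypothetical rules file of the kind that was deleted (the removed files' real contents are not reproduced here):

    # /etc/audit/rules.d/99-default.rules -- hypothetical contents.
    -w /etc/passwd -p wa -k passwd_changes           # watch writes/attr changes
    -a always,exit -F arch=b64 -S execve -k exec_log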
Feb 13 15:52:53.921617 kernel: Initializing XFRM netlink socket Feb 13 15:52:54.002663 systemd-networkd[1449]: docker0: Link UP Feb 13 15:52:54.141319 dockerd[1723]: time="2025-02-13T15:52:54.141264328Z" level=info msg="Loading containers: done." Feb 13 15:52:54.245070 dockerd[1723]: time="2025-02-13T15:52:54.245007537Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:52:54.245255 dockerd[1723]: time="2025-02-13T15:52:54.245122973Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:52:54.245291 dockerd[1723]: time="2025-02-13T15:52:54.245254089Z" level=info msg="Daemon has completed initialization" Feb 13 15:52:54.652510 dockerd[1723]: time="2025-02-13T15:52:54.652449500Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:52:54.652686 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:52:55.320579 containerd[1518]: time="2025-02-13T15:52:55.320522637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:52:56.045536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169500128.mount: Deactivated successfully. Feb 13 15:52:56.899831 containerd[1518]: time="2025-02-13T15:52:56.899768433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:56.900611 containerd[1518]: time="2025-02-13T15:52:56.900508210Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 15:52:56.901762 containerd[1518]: time="2025-02-13T15:52:56.901734870Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:56.904654 containerd[1518]: time="2025-02-13T15:52:56.904604230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:56.905649 containerd[1518]: time="2025-02-13T15:52:56.905617700Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.585046722s" Feb 13 15:52:56.905688 containerd[1518]: time="2025-02-13T15:52:56.905650041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 15:52:56.907014 containerd[1518]: time="2025-02-13T15:52:56.906985264Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:52:57.223305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:52:57.229792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:52:57.375362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
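Once dockerd logs "API listen on /run/docker.sock" above, the daemon is reachable over its Unix socket; /_ping is the standard Engine API health endpoint and answers OK:

    # Probe the freshly started daemon over its Unix socket.
    curl --unix-socket /run/docker.sock http://localhost/_ping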
Feb 13 15:52:57.379346 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:52:57.416046 kubelet[1984]: E0213 15:52:57.415999 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:52:57.422429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:52:57.422672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:52:57.423060 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.1M memory peak. Feb 13 15:52:58.166835 containerd[1518]: time="2025-02-13T15:52:58.166774058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:58.167688 containerd[1518]: time="2025-02-13T15:52:58.167637687Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 15:52:58.168733 containerd[1518]: time="2025-02-13T15:52:58.168704217Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:58.172484 containerd[1518]: time="2025-02-13T15:52:58.172418872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:58.173276 containerd[1518]: time="2025-02-13T15:52:58.173241805Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.266230371s" Feb 13 15:52:58.173276 containerd[1518]: time="2025-02-13T15:52:58.173269637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 15:52:58.173787 containerd[1518]: time="2025-02-13T15:52:58.173745610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:52:59.623828 containerd[1518]: time="2025-02-13T15:52:59.623754836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:59.624969 containerd[1518]: time="2025-02-13T15:52:59.624922746Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 15:52:59.626194 containerd[1518]: time="2025-02-13T15:52:59.626133296Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:59.628835 containerd[1518]: time="2025-02-13T15:52:59.628800056Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:59.629841 containerd[1518]: time="2025-02-13T15:52:59.629798889Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.456009397s" Feb 13 15:52:59.629841 containerd[1518]: time="2025-02-13T15:52:59.629836309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 15:52:59.630342 containerd[1518]: time="2025-02-13T15:52:59.630309036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:53:00.821678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715069080.mount: Deactivated successfully. Feb 13 15:53:01.083004 containerd[1518]: time="2025-02-13T15:53:01.082874005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:01.083771 containerd[1518]: time="2025-02-13T15:53:01.083725902Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 15:53:01.084943 containerd[1518]: time="2025-02-13T15:53:01.084913318Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:01.087000 containerd[1518]: time="2025-02-13T15:53:01.086967991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:01.087629 containerd[1518]: time="2025-02-13T15:53:01.087580339Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.457239153s" Feb 13 15:53:01.087629 containerd[1518]: time="2025-02-13T15:53:01.087623229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 15:53:01.088102 containerd[1518]: time="2025-02-13T15:53:01.088081619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:53:01.644952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922473714.mount: Deactivated successfully. 
Feb 13 15:53:02.409268 containerd[1518]: time="2025-02-13T15:53:02.409203181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:02.410154 containerd[1518]: time="2025-02-13T15:53:02.410077240Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:53:02.412796 containerd[1518]: time="2025-02-13T15:53:02.412757726Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:02.415905 containerd[1518]: time="2025-02-13T15:53:02.415857618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:02.416880 containerd[1518]: time="2025-02-13T15:53:02.416827798Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.328717866s" Feb 13 15:53:02.416880 containerd[1518]: time="2025-02-13T15:53:02.416872091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:53:02.417473 containerd[1518]: time="2025-02-13T15:53:02.417447580Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:53:02.844819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264142066.mount: Deactivated successfully. 
Feb 13 15:53:02.850641 containerd[1518]: time="2025-02-13T15:53:02.850576857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:02.851369 containerd[1518]: time="2025-02-13T15:53:02.851307276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 15:53:02.852572 containerd[1518]: time="2025-02-13T15:53:02.852518728Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:02.854877 containerd[1518]: time="2025-02-13T15:53:02.854841824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:02.855530 containerd[1518]: time="2025-02-13T15:53:02.855497693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 438.024365ms" Feb 13 15:53:02.855576 containerd[1518]: time="2025-02-13T15:53:02.855528090Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 15:53:02.856024 containerd[1518]: time="2025-02-13T15:53:02.855992682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:53:03.362107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801635668.mount: Deactivated successfully. Feb 13 15:53:04.988432 containerd[1518]: time="2025-02-13T15:53:04.988369407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:04.989226 containerd[1518]: time="2025-02-13T15:53:04.989195486Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 15:53:04.990660 containerd[1518]: time="2025-02-13T15:53:04.990615849Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:04.993509 containerd[1518]: time="2025-02-13T15:53:04.993457257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:04.994709 containerd[1518]: time="2025-02-13T15:53:04.994652368Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.138631904s" Feb 13 15:53:04.994709 containerd[1518]: time="2025-02-13T15:53:04.994686542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 15:53:07.269800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:53:07.269965 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.1M memory peak.
Feb 13 15:53:07.284797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:53:07.309266 systemd[1]: Reload requested from client PID 2140 ('systemctl') (unit session-7.scope)...
Feb 13 15:53:07.309281 systemd[1]: Reloading...
Feb 13 15:53:07.397628 zram_generator::config[2190]: No configuration found.
Feb 13 15:53:07.495849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:53:07.596523 systemd[1]: Reloading finished in 286 ms.
Feb 13 15:53:07.648726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:53:07.652399 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:53:07.654212 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:53:07.654494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:53:07.654528 systemd[1]: kubelet.service: Consumed 128ms CPU time, 83.6M memory peak.
Feb 13 15:53:07.656066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:53:07.798457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:53:07.802352 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:53:07.838062 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:53:07.838062 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:53:07.838062 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
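The two deprecation warnings that point at --config can be silenced by moving those flags into the KubeletConfiguration file (--pod-infra-container-image has no config-file equivalent; as the log says, it is simply going away in favour of CRI-reported sandbox images). A hedged sketch using the published config types, assuming Kubernetes v1.27+ where containerRuntimeEndpoint became part of KubeletConfiguration; the endpoint value and the volume-plugin path are taken from elsewhere in this log:

```go
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Replaces --container-runtime-endpoint (containerd socket assumed).
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Replaces --volume-plugin-dir; path matches the Flexvolume
		// directory the kubelet recreates further down in this log.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // drop this YAML into the file passed via --config
}
```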
Feb 13 15:53:07.838969 kubelet[2234]: I0213 15:53:07.838929 2234 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:53:08.091678 kubelet[2234]: I0213 15:53:08.091550 2234 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:53:08.091678 kubelet[2234]: I0213 15:53:08.091589 2234 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:53:08.091901 kubelet[2234]: I0213 15:53:08.091867 2234 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:53:08.111490 kubelet[2234]: I0213 15:53:08.111465 2234 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:53:08.111628 kubelet[2234]: E0213 15:53:08.111531 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:08.117121 kubelet[2234]: E0213 15:53:08.117088 2234 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:53:08.117173 kubelet[2234]: I0213 15:53:08.117123 2234 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:53:08.123570 kubelet[2234]: I0213 15:53:08.123536 2234 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:53:08.125424 kubelet[2234]: I0213 15:53:08.125365 2234 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:53:08.125739 kubelet[2234]: I0213 15:53:08.125698 2234 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:53:08.126306 kubelet[2234]: I0213 15:53:08.125740 2234 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:53:08.126423 kubelet[2234]: I0213 15:53:08.126311 2234 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:53:08.126423 kubelet[2234]: I0213 15:53:08.126326 2234 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:53:08.126471 kubelet[2234]: I0213 15:53:08.126441 2234 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:53:08.127812 kubelet[2234]: I0213 15:53:08.127785 2234 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:53:08.127812 kubelet[2234]: I0213 15:53:08.127806 2234 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:53:08.127876 kubelet[2234]: I0213 15:53:08.127844 2234 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:53:08.127876 kubelet[2234]: I0213 15:53:08.127860 2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:53:08.131555 kubelet[2234]: W0213 15:53:08.131456 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:08.131555 kubelet[2234]: E0213 15:53:08.131508 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:08.132580 kubelet[2234]: I0213 15:53:08.132520 2234 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:53:08.133949 kubelet[2234]: I0213 15:53:08.133926 2234 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:53:08.134172 kubelet[2234]: W0213 15:53:08.134127 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:08.134225 kubelet[2234]: E0213 15:53:08.134170 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:08.134387 kubelet[2234]: W0213 15:53:08.134364 2234 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:53:08.135143 kubelet[2234]: I0213 15:53:08.135022 2234 server.go:1269] "Started kubelet"
Feb 13 15:53:08.135143 kubelet[2234]: I0213 15:53:08.135082 2234 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:53:08.136016 kubelet[2234]: I0213 15:53:08.135987 2234 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.136409 2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.136786 2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.137005 2234 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.137399 2234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.137761 2234 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.137865 2234 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:53:08.138301 kubelet[2234]: I0213 15:53:08.137911 2234 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:53:08.138301 kubelet[2234]: W0213 15:53:08.138203 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:08.138301 kubelet[2234]: E0213 15:53:08.138243 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:08.138546 kubelet[2234]: E0213 15:53:08.138419 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:08.138546 kubelet[2234]: E0213 15:53:08.138467 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms"
Feb 13 15:53:08.140097 kubelet[2234]: I0213 15:53:08.139830 2234 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:53:08.140097 kubelet[2234]: E0213 15:53:08.138161 2234 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cf764f14d35d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:53:08.134998877 +0000 UTC m=+0.329014393,LastTimestamp:2025-02-13 15:53:08.134998877 +0000 UTC m=+0.329014393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:53:08.140186 kubelet[2234]: I0213 15:53:08.139921 2234 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:53:08.141170 kubelet[2234]: E0213 15:53:08.141144 2234 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:53:08.141456 kubelet[2234]: I0213 15:53:08.141428 2234 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:53:08.155586 kubelet[2234]: I0213 15:53:08.155554 2234 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:53:08.155586 kubelet[2234]: I0213 15:53:08.155572 2234 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:53:08.155586 kubelet[2234]: I0213 15:53:08.155588 2234 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:53:08.156313 kubelet[2234]: I0213 15:53:08.156248 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:53:08.157805 kubelet[2234]: I0213 15:53:08.157772 2234 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:53:08.157882 kubelet[2234]: I0213 15:53:08.157810 2234 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:53:08.157882 kubelet[2234]: I0213 15:53:08.157832 2234 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:53:08.157882 kubelet[2234]: E0213 15:53:08.157873 2234 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:53:08.239256 kubelet[2234]: E0213 15:53:08.239220 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:08.258638 kubelet[2234]: E0213 15:53:08.258572 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:53:08.339291 kubelet[2234]: E0213 15:53:08.339241 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms"
Feb 13 15:53:08.339344 kubelet[2234]: E0213 15:53:08.339293 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:08.439768 kubelet[2234]: E0213 15:53:08.439650 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:08.458877 kubelet[2234]: E0213 15:53:08.458831 2234 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:53:08.517883 kubelet[2234]: I0213 15:53:08.517847 2234 policy_none.go:49] "None policy: Start"
Feb 13 15:53:08.518141 kubelet[2234]: W0213 15:53:08.518087 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:08.518235 kubelet[2234]: E0213 15:53:08.518146 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:08.518814 kubelet[2234]: I0213 15:53:08.518790 2234 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:53:08.518872 kubelet[2234]: I0213 15:53:08.518821 2234 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:53:08.525959 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:53:08.539745 kubelet[2234]: E0213 15:53:08.539720 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:08.539920 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:53:08.542854 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:53:08.555498 kubelet[2234]: I0213 15:53:08.555470 2234 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:53:08.555881 kubelet[2234]: I0213 15:53:08.555705 2234 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:53:08.555881 kubelet[2234]: I0213 15:53:08.555723 2234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:53:08.555946 kubelet[2234]: I0213 15:53:08.555923 2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:53:08.557018 kubelet[2234]: E0213 15:53:08.556980 2234 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:53:08.656879 kubelet[2234]: I0213 15:53:08.656841 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:53:08.657234 kubelet[2234]: E0213 15:53:08.657200 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost"
Feb 13 15:53:08.739921 kubelet[2234]: E0213 15:53:08.739787 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms"
Feb 13 15:53:08.858256 kubelet[2234]: I0213 15:53:08.858226 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:53:08.858634 kubelet[2234]: E0213 15:53:08.858504 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost"
Feb 13 15:53:08.867966 systemd[1]: Created slice kubepods-burstable-pod6e08f5d58ab162dd4d6bb053ed8374e4.slice - libcontainer container kubepods-burstable-pod6e08f5d58ab162dd4d6bb053ed8374e4.slice.
Feb 13 15:53:08.880613 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice.
Feb 13 15:53:08.883842 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice.
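Note the interval on the repeated "Failed to ensure lease exists, will retry" entries: 200ms, then 400ms, then 800ms here, with 1.6s appearing further down. That is a plain doubling backoff while the API server is unreachable; a minimal sketch of the pattern (the ceiling value is an assumption for illustration, not taken from the kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond      // first retry, as logged
	const maxInterval = 7 * time.Second     // hypothetical ceiling for the sketch
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retrying lease in %v\n", attempt, interval)
		interval *= 2 // doubles each failure: 200ms, 400ms, 800ms, 1.6s, ...
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```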
Feb 13 15:53:08.941467 kubelet[2234]: I0213 15:53:08.941426 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:08.941467 kubelet[2234]: I0213 15:53:08.941464 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:08.941645 kubelet[2234]: I0213 15:53:08.941487 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:08.941645 kubelet[2234]: I0213 15:53:08.941508 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:53:08.941645 kubelet[2234]: I0213 15:53:08.941532 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e08f5d58ab162dd4d6bb053ed8374e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e08f5d58ab162dd4d6bb053ed8374e4\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:53:08.941645 kubelet[2234]: I0213 15:53:08.941554 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:08.941645 kubelet[2234]: I0213 15:53:08.941578 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:08.941819 kubelet[2234]: I0213 15:53:08.941616 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e08f5d58ab162dd4d6bb053ed8374e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e08f5d58ab162dd4d6bb053ed8374e4\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:53:08.941819 kubelet[2234]: I0213 15:53:08.941637 2234 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e08f5d58ab162dd4d6bb053ed8374e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e08f5d58ab162dd4d6bb053ed8374e4\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:53:08.956991 kubelet[2234]: W0213 15:53:08.956933 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:08.957046 kubelet[2234]: E0213 15:53:08.956998 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:09.180073 kubelet[2234]: E0213 15:53:09.179965 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:09.180460 containerd[1518]: time="2025-02-13T15:53:09.180432344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e08f5d58ab162dd4d6bb053ed8374e4,Namespace:kube-system,Attempt:0,}"
Feb 13 15:53:09.182654 kubelet[2234]: E0213 15:53:09.182593 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:09.182916 containerd[1518]: time="2025-02-13T15:53:09.182846731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}"
Feb 13 15:53:09.186388 kubelet[2234]: E0213 15:53:09.186353 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:09.186665 containerd[1518]: time="2025-02-13T15:53:09.186633732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}"
Feb 13 15:53:09.260318 kubelet[2234]: I0213 15:53:09.260273 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:53:09.260658 kubelet[2234]: E0213 15:53:09.260619 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost"
Feb 13 15:53:09.362431 kubelet[2234]: W0213 15:53:09.362366 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:09.362431 kubelet[2234]: E0213 15:53:09.362424 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:09.540660 kubelet[2234]: E0213 15:53:09.540562 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s"
Feb 13 15:53:09.639554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778366532.mount: Deactivated successfully.
Feb 13 15:53:09.646840 containerd[1518]: time="2025-02-13T15:53:09.646805213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:53:09.650033 containerd[1518]: time="2025-02-13T15:53:09.649974035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:53:09.651083 containerd[1518]: time="2025-02-13T15:53:09.651053619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:53:09.653094 containerd[1518]: time="2025-02-13T15:53:09.653055873Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:53:09.654029 containerd[1518]: time="2025-02-13T15:53:09.653985146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:53:09.654990 containerd[1518]: time="2025-02-13T15:53:09.654961025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:53:09.655919 containerd[1518]: time="2025-02-13T15:53:09.655875330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:53:09.657052 containerd[1518]: time="2025-02-13T15:53:09.657014846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:53:09.657878 containerd[1518]: time="2025-02-13T15:53:09.657840514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.243822ms"
Feb 13 15:53:09.660351 containerd[1518]: time="2025-02-13T15:53:09.660316236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.401408ms"
Feb 13 15:53:09.663553 containerd[1518]: time="2025-02-13T15:53:09.663529702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.835196ms"
Feb 13 15:53:09.721467 kubelet[2234]: W0213 15:53:09.721395 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:09.721467 kubelet[2234]: E0213 15:53:09.721466 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:09.800076 kubelet[2234]: W0213 15:53:09.799874 2234 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Feb 13 15:53:09.800076 kubelet[2234]: E0213 15:53:09.799927 2234 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:09.800211 containerd[1518]: time="2025-02-13T15:53:09.799942165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:09.800211 containerd[1518]: time="2025-02-13T15:53:09.800002067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:09.800211 containerd[1518]: time="2025-02-13T15:53:09.800016685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:09.800700 containerd[1518]: time="2025-02-13T15:53:09.800097857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:09.801021 containerd[1518]: time="2025-02-13T15:53:09.800823488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:09.801021 containerd[1518]: time="2025-02-13T15:53:09.800877719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:09.801021 containerd[1518]: time="2025-02-13T15:53:09.800889271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:09.801021 containerd[1518]: time="2025-02-13T15:53:09.800962107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:09.802588 containerd[1518]: time="2025-02-13T15:53:09.799395470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:09.802864 containerd[1518]: time="2025-02-13T15:53:09.802614205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:09.802864 containerd[1518]: time="2025-02-13T15:53:09.802631498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:09.802864 containerd[1518]: time="2025-02-13T15:53:09.802709584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:09.824737 systemd[1]: Started cri-containerd-6c537247cc45704cb8425d7f2e9b3fb691dd8c2290761311f9062242b37204d3.scope - libcontainer container 6c537247cc45704cb8425d7f2e9b3fb691dd8c2290761311f9062242b37204d3.
Feb 13 15:53:09.828712 systemd[1]: Started cri-containerd-79b21ae4a9afd3a21407f63c70ec5a7dc4888fbd07369546eb25ae8b5d33d694.scope - libcontainer container 79b21ae4a9afd3a21407f63c70ec5a7dc4888fbd07369546eb25ae8b5d33d694.
Feb 13 15:53:09.830472 systemd[1]: Started cri-containerd-e65984ffe8b721b284dbe1f78f80a2e62342b9ed159c9373b74ff74e9f49574b.scope - libcontainer container e65984ffe8b721b284dbe1f78f80a2e62342b9ed159c9373b74ff74e9f49574b.
Feb 13 15:53:09.863313 containerd[1518]: time="2025-02-13T15:53:09.863165849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c537247cc45704cb8425d7f2e9b3fb691dd8c2290761311f9062242b37204d3\""
Feb 13 15:53:09.864263 kubelet[2234]: E0213 15:53:09.864156 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:09.867973 containerd[1518]: time="2025-02-13T15:53:09.867808885Z" level=info msg="CreateContainer within sandbox \"6c537247cc45704cb8425d7f2e9b3fb691dd8c2290761311f9062242b37204d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:53:09.871780 containerd[1518]: time="2025-02-13T15:53:09.871720399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e08f5d58ab162dd4d6bb053ed8374e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e65984ffe8b721b284dbe1f78f80a2e62342b9ed159c9373b74ff74e9f49574b\""
Feb 13 15:53:09.872533 kubelet[2234]: E0213 15:53:09.872514 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:09.874231 containerd[1518]: time="2025-02-13T15:53:09.874159072Z" level=info msg="CreateContainer within sandbox \"e65984ffe8b721b284dbe1f78f80a2e62342b9ed159c9373b74ff74e9f49574b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:53:09.874380 containerd[1518]: time="2025-02-13T15:53:09.874360119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b21ae4a9afd3a21407f63c70ec5a7dc4888fbd07369546eb25ae8b5d33d694\""
Feb 13 15:53:09.875040 kubelet[2234]: E0213 15:53:09.874969 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:09.876678 containerd[1518]: time="2025-02-13T15:53:09.876643590Z" level=info msg="CreateContainer within sandbox \"79b21ae4a9afd3a21407f63c70ec5a7dc4888fbd07369546eb25ae8b5d33d694\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:53:10.061820 kubelet[2234]: I0213 15:53:10.061719 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:53:10.062108 kubelet[2234]: E0213 15:53:10.062067 2234 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost"
Feb 13 15:53:10.221112 containerd[1518]: time="2025-02-13T15:53:10.221061904Z" level=info msg="CreateContainer within sandbox \"79b21ae4a9afd3a21407f63c70ec5a7dc4888fbd07369546eb25ae8b5d33d694\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e4a574f7575476eddbdc7c46464d2ad32fce902d827cfd6417dc8869014b32e\""
Feb 13 15:53:10.222086 containerd[1518]: time="2025-02-13T15:53:10.221804527Z" level=info msg="StartContainer for \"6e4a574f7575476eddbdc7c46464d2ad32fce902d827cfd6417dc8869014b32e\""
Feb 13 15:53:10.227786 containerd[1518]: time="2025-02-13T15:53:10.227747209Z" level=info msg="CreateContainer within sandbox \"6c537247cc45704cb8425d7f2e9b3fb691dd8c2290761311f9062242b37204d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bcd6ad4a5b9786a1f14407a6bd07fda163b51a7558d5e4c6ddc01dd14956818d\""
Feb 13 15:53:10.228536 containerd[1518]: time="2025-02-13T15:53:10.228412898Z" level=info msg="StartContainer for \"bcd6ad4a5b9786a1f14407a6bd07fda163b51a7558d5e4c6ddc01dd14956818d\""
Feb 13 15:53:10.232250 containerd[1518]: time="2025-02-13T15:53:10.232218253Z" level=info msg="CreateContainer within sandbox \"e65984ffe8b721b284dbe1f78f80a2e62342b9ed159c9373b74ff74e9f49574b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f0a4434ac10c706a17cdb36b97229be6309fee5513479cf6fba421436820ae26\""
Feb 13 15:53:10.232844 containerd[1518]: time="2025-02-13T15:53:10.232627159Z" level=info msg="StartContainer for \"f0a4434ac10c706a17cdb36b97229be6309fee5513479cf6fba421436820ae26\""
Feb 13 15:53:10.252720 systemd[1]: Started cri-containerd-6e4a574f7575476eddbdc7c46464d2ad32fce902d827cfd6417dc8869014b32e.scope - libcontainer container 6e4a574f7575476eddbdc7c46464d2ad32fce902d827cfd6417dc8869014b32e.
Feb 13 15:53:10.258811 systemd[1]: Started cri-containerd-bcd6ad4a5b9786a1f14407a6bd07fda163b51a7558d5e4c6ddc01dd14956818d.scope - libcontainer container bcd6ad4a5b9786a1f14407a6bd07fda163b51a7558d5e4c6ddc01dd14956818d.
Feb 13 15:53:10.261517 kubelet[2234]: E0213 15:53:10.261462 2234 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:53:10.261820 systemd[1]: Started cri-containerd-f0a4434ac10c706a17cdb36b97229be6309fee5513479cf6fba421436820ae26.scope - libcontainer container f0a4434ac10c706a17cdb36b97229be6309fee5513479cf6fba421436820ae26.
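The lifecycle traced across the entries above is the standard CRI sequence for a static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it, with systemd scopes appearing for each id along the way. A condensed sketch of those three calls against the CRI gRPC API, assuming the containerd socket path and an image tag inferred from the kubelet version logged above; real configs carry far more than shown:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Pod-level sandbox first (metadata copied from the kube-scheduler entry).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "04cca2c455deeb5da380812dcab224d8",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Then the container inside that sandbox.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			// Tag inferred from kubeletVersion="v1.31.0"; an assumption.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Finally, start it ("StartContainer for ... returns successfully").
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```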
Feb 13 15:53:10.304062 containerd[1518]: time="2025-02-13T15:53:10.303933789Z" level=info msg="StartContainer for \"6e4a574f7575476eddbdc7c46464d2ad32fce902d827cfd6417dc8869014b32e\" returns successfully"
Feb 13 15:53:10.309126 containerd[1518]: time="2025-02-13T15:53:10.309088935Z" level=info msg="StartContainer for \"bcd6ad4a5b9786a1f14407a6bd07fda163b51a7558d5e4c6ddc01dd14956818d\" returns successfully"
Feb 13 15:53:10.313879 containerd[1518]: time="2025-02-13T15:53:10.313780121Z" level=info msg="StartContainer for \"f0a4434ac10c706a17cdb36b97229be6309fee5513479cf6fba421436820ae26\" returns successfully"
Feb 13 15:53:11.168801 kubelet[2234]: E0213 15:53:11.168764 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:11.169989 kubelet[2234]: E0213 15:53:11.169966 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:11.171953 kubelet[2234]: E0213 15:53:11.171932 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:11.292159 kubelet[2234]: E0213 15:53:11.291377 2234 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:53:11.428561 kubelet[2234]: E0213 15:53:11.428382 2234 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823cf764f14d35d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:53:08.134998877 +0000 UTC m=+0.329014393,LastTimestamp:2025-02-13 15:53:08.134998877 +0000 UTC m=+0.329014393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:53:11.486021 kubelet[2234]: E0213 15:53:11.485790 2234 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823cf764f726ba4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:53:08.141132708 +0000 UTC m=+0.335148224,LastTimestamp:2025-02-13 15:53:08.141132708 +0000 UTC m=+0.335148224,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:53:11.538213 kubelet[2234]: E0213 15:53:11.538080 2234 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823cf765045a388 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:53:08.154975112 +0000 UTC m=+0.348990628,LastTimestamp:2025-02-13 15:53:08.154975112 +0000 UTC m=+0.348990628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:53:11.591048 kubelet[2234]: E0213 15:53:11.590979 2234 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823cf765045bd5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:53:08.154981725 +0000 UTC m=+0.348997241,LastTimestamp:2025-02-13 15:53:08.154981725 +0000 UTC m=+0.348997241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:53:11.626959 kubelet[2234]: E0213 15:53:11.626922 2234 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Feb 13 15:53:11.643412 kubelet[2234]: E0213 15:53:11.643307 2234 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823cf765045cab5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:53:08.154985141 +0000 UTC m=+0.349000657,LastTimestamp:2025-02-13 15:53:08.154985141 +0000 UTC m=+0.349000657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:53:11.663427 kubelet[2234]: I0213 15:53:11.663407 2234 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:53:11.669838 kubelet[2234]: I0213 15:53:11.669795 2234 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Feb 13 15:53:11.669838 kubelet[2234]: E0213 15:53:11.669818 2234 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Feb 13 15:53:11.675893 kubelet[2234]: E0213 15:53:11.675855 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:11.776326 kubelet[2234]: E0213 15:53:11.776191 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:11.877006 kubelet[2234]: E0213 15:53:11.876954 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:11.977577 kubelet[2234]: E0213 15:53:11.977525 2234 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:53:12.133861 kubelet[2234]: I0213 15:53:12.133716 2234 apiserver.go:52] "Watching apiserver"
Feb 13 15:53:12.138933 kubelet[2234]: I0213 15:53:12.138879 2234 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:53:12.176943 kubelet[2234]: E0213 15:53:12.176901 2234 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:53:12.177333 kubelet[2234]: E0213 15:53:12.177009 2234 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:12.177333 kubelet[2234]: E0213 15:53:12.177018 2234 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:53:12.177333 kubelet[2234]: E0213 15:53:12.177062 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:12.177333 kubelet[2234]: E0213 15:53:12.177181 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:12.177333 kubelet[2234]: E0213 15:53:12.177182 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:13.178096 kubelet[2234]: E0213 15:53:13.178058 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:13.179221 kubelet[2234]: E0213 15:53:13.179178 2234 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:53:13.737017 systemd[1]: Reload requested from client PID 2513 ('systemctl') (unit session-7.scope)...
Feb 13 15:53:13.737036 systemd[1]: Reloading...
Feb 13 15:53:13.824631 zram_generator::config[2560]: No configuration found.
Feb 13 15:53:13.934252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:53:14.048773 systemd[1]: Reloading finished in 311 ms.
Feb 13 15:53:14.071693 kubelet[2234]: I0213 15:53:14.071651 2234 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:53:14.071923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:53:14.096103 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:53:14.096450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:53:14.096515 systemd[1]: kubelet.service: Consumed 800ms CPU time, 121.2M memory peak.
Feb 13 15:53:14.101958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
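The recurring dns.go:153 warning above is the kubelet clamping pod resolv.conf contents: the classic resolver honours only three nameserver entries, so everything past the first three is dropped (here 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive). A small standalone check in the same spirit:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```

The fix on a real node is to trim the upstream resolv.conf (or the one passed via the kubelet's resolvConf setting) to three entries.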
Feb 13 15:53:14.251797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:53:14.257501 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:53:14.297904 kubelet[2602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:53:14.299110 kubelet[2602]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:53:14.299110 kubelet[2602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:53:14.299110 kubelet[2602]: I0213 15:53:14.298341 2602 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:53:14.305380 kubelet[2602]: I0213 15:53:14.305335 2602 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:53:14.305380 kubelet[2602]: I0213 15:53:14.305365 2602 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:53:14.305697 kubelet[2602]: I0213 15:53:14.305671 2602 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:53:14.306965 kubelet[2602]: I0213 15:53:14.306942 2602 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:53:14.491358 kubelet[2602]: I0213 15:53:14.491294 2602 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:53:14.494258 kubelet[2602]: E0213 15:53:14.494219 2602 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:53:14.494258 kubelet[2602]: I0213 15:53:14.494256 2602 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:53:14.499529 kubelet[2602]: I0213 15:53:14.499508 2602 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:53:14.499657 kubelet[2602]: I0213 15:53:14.499639 2602 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:53:14.499807 kubelet[2602]: I0213 15:53:14.499770 2602 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:53:14.499971 kubelet[2602]: I0213 15:53:14.499798 2602 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:53:14.500051 kubelet[2602]: I0213 15:53:14.499971 2602 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:53:14.500051 kubelet[2602]: I0213 15:53:14.499981 2602 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:53:14.500051 kubelet[2602]: I0213 15:53:14.500009 2602 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:53:14.500120 kubelet[2602]: I0213 15:53:14.500113 2602 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:53:14.500150 kubelet[2602]: I0213 15:53:14.500125 2602 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:53:14.500174 kubelet[2602]: I0213 15:53:14.500155 2602 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:53:14.500174 kubelet[2602]: I0213 15:53:14.500170 2602 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:53:14.501006 kubelet[2602]: I0213 15:53:14.500923 2602 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:53:14.502215 kubelet[2602]: I0213 15:53:14.502153 2602 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:53:14.502978 kubelet[2602]: I0213 15:53:14.502963 2602 server.go:1269] "Started kubelet"
Feb 13 15:53:14.506614 kubelet[2602]: I0213 15:53:14.504215 2602 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:53:14.506614 kubelet[2602]: I0213 15:53:14.504261 2602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:53:14.506614 kubelet[2602]: I0213 15:53:14.505870 2602 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:53:14.506614 kubelet[2602]: I0213 15:53:14.506005 2602 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:53:14.507313 kubelet[2602]: I0213 15:53:14.507290 2602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:53:14.509751 kubelet[2602]: E0213 15:53:14.509734 2602 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:53:14.509972 kubelet[2602]: I0213 15:53:14.509908 2602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:53:14.511342 kubelet[2602]: I0213 15:53:14.511300 2602 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:53:14.511464 kubelet[2602]: I0213 15:53:14.511444 2602 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:53:14.511923 kubelet[2602]: I0213 15:53:14.511904 2602 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:53:14.512140 kubelet[2602]: I0213 15:53:14.512116 2602 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:53:14.512255 kubelet[2602]: I0213 15:53:14.512230 2602 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:53:14.514763 kubelet[2602]: I0213 15:53:14.514732 2602 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:53:14.519024 kubelet[2602]: I0213 15:53:14.518974 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:53:14.520699 kubelet[2602]: I0213 15:53:14.520666 2602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:53:14.520749 kubelet[2602]: I0213 15:53:14.520701 2602 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:53:14.520749 kubelet[2602]: I0213 15:53:14.520726 2602 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:53:14.520797 kubelet[2602]: E0213 15:53:14.520784 2602 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:53:14.546795 kubelet[2602]: I0213 15:53:14.546767 2602 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:53:14.546795 kubelet[2602]: I0213 15:53:14.546785 2602 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:53:14.546795 kubelet[2602]: I0213 15:53:14.546805 2602 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:53:14.547020 kubelet[2602]: I0213 15:53:14.546985 2602 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:53:14.547020 kubelet[2602]: I0213 15:53:14.547001 2602 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:53:14.547020 kubelet[2602]: I0213 15:53:14.547021 2602 policy_none.go:49] "None policy: Start"
Feb 13 15:53:14.547697 kubelet[2602]: I0213 15:53:14.547652 2602 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:53:14.547697 kubelet[2602]: I0213 15:53:14.547677 2602 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:53:14.547871 kubelet[2602]: I0213 15:53:14.547843 2602 state_mem.go:75] "Updated machine memory state"
Feb 13 15:53:14.552500 kubelet[2602]: I0213 15:53:14.552423 2602 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:53:14.553180 kubelet[2602]: I0213 15:53:14.552590 2602 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:53:14.553180 kubelet[2602]: I0213 15:53:14.552685 2602 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:53:14.553233 kubelet[2602]: I0213 15:53:14.553223 2602 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:53:14.628320 kubelet[2602]: E0213 15:53:14.628268 2602 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:53:14.628320 kubelet[2602]: E0213 15:53:14.628285 2602 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:53:14.658365 kubelet[2602]: I0213 15:53:14.658271 2602 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:53:14.664230 kubelet[2602]: I0213 15:53:14.664210 2602 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Feb 13 15:53:14.664324 kubelet[2602]: I0213 15:53:14.664261 2602 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Feb 13 15:53:14.713058 kubelet[2602]: I0213 15:53:14.713022 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:53:14.713058 kubelet[2602]: I0213 15:53:14.713053 2602 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:53:14.713209 kubelet[2602]: I0213 15:53:14.713074 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e08f5d58ab162dd4d6bb053ed8374e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e08f5d58ab162dd4d6bb053ed8374e4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:53:14.713209 kubelet[2602]: I0213 15:53:14.713098 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e08f5d58ab162dd4d6bb053ed8374e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e08f5d58ab162dd4d6bb053ed8374e4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:53:14.713209 kubelet[2602]: I0213 15:53:14.713114 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e08f5d58ab162dd4d6bb053ed8374e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e08f5d58ab162dd4d6bb053ed8374e4\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:53:14.713209 kubelet[2602]: I0213 15:53:14.713135 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:53:14.713209 kubelet[2602]: I0213 15:53:14.713152 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:53:14.713315 kubelet[2602]: I0213 15:53:14.713168 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:53:14.713315 kubelet[2602]: I0213 15:53:14.713187 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:53:14.748518 sudo[2639]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:53:14.748895 sudo[2639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:53:14.929149 kubelet[2602]: E0213 15:53:14.929016 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:14.929149 kubelet[2602]: E0213 15:53:14.929029 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:14.929149 kubelet[2602]: E0213 15:53:14.929108 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:15.202703 sudo[2639]: pam_unix(sudo:session): session closed for user root Feb 13 15:53:15.500769 kubelet[2602]: I0213 15:53:15.500663 2602 apiserver.go:52] "Watching apiserver" Feb 13 15:53:15.512477 kubelet[2602]: I0213 15:53:15.512461 2602 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:53:15.530961 kubelet[2602]: E0213 15:53:15.530921 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:15.531142 kubelet[2602]: E0213 15:53:15.531120 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:15.531550 kubelet[2602]: E0213 15:53:15.531526 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:15.545696 kubelet[2602]: I0213 15:53:15.545463 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.545450251 podStartE2EDuration="2.545450251s" podCreationTimestamp="2025-02-13 15:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:15.545365592 +0000 UTC m=+1.283837887" watchObservedRunningTime="2025-02-13 15:53:15.545450251 +0000 UTC m=+1.283922545" Feb 13 15:53:15.557815 kubelet[2602]: I0213 15:53:15.557763 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.557743772 podStartE2EDuration="1.557743772s" podCreationTimestamp="2025-02-13 15:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:15.551028641 +0000 UTC m=+1.289500935" watchObservedRunningTime="2025-02-13 15:53:15.557743772 +0000 UTC m=+1.296216066" Feb 13 15:53:15.563628 kubelet[2602]: I0213 15:53:15.563556 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5635426040000002 podStartE2EDuration="2.563542604s" podCreationTimestamp="2025-02-13 15:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:15.557921865 +0000 UTC m=+1.296394160" watchObservedRunningTime="2025-02-13 15:53:15.563542604 +0000 UTC m=+1.302014898" Feb 13 15:53:16.438504 sudo[1704]: pam_unix(sudo:session): session closed for user root Feb 13 15:53:16.440131 sshd[1703]: Connection closed by 10.0.0.1 port 39136 Feb 13 15:53:16.440537 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:16.444763 systemd[1]: 
sshd@6-10.0.0.150:22-10.0.0.1:39136.service: Deactivated successfully. Feb 13 15:53:16.447538 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:53:16.447894 systemd[1]: session-7.scope: Consumed 4.146s CPU time, 252.5M memory peak. Feb 13 15:53:16.449359 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:53:16.450303 systemd-logind[1509]: Removed session 7. Feb 13 15:53:16.531657 kubelet[2602]: E0213 15:53:16.531621 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:17.532794 kubelet[2602]: E0213 15:53:17.532752 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:18.242652 kubelet[2602]: I0213 15:53:18.242621 2602 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:53:18.242972 containerd[1518]: time="2025-02-13T15:53:18.242936306Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:53:18.243299 kubelet[2602]: I0213 15:53:18.243167 2602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:53:19.073730 systemd[1]: Created slice kubepods-besteffort-pod187ae9f4_d311_4f7c_b320_0dbdb1b436e6.slice - libcontainer container kubepods-besteffort-pod187ae9f4_d311_4f7c_b320_0dbdb1b436e6.slice. Feb 13 15:53:19.095843 systemd[1]: Created slice kubepods-burstable-podaa885608_4f6a_4364_9f96_0d3b14ef9f90.slice - libcontainer container kubepods-burstable-podaa885608_4f6a_4364_9f96_0d3b14ef9f90.slice. 
Feb 13 15:53:19.139511 kubelet[2602]: I0213 15:53:19.139470 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-run\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.139511 kubelet[2602]: I0213 15:53:19.139508 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hubble-tls\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.139977 kubelet[2602]: I0213 15:53:19.139544 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/187ae9f4-d311-4f7c-b320-0dbdb1b436e6-kube-proxy\") pod \"kube-proxy-hpvgg\" (UID: \"187ae9f4-d311-4f7c-b320-0dbdb1b436e6\") " pod="kube-system/kube-proxy-hpvgg" Feb 13 15:53:19.139977 kubelet[2602]: I0213 15:53:19.139613 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67j5k\" (UniqueName: \"kubernetes.io/projected/187ae9f4-d311-4f7c-b320-0dbdb1b436e6-kube-api-access-67j5k\") pod \"kube-proxy-hpvgg\" (UID: \"187ae9f4-d311-4f7c-b320-0dbdb1b436e6\") " pod="kube-system/kube-proxy-hpvgg" Feb 13 15:53:19.139977 kubelet[2602]: I0213 15:53:19.139632 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-kernel\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.139977 kubelet[2602]: I0213 15:53:19.139644 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/187ae9f4-d311-4f7c-b320-0dbdb1b436e6-xtables-lock\") pod \"kube-proxy-hpvgg\" (UID: \"187ae9f4-d311-4f7c-b320-0dbdb1b436e6\") " pod="kube-system/kube-proxy-hpvgg" Feb 13 15:53:19.139977 kubelet[2602]: I0213 15:53:19.139656 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-bpf-maps\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140123 kubelet[2602]: I0213 15:53:19.139668 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-cgroup\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140123 kubelet[2602]: I0213 15:53:19.139682 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa885608-4f6a-4364-9f96-0d3b14ef9f90-clustermesh-secrets\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140123 kubelet[2602]: I0213 15:53:19.139695 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-lib-modules\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140123 kubelet[2602]: I0213 15:53:19.139707 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cni-path\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140123 kubelet[2602]: I0213 15:53:19.139743 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-etc-cni-netd\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140123 kubelet[2602]: I0213 15:53:19.139773 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-xtables-lock\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140253 kubelet[2602]: I0213 15:53:19.139812 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vctq\" (UniqueName: \"kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-kube-api-access-5vctq\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140253 kubelet[2602]: I0213 15:53:19.139829 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/187ae9f4-d311-4f7c-b320-0dbdb1b436e6-lib-modules\") pod \"kube-proxy-hpvgg\" (UID: \"187ae9f4-d311-4f7c-b320-0dbdb1b436e6\") " pod="kube-system/kube-proxy-hpvgg" Feb 13 15:53:19.140253 kubelet[2602]: I0213 15:53:19.139850 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hostproc\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140253 kubelet[2602]: I0213 15:53:19.139888 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-config-path\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.140253 kubelet[2602]: I0213 15:53:19.139902 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-net\") pod \"cilium-gzzlx\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " pod="kube-system/cilium-gzzlx" Feb 13 15:53:19.281284 systemd[1]: Created slice kubepods-besteffort-pod334fa82c_e2cd_466b_b195_288c1a3f64b2.slice - libcontainer container kubepods-besteffort-pod334fa82c_e2cd_466b_b195_288c1a3f64b2.slice. 
Feb 13 15:53:19.340958 kubelet[2602]: I0213 15:53:19.340842 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvjhk\" (UniqueName: \"kubernetes.io/projected/334fa82c-e2cd-466b-b195-288c1a3f64b2-kube-api-access-rvjhk\") pod \"cilium-operator-5d85765b45-l77zm\" (UID: \"334fa82c-e2cd-466b-b195-288c1a3f64b2\") " pod="kube-system/cilium-operator-5d85765b45-l77zm" Feb 13 15:53:19.340958 kubelet[2602]: I0213 15:53:19.340884 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/334fa82c-e2cd-466b-b195-288c1a3f64b2-cilium-config-path\") pod \"cilium-operator-5d85765b45-l77zm\" (UID: \"334fa82c-e2cd-466b-b195-288c1a3f64b2\") " pod="kube-system/cilium-operator-5d85765b45-l77zm" Feb 13 15:53:19.387852 kubelet[2602]: E0213 15:53:19.387826 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:19.388309 containerd[1518]: time="2025-02-13T15:53:19.388265430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpvgg,Uid:187ae9f4-d311-4f7c-b320-0dbdb1b436e6,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:19.399942 kubelet[2602]: E0213 15:53:19.399908 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:19.400343 containerd[1518]: time="2025-02-13T15:53:19.400300335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzzlx,Uid:aa885608-4f6a-4364-9f96-0d3b14ef9f90,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:19.532867 containerd[1518]: time="2025-02-13T15:53:19.532770558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:19.533014 containerd[1518]: time="2025-02-13T15:53:19.532885058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:19.533014 containerd[1518]: time="2025-02-13T15:53:19.532909655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:19.533856 containerd[1518]: time="2025-02-13T15:53:19.533759818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:19.534965 containerd[1518]: time="2025-02-13T15:53:19.534856905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:19.534965 containerd[1518]: time="2025-02-13T15:53:19.534917702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:19.535124 containerd[1518]: time="2025-02-13T15:53:19.534946046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:19.535124 containerd[1518]: time="2025-02-13T15:53:19.535033434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:19.559741 systemd[1]: Started cri-containerd-643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273.scope - libcontainer container 643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273. Feb 13 15:53:19.563295 systemd[1]: Started cri-containerd-360915bf8dd4317e6a0b4159ae37aea2251c94059eadccadfa34a149725a0557.scope - libcontainer container 360915bf8dd4317e6a0b4159ae37aea2251c94059eadccadfa34a149725a0557. Feb 13 15:53:19.584483 containerd[1518]: time="2025-02-13T15:53:19.584412459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gzzlx,Uid:aa885608-4f6a-4364-9f96-0d3b14ef9f90,Namespace:kube-system,Attempt:0,} returns sandbox id \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\"" Feb 13 15:53:19.584870 kubelet[2602]: E0213 15:53:19.584849 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:19.585500 kubelet[2602]: E0213 15:53:19.585296 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:19.586344 containerd[1518]: time="2025-02-13T15:53:19.586308150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l77zm,Uid:334fa82c-e2cd-466b-b195-288c1a3f64b2,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:19.586530 containerd[1518]: time="2025-02-13T15:53:19.586476363Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:53:19.590435 containerd[1518]: time="2025-02-13T15:53:19.590136303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpvgg,Uid:187ae9f4-d311-4f7c-b320-0dbdb1b436e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"360915bf8dd4317e6a0b4159ae37aea2251c94059eadccadfa34a149725a0557\"" Feb 13 15:53:19.590751 kubelet[2602]: E0213 15:53:19.590636 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:19.593198 containerd[1518]: time="2025-02-13T15:53:19.592553256Z" level=info msg="CreateContainer within sandbox \"360915bf8dd4317e6a0b4159ae37aea2251c94059eadccadfa34a149725a0557\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:53:19.615572 containerd[1518]: time="2025-02-13T15:53:19.615520570Z" level=info msg="CreateContainer within sandbox \"360915bf8dd4317e6a0b4159ae37aea2251c94059eadccadfa34a149725a0557\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"98934f3e636d073c19957088331f7363270b61cc80d7900405fb7364ffc6f0b5\"" Feb 13 15:53:19.616491 containerd[1518]: time="2025-02-13T15:53:19.616465706Z" level=info msg="StartContainer for \"98934f3e636d073c19957088331f7363270b61cc80d7900405fb7364ffc6f0b5\"" Feb 13 15:53:19.618427 containerd[1518]: time="2025-02-13T15:53:19.618333863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:19.618427 containerd[1518]: time="2025-02-13T15:53:19.618405902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:19.618520 containerd[1518]: time="2025-02-13T15:53:19.618425500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:19.618553 containerd[1518]: time="2025-02-13T15:53:19.618514371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:19.639731 systemd[1]: Started cri-containerd-c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db.scope - libcontainer container c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db. Feb 13 15:53:19.642944 systemd[1]: Started cri-containerd-98934f3e636d073c19957088331f7363270b61cc80d7900405fb7364ffc6f0b5.scope - libcontainer container 98934f3e636d073c19957088331f7363270b61cc80d7900405fb7364ffc6f0b5. Feb 13 15:53:19.681768 containerd[1518]: time="2025-02-13T15:53:19.681689764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-l77zm,Uid:334fa82c-e2cd-466b-b195-288c1a3f64b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db\"" Feb 13 15:53:19.681768 containerd[1518]: time="2025-02-13T15:53:19.681749389Z" level=info msg="StartContainer for \"98934f3e636d073c19957088331f7363270b61cc80d7900405fb7364ffc6f0b5\" returns successfully" Feb 13 15:53:19.682832 kubelet[2602]: E0213 15:53:19.682794 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:20.539519 kubelet[2602]: E0213 15:53:20.539490 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:20.548611 kubelet[2602]: I0213 15:53:20.548546 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hpvgg" podStartSLOduration=1.548526457 podStartE2EDuration="1.548526457s" podCreationTimestamp="2025-02-13 15:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:20.548336433 +0000 UTC m=+6.286808737" watchObservedRunningTime="2025-02-13 15:53:20.548526457 +0000 UTC m=+6.286998751" Feb 13 15:53:21.483853 kubelet[2602]: E0213 15:53:21.483822 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:21.540917 kubelet[2602]: E0213 15:53:21.540872 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:23.928721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730851294.mount: Deactivated successfully. 
Feb 13 15:53:24.447154 kubelet[2602]: E0213 15:53:24.447114 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:24.544749 kubelet[2602]: E0213 15:53:24.544690 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:27.482745 kubelet[2602]: E0213 15:53:27.482709 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:27.547995 kubelet[2602]: E0213 15:53:27.547965 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:29.678027 containerd[1518]: time="2025-02-13T15:53:29.677970446Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:29.678995 containerd[1518]: time="2025-02-13T15:53:29.678957471Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:53:29.679957 containerd[1518]: time="2025-02-13T15:53:29.679896203Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:29.681589 containerd[1518]: time="2025-02-13T15:53:29.681562176Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.094930615s" Feb 13 15:53:29.681682 containerd[1518]: time="2025-02-13T15:53:29.681617963Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:53:29.685904 containerd[1518]: time="2025-02-13T15:53:29.685871871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:53:29.697465 containerd[1518]: time="2025-02-13T15:53:29.697425201Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:53:29.710913 containerd[1518]: time="2025-02-13T15:53:29.710873008Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\"" Feb 13 15:53:29.711305 containerd[1518]: time="2025-02-13T15:53:29.711278708Z" level=info msg="StartContainer for \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\"" Feb 13 15:53:29.745757 systemd[1]: Started 
cri-containerd-e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317.scope - libcontainer container e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317. Feb 13 15:53:29.772426 containerd[1518]: time="2025-02-13T15:53:29.772361794Z" level=info msg="StartContainer for \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\" returns successfully" Feb 13 15:53:29.783334 systemd[1]: cri-containerd-e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317.scope: Deactivated successfully. Feb 13 15:53:29.852355 update_engine[1512]: I20250213 15:53:29.852260 1512 update_attempter.cc:509] Updating boot flags... Feb 13 15:53:30.040271 containerd[1518]: time="2025-02-13T15:53:30.039765234Z" level=info msg="shim disconnected" id=e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317 namespace=k8s.io Feb 13 15:53:30.040271 containerd[1518]: time="2025-02-13T15:53:30.039822292Z" level=warning msg="cleaning up after shim disconnected" id=e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317 namespace=k8s.io Feb 13 15:53:30.040271 containerd[1518]: time="2025-02-13T15:53:30.039830317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:53:30.056712 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3062) Feb 13 15:53:30.101642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3065) Feb 13 15:53:30.555965 kubelet[2602]: E0213 15:53:30.555924 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:30.557877 containerd[1518]: time="2025-02-13T15:53:30.557733026Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:53:30.576334 containerd[1518]: time="2025-02-13T15:53:30.576280018Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\"" Feb 13 15:53:30.576791 containerd[1518]: time="2025-02-13T15:53:30.576748567Z" level=info msg="StartContainer for \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\"" Feb 13 15:53:30.601724 systemd[1]: Started cri-containerd-1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067.scope - libcontainer container 1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067. Feb 13 15:53:30.626530 containerd[1518]: time="2025-02-13T15:53:30.626488708Z" level=info msg="StartContainer for \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\" returns successfully" Feb 13 15:53:30.639936 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:53:30.640213 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:53:30.640396 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:53:30.645052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:53:30.645305 systemd[1]: cri-containerd-1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067.scope: Deactivated successfully. Feb 13 15:53:30.669385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:53:30.669659 containerd[1518]: time="2025-02-13T15:53:30.669538876Z" level=info msg="shim disconnected" id=1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067 namespace=k8s.io Feb 13 15:53:30.669659 containerd[1518]: time="2025-02-13T15:53:30.669591256Z" level=warning msg="cleaning up after shim disconnected" id=1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067 namespace=k8s.io Feb 13 15:53:30.669659 containerd[1518]: time="2025-02-13T15:53:30.669618397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:53:30.708270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317-rootfs.mount: Deactivated successfully. Feb 13 15:53:31.140280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425978174.mount: Deactivated successfully. Feb 13 15:53:31.416637 containerd[1518]: time="2025-02-13T15:53:31.416488685Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:31.417406 containerd[1518]: time="2025-02-13T15:53:31.417357221Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:53:31.418721 containerd[1518]: time="2025-02-13T15:53:31.418679939Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:31.419868 containerd[1518]: time="2025-02-13T15:53:31.419825471Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.733919736s" Feb 13 15:53:31.419868 containerd[1518]: time="2025-02-13T15:53:31.419864896Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:53:31.422573 containerd[1518]: time="2025-02-13T15:53:31.422538276Z" level=info msg="CreateContainer within sandbox \"c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:53:31.435237 containerd[1518]: time="2025-02-13T15:53:31.435198590Z" level=info msg="CreateContainer within sandbox \"c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\"" Feb 13 15:53:31.435665 containerd[1518]: time="2025-02-13T15:53:31.435642932Z" level=info msg="StartContainer for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\"" Feb 13 15:53:31.463731 systemd[1]: Started cri-containerd-359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e.scope - libcontainer container 359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e. 
Feb 13 15:53:31.488428 containerd[1518]: time="2025-02-13T15:53:31.488395887Z" level=info msg="StartContainer for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" returns successfully" Feb 13 15:53:31.561305 kubelet[2602]: E0213 15:53:31.561264 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:31.563981 kubelet[2602]: E0213 15:53:31.563956 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:31.565542 containerd[1518]: time="2025-02-13T15:53:31.565505083Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:53:31.596378 containerd[1518]: time="2025-02-13T15:53:31.596327195Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\"" Feb 13 15:53:31.596872 containerd[1518]: time="2025-02-13T15:53:31.596846008Z" level=info msg="StartContainer for \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\"" Feb 13 15:53:31.627319 kubelet[2602]: I0213 15:53:31.627252 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-l77zm" podStartSLOduration=0.889894229 podStartE2EDuration="12.627236061s" podCreationTimestamp="2025-02-13 15:53:19 +0000 UTC" firstStartedPulling="2025-02-13 15:53:19.683288846 +0000 UTC m=+5.421761140" lastFinishedPulling="2025-02-13 15:53:31.420630678 +0000 UTC m=+17.159102972" observedRunningTime="2025-02-13 15:53:31.573793338 +0000 UTC m=+17.312265632" watchObservedRunningTime="2025-02-13 15:53:31.627236061 +0000 UTC m=+17.365708365" Feb 13 15:53:31.647718 systemd[1]: Started cri-containerd-05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487.scope - libcontainer container 05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487. Feb 13 15:53:31.686124 containerd[1518]: time="2025-02-13T15:53:31.685967902Z" level=info msg="StartContainer for \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\" returns successfully" Feb 13 15:53:31.691725 systemd[1]: cri-containerd-05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487.scope: Deactivated successfully. Feb 13 15:53:31.717348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487-rootfs.mount: Deactivated successfully. 
Feb 13 15:53:31.902581 containerd[1518]: time="2025-02-13T15:53:31.902521727Z" level=info msg="shim disconnected" id=05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487 namespace=k8s.io Feb 13 15:53:31.902581 containerd[1518]: time="2025-02-13T15:53:31.902571230Z" level=warning msg="cleaning up after shim disconnected" id=05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487 namespace=k8s.io Feb 13 15:53:31.902581 containerd[1518]: time="2025-02-13T15:53:31.902580117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:53:32.568950 kubelet[2602]: E0213 15:53:32.568895 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:32.569437 kubelet[2602]: E0213 15:53:32.568985 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:32.570501 containerd[1518]: time="2025-02-13T15:53:32.570430069Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:53:32.585611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258579015.mount: Deactivated successfully. Feb 13 15:53:32.587639 containerd[1518]: time="2025-02-13T15:53:32.587587358Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\"" Feb 13 15:53:32.588176 containerd[1518]: time="2025-02-13T15:53:32.588138302Z" level=info msg="StartContainer for \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\"" Feb 13 15:53:32.619788 systemd[1]: Started cri-containerd-74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032.scope - libcontainer container 74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032. Feb 13 15:53:32.643988 systemd[1]: cri-containerd-74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032.scope: Deactivated successfully. Feb 13 15:53:32.646305 containerd[1518]: time="2025-02-13T15:53:32.646261563Z" level=info msg="StartContainer for \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\" returns successfully" Feb 13 15:53:32.672226 containerd[1518]: time="2025-02-13T15:53:32.672154033Z" level=info msg="shim disconnected" id=74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032 namespace=k8s.io Feb 13 15:53:32.672226 containerd[1518]: time="2025-02-13T15:53:32.672220759Z" level=warning msg="cleaning up after shim disconnected" id=74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032 namespace=k8s.io Feb 13 15:53:32.672520 containerd[1518]: time="2025-02-13T15:53:32.672232040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:53:32.708546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032-rootfs.mount: Deactivated successfully. 
Feb 13 15:53:33.572607 kubelet[2602]: E0213 15:53:33.572576 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:33.574203 containerd[1518]: time="2025-02-13T15:53:33.574168485Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:53:33.593426 containerd[1518]: time="2025-02-13T15:53:33.593390633Z" level=info msg="CreateContainer within sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\"" Feb 13 15:53:33.594084 containerd[1518]: time="2025-02-13T15:53:33.593870571Z" level=info msg="StartContainer for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\"" Feb 13 15:53:33.633855 systemd[1]: Started cri-containerd-7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc.scope - libcontainer container 7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc. Feb 13 15:53:33.667235 containerd[1518]: time="2025-02-13T15:53:33.667196175Z" level=info msg="StartContainer for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" returns successfully" Feb 13 15:53:33.806987 kubelet[2602]: I0213 15:53:33.806937 2602 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:53:33.837038 systemd[1]: Created slice kubepods-burstable-pod7636b629_d7fa_42da_87fd_82a20bcadfe4.slice - libcontainer container kubepods-burstable-pod7636b629_d7fa_42da_87fd_82a20bcadfe4.slice. Feb 13 15:53:33.843246 systemd[1]: Created slice kubepods-burstable-poddad38b5b_71d7_4c4b_be31_4a57d4f41085.slice - libcontainer container kubepods-burstable-poddad38b5b_71d7_4c4b_be31_4a57d4f41085.slice. 
Feb 13 15:53:33.946882 kubelet[2602]: I0213 15:53:33.946844 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn2mm\" (UniqueName: \"kubernetes.io/projected/7636b629-d7fa-42da-87fd-82a20bcadfe4-kube-api-access-dn2mm\") pod \"coredns-6f6b679f8f-2jxsx\" (UID: \"7636b629-d7fa-42da-87fd-82a20bcadfe4\") " pod="kube-system/coredns-6f6b679f8f-2jxsx" Feb 13 15:53:33.946882 kubelet[2602]: I0213 15:53:33.946886 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjnk7\" (UniqueName: \"kubernetes.io/projected/dad38b5b-71d7-4c4b-be31-4a57d4f41085-kube-api-access-sjnk7\") pod \"coredns-6f6b679f8f-kxxxj\" (UID: \"dad38b5b-71d7-4c4b-be31-4a57d4f41085\") " pod="kube-system/coredns-6f6b679f8f-kxxxj" Feb 13 15:53:33.947060 kubelet[2602]: I0213 15:53:33.946906 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dad38b5b-71d7-4c4b-be31-4a57d4f41085-config-volume\") pod \"coredns-6f6b679f8f-kxxxj\" (UID: \"dad38b5b-71d7-4c4b-be31-4a57d4f41085\") " pod="kube-system/coredns-6f6b679f8f-kxxxj" Feb 13 15:53:33.947060 kubelet[2602]: I0213 15:53:33.946925 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7636b629-d7fa-42da-87fd-82a20bcadfe4-config-volume\") pod \"coredns-6f6b679f8f-2jxsx\" (UID: \"7636b629-d7fa-42da-87fd-82a20bcadfe4\") " pod="kube-system/coredns-6f6b679f8f-2jxsx" Feb 13 15:53:34.142326 kubelet[2602]: E0213 15:53:34.142006 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:34.148208 kubelet[2602]: E0213 15:53:34.147365 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:34.150777 containerd[1518]: time="2025-02-13T15:53:34.150735962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kxxxj,Uid:dad38b5b-71d7-4c4b-be31-4a57d4f41085,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:34.151220 containerd[1518]: time="2025-02-13T15:53:34.151180212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2jxsx,Uid:7636b629-d7fa-42da-87fd-82a20bcadfe4,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:34.581854 kubelet[2602]: E0213 15:53:34.581822 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:34.600866 kubelet[2602]: I0213 15:53:34.600684 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gzzlx" podStartSLOduration=5.50081858 podStartE2EDuration="15.600668704s" podCreationTimestamp="2025-02-13 15:53:19 +0000 UTC" firstStartedPulling="2025-02-13 15:53:19.585881311 +0000 UTC m=+5.324353605" lastFinishedPulling="2025-02-13 15:53:29.685731434 +0000 UTC m=+15.424203729" observedRunningTime="2025-02-13 15:53:34.59978392 +0000 UTC m=+20.338256234" watchObservedRunningTime="2025-02-13 15:53:34.600668704 +0000 UTC m=+20.339141008" Feb 13 15:53:35.577484 kubelet[2602]: E0213 15:53:35.577448 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:35.834800 systemd-networkd[1449]: cilium_host: Link UP Feb 13 15:53:35.834956 systemd-networkd[1449]: cilium_net: Link UP Feb 13 15:53:35.835137 systemd-networkd[1449]: cilium_net: Gained carrier Feb 13 15:53:35.835315 systemd-networkd[1449]: cilium_host: Gained carrier Feb 13 15:53:35.928577 systemd-networkd[1449]: cilium_vxlan: Link UP Feb 13 15:53:35.928586 systemd-networkd[1449]: cilium_vxlan: Gained carrier Feb 13 15:53:36.131626 kernel: NET: Registered PF_ALG protocol family Feb 13 15:53:36.289755 systemd-networkd[1449]: cilium_host: Gained IPv6LL Feb 13 15:53:36.442739 systemd-networkd[1449]: cilium_net: Gained IPv6LL Feb 13 15:53:36.579268 kubelet[2602]: E0213 15:53:36.579170 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:36.751002 systemd-networkd[1449]: lxc_health: Link UP Feb 13 15:53:36.760043 systemd-networkd[1449]: lxc_health: Gained carrier Feb 13 15:53:37.246168 systemd-networkd[1449]: lxc9541ea76bf89: Link UP Feb 13 15:53:37.251625 kernel: eth0: renamed from tmp3efa1 Feb 13 15:53:37.259380 systemd-networkd[1449]: lxccffa300a4fff: Link UP Feb 13 15:53:37.266624 kernel: eth0: renamed from tmpb0098 Feb 13 15:53:37.275884 systemd-networkd[1449]: lxc9541ea76bf89: Gained carrier Feb 13 15:53:37.276108 systemd-networkd[1449]: lxccffa300a4fff: Gained carrier Feb 13 15:53:37.399784 systemd-networkd[1449]: cilium_vxlan: Gained IPv6LL Feb 13 15:53:37.580992 kubelet[2602]: E0213 15:53:37.580886 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:38.295779 systemd-networkd[1449]: lxc9541ea76bf89: Gained IPv6LL Feb 13 15:53:38.582200 kubelet[2602]: E0213 15:53:38.582088 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:38.679737 systemd-networkd[1449]: lxccffa300a4fff: Gained IPv6LL Feb 13 15:53:38.807700 systemd-networkd[1449]: lxc_health: Gained IPv6LL Feb 13 15:53:39.583045 kubelet[2602]: E0213 15:53:39.583000 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:40.545576 containerd[1518]: time="2025-02-13T15:53:40.544865252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:40.545576 containerd[1518]: time="2025-02-13T15:53:40.545559351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:40.545576 containerd[1518]: time="2025-02-13T15:53:40.545580622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:40.546035 containerd[1518]: time="2025-02-13T15:53:40.545699746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:40.568732 systemd[1]: Started cri-containerd-b00982849c894114c44ef436de7e0d2ed94b79b1e8dd063be18944c0a9feda92.scope - libcontainer container b00982849c894114c44ef436de7e0d2ed94b79b1e8dd063be18944c0a9feda92. Feb 13 15:53:40.580751 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:53:40.581531 containerd[1518]: time="2025-02-13T15:53:40.581291311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:40.581531 containerd[1518]: time="2025-02-13T15:53:40.581359219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:40.581531 containerd[1518]: time="2025-02-13T15:53:40.581374919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:40.581531 containerd[1518]: time="2025-02-13T15:53:40.581457084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:40.603731 systemd[1]: Started cri-containerd-3efa1111c4abc75b2ea3c3a5373d54448d48e8ee50df5188cf2d64e305cafc82.scope - libcontainer container 3efa1111c4abc75b2ea3c3a5373d54448d48e8ee50df5188cf2d64e305cafc82. Feb 13 15:53:40.608911 containerd[1518]: time="2025-02-13T15:53:40.608836462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kxxxj,Uid:dad38b5b-71d7-4c4b-be31-4a57d4f41085,Namespace:kube-system,Attempt:0,} returns sandbox id \"b00982849c894114c44ef436de7e0d2ed94b79b1e8dd063be18944c0a9feda92\"" Feb 13 15:53:40.609766 kubelet[2602]: E0213 15:53:40.609739 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:40.615323 containerd[1518]: time="2025-02-13T15:53:40.615124086Z" level=info msg="CreateContainer within sandbox \"b00982849c894114c44ef436de7e0d2ed94b79b1e8dd063be18944c0a9feda92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:53:40.617170 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:53:40.631573 containerd[1518]: time="2025-02-13T15:53:40.631529911Z" level=info msg="CreateContainer within sandbox \"b00982849c894114c44ef436de7e0d2ed94b79b1e8dd063be18944c0a9feda92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91bdcc137d042c5f9043bded5c41e5f336b3446f57fd7a321dada77b2148bc19\"" Feb 13 15:53:40.634943 containerd[1518]: time="2025-02-13T15:53:40.634909498Z" level=info msg="StartContainer for \"91bdcc137d042c5f9043bded5c41e5f336b3446f57fd7a321dada77b2148bc19\"" Feb 13 15:53:40.639868 containerd[1518]: time="2025-02-13T15:53:40.639831364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2jxsx,Uid:7636b629-d7fa-42da-87fd-82a20bcadfe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3efa1111c4abc75b2ea3c3a5373d54448d48e8ee50df5188cf2d64e305cafc82\"" Feb 13 15:53:40.640632 kubelet[2602]: E0213 15:53:40.640539 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:40.642705 
containerd[1518]: time="2025-02-13T15:53:40.642670831Z" level=info msg="CreateContainer within sandbox \"3efa1111c4abc75b2ea3c3a5373d54448d48e8ee50df5188cf2d64e305cafc82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:53:40.658621 containerd[1518]: time="2025-02-13T15:53:40.658554741Z" level=info msg="CreateContainer within sandbox \"3efa1111c4abc75b2ea3c3a5373d54448d48e8ee50df5188cf2d64e305cafc82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"795db44f8eac43405b896136d50324bc43f035896432728856562e2b19a23814\"" Feb 13 15:53:40.659138 containerd[1518]: time="2025-02-13T15:53:40.659113335Z" level=info msg="StartContainer for \"795db44f8eac43405b896136d50324bc43f035896432728856562e2b19a23814\"" Feb 13 15:53:40.663756 systemd[1]: Started cri-containerd-91bdcc137d042c5f9043bded5c41e5f336b3446f57fd7a321dada77b2148bc19.scope - libcontainer container 91bdcc137d042c5f9043bded5c41e5f336b3446f57fd7a321dada77b2148bc19. Feb 13 15:53:40.684737 systemd[1]: Started cri-containerd-795db44f8eac43405b896136d50324bc43f035896432728856562e2b19a23814.scope - libcontainer container 795db44f8eac43405b896136d50324bc43f035896432728856562e2b19a23814. Feb 13 15:53:40.691803 containerd[1518]: time="2025-02-13T15:53:40.691751307Z" level=info msg="StartContainer for \"91bdcc137d042c5f9043bded5c41e5f336b3446f57fd7a321dada77b2148bc19\" returns successfully" Feb 13 15:53:40.712113 containerd[1518]: time="2025-02-13T15:53:40.712009820Z" level=info msg="StartContainer for \"795db44f8eac43405b896136d50324bc43f035896432728856562e2b19a23814\" returns successfully" Feb 13 15:53:41.551303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527355218.mount: Deactivated successfully. Feb 13 15:53:41.599255 kubelet[2602]: E0213 15:53:41.599219 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:41.599930 kubelet[2602]: E0213 15:53:41.599220 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:41.628896 kubelet[2602]: I0213 15:53:41.628025 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kxxxj" podStartSLOduration=22.628005367 podStartE2EDuration="22.628005367s" podCreationTimestamp="2025-02-13 15:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:41.620400052 +0000 UTC m=+27.358872356" watchObservedRunningTime="2025-02-13 15:53:41.628005367 +0000 UTC m=+27.366477661" Feb 13 15:53:42.467401 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:37442.service - OpenSSH per-connection server daemon (10.0.0.1:37442). Feb 13 15:53:42.509574 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 37442 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:53:42.511179 sshd-session[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:42.515274 systemd-logind[1509]: New session 8 of user core. Feb 13 15:53:42.525725 systemd[1]: Started session-8.scope - Session 8 of User core. 
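The recurring dns.go:153 errors above are kubelet warning that the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so it truncates the list to the first three entries (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) when building pod DNS configs. A minimal sketch of that cap, assuming the standard resolv.conf location and format:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit (MAXNS) that
// kubelet's dns.go enforces when assembling a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```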
Feb 13 15:53:42.648256 sshd[3992]: Connection closed by 10.0.0.1 port 37442 Feb 13 15:53:42.648640 sshd-session[3990]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:42.652937 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:37442.service: Deactivated successfully. Feb 13 15:53:42.655625 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:53:42.656547 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:53:42.657502 systemd-logind[1509]: Removed session 8. Feb 13 15:53:44.143036 kubelet[2602]: E0213 15:53:44.142984 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:44.148338 kubelet[2602]: E0213 15:53:44.148311 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:44.152196 kubelet[2602]: I0213 15:53:44.152062 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2jxsx" podStartSLOduration=25.152046576 podStartE2EDuration="25.152046576s" podCreationTimestamp="2025-02-13 15:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:41.628349085 +0000 UTC m=+27.366821369" watchObservedRunningTime="2025-02-13 15:53:44.152046576 +0000 UTC m=+29.890518870" Feb 13 15:53:44.601586 kubelet[2602]: E0213 15:53:44.601554 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:44.601731 kubelet[2602]: E0213 15:53:44.601629 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:53:47.665165 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:37458.service - OpenSSH per-connection server daemon (10.0.0.1:37458). Feb 13 15:53:47.704200 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 37458 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:53:47.705438 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:47.709174 systemd-logind[1509]: New session 9 of user core. Feb 13 15:53:47.718710 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:53:47.828573 sshd[4018]: Connection closed by 10.0.0.1 port 37458 Feb 13 15:53:47.828942 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:47.832333 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:37458.service: Deactivated successfully. Feb 13 15:53:47.834131 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:53:47.834825 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:53:47.835741 systemd-logind[1509]: Removed session 9. Feb 13 15:53:52.841981 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:59090.service - OpenSSH per-connection server daemon (10.0.0.1:59090). 
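The pod_startup_latency_tracker entries encode a simple relationship: podStartSLOduration is the end-to-end duration minus the image-pull window, and when no pull was needed (firstStartedPulling/lastFinishedPulling at the zero time, as for both coredns pods) the two values coincide. A quick check of that arithmetic against the cilium-gzzlx entry earlier, using its logged values verbatim:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the kubelet entry for kube-system/cilium-gzzlx.
	e2e, _ := time.ParseDuration("15.600668704s") // podStartE2EDuration
	first, _ := time.Parse(time.RFC3339Nano, "2025-02-13T15:53:19.585881311Z") // firstStartedPulling
	last, _ := time.Parse(time.RFC3339Nano, "2025-02-13T15:53:29.685731434Z")  // lastFinishedPulling

	// podStartSLOduration excludes the image-pull window from the
	// end-to-end startup duration.
	slo := e2e - last.Sub(first)
	fmt.Println(slo) // 5.500818581s, matching podStartSLOduration=5.50081858
}
```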
Feb 13 15:53:52.885613 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 59090 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:53:52.886966 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:52.891564 systemd-logind[1509]: New session 10 of user core. Feb 13 15:53:52.904731 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:53:53.016686 sshd[4037]: Connection closed by 10.0.0.1 port 59090 Feb 13 15:53:53.017046 sshd-session[4035]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:53.021301 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:59090.service: Deactivated successfully. Feb 13 15:53:53.023417 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:53:53.024083 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:53:53.024913 systemd-logind[1509]: Removed session 10. Feb 13 15:53:58.028264 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:59104.service - OpenSSH per-connection server daemon (10.0.0.1:59104). Feb 13 15:53:58.067510 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 59104 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:53:58.068975 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:58.072546 systemd-logind[1509]: New session 11 of user core. Feb 13 15:53:58.082723 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:53:58.185726 sshd[4054]: Connection closed by 10.0.0.1 port 59104 Feb 13 15:53:58.186053 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:58.204158 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:59104.service: Deactivated successfully. Feb 13 15:53:58.205832 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:53:58.206547 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:53:58.223056 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:59110.service - OpenSSH per-connection server daemon (10.0.0.1:59110). Feb 13 15:53:58.223731 systemd-logind[1509]: Removed session 11. Feb 13 15:53:58.257728 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 59110 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:53:58.259129 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:58.263379 systemd-logind[1509]: New session 12 of user core. Feb 13 15:53:58.272729 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:53:58.406840 sshd[4070]: Connection closed by 10.0.0.1 port 59110 Feb 13 15:53:58.407986 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:58.417049 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:59110.service: Deactivated successfully. Feb 13 15:53:58.419110 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:53:58.421340 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:53:58.432855 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:59124.service - OpenSSH per-connection server daemon (10.0.0.1:59124). Feb 13 15:53:58.433439 systemd-logind[1509]: Removed session 12. 
Feb 13 15:53:58.467362 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 59124 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:53:58.468643 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:58.472652 systemd-logind[1509]: New session 13 of user core. Feb 13 15:53:58.481711 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:53:58.586538 sshd[4084]: Connection closed by 10.0.0.1 port 59124 Feb 13 15:53:58.586881 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:58.591005 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:59124.service: Deactivated successfully. Feb 13 15:53:58.593123 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:53:58.593849 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:53:58.594635 systemd-logind[1509]: Removed session 13. Feb 13 15:54:03.598868 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:43598.service - OpenSSH per-connection server daemon (10.0.0.1:43598). Feb 13 15:54:03.636983 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 43598 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:03.638571 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:03.642805 systemd-logind[1509]: New session 14 of user core. Feb 13 15:54:03.651704 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:54:03.756688 sshd[4099]: Connection closed by 10.0.0.1 port 43598 Feb 13 15:54:03.757036 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:03.761280 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:43598.service: Deactivated successfully. Feb 13 15:54:03.763491 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:54:03.764481 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:54:03.765425 systemd-logind[1509]: Removed session 14. Feb 13 15:54:08.768561 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:43600.service - OpenSSH per-connection server daemon (10.0.0.1:43600). Feb 13 15:54:08.807608 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 43600 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:08.809166 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:08.813012 systemd-logind[1509]: New session 15 of user core. Feb 13 15:54:08.822697 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:54:08.925541 sshd[4115]: Connection closed by 10.0.0.1 port 43600 Feb 13 15:54:08.925927 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:08.938244 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:43600.service: Deactivated successfully. Feb 13 15:54:08.939926 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:54:08.942180 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:54:08.949850 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:43610.service - OpenSSH per-connection server daemon (10.0.0.1:43610). Feb 13 15:54:08.950739 systemd-logind[1509]: Removed session 15. 
Feb 13 15:54:08.984643 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 43610 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:08.985982 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:08.989913 systemd-logind[1509]: New session 16 of user core. Feb 13 15:54:08.998709 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:54:09.166001 sshd[4130]: Connection closed by 10.0.0.1 port 43610 Feb 13 15:54:09.167549 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:09.182182 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:43610.service: Deactivated successfully. Feb 13 15:54:09.184030 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:54:09.184732 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:54:09.189827 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:43618.service - OpenSSH per-connection server daemon (10.0.0.1:43618). Feb 13 15:54:09.190651 systemd-logind[1509]: Removed session 16. Feb 13 15:54:09.228738 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 43618 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:09.229996 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:09.234160 systemd-logind[1509]: New session 17 of user core. Feb 13 15:54:09.243715 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:54:10.479520 sshd[4143]: Connection closed by 10.0.0.1 port 43618 Feb 13 15:54:10.479877 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:10.491077 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:43618.service: Deactivated successfully. Feb 13 15:54:10.494181 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:54:10.496394 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:54:10.503047 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:42320.service - OpenSSH per-connection server daemon (10.0.0.1:42320). Feb 13 15:54:10.504668 systemd-logind[1509]: Removed session 17. Feb 13 15:54:10.539359 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 42320 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:10.540857 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:10.545203 systemd-logind[1509]: New session 18 of user core. Feb 13 15:54:10.550711 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:54:10.774542 sshd[4166]: Connection closed by 10.0.0.1 port 42320 Feb 13 15:54:10.775468 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:10.785716 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:42320.service: Deactivated successfully. Feb 13 15:54:10.787672 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:54:10.788674 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:54:10.790021 systemd-logind[1509]: Removed session 18. Feb 13 15:54:10.806874 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:42330.service - OpenSSH per-connection server daemon (10.0.0.1:42330). 
Feb 13 15:54:10.845006 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 42330 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:10.846348 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:10.850755 systemd-logind[1509]: New session 19 of user core. Feb 13 15:54:10.859721 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:54:10.966652 sshd[4180]: Connection closed by 10.0.0.1 port 42330 Feb 13 15:54:10.966996 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:10.971243 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:42330.service: Deactivated successfully. Feb 13 15:54:10.973412 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:54:10.974096 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:54:10.974922 systemd-logind[1509]: Removed session 19. Feb 13 15:54:15.979544 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:42338.service - OpenSSH per-connection server daemon (10.0.0.1:42338). Feb 13 15:54:16.017568 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 42338 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:16.018874 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:16.022907 systemd-logind[1509]: New session 20 of user core. Feb 13 15:54:16.033720 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:54:16.134829 sshd[4197]: Connection closed by 10.0.0.1 port 42338 Feb 13 15:54:16.135191 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:16.138791 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:42338.service: Deactivated successfully. Feb 13 15:54:16.140863 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:54:16.141625 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:54:16.142450 systemd-logind[1509]: Removed session 20. Feb 13 15:54:21.147650 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:51618.service - OpenSSH per-connection server daemon (10.0.0.1:51618). Feb 13 15:54:21.186412 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:21.187755 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:21.191795 systemd-logind[1509]: New session 21 of user core. Feb 13 15:54:21.208713 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:54:21.314628 sshd[4217]: Connection closed by 10.0.0.1 port 51618 Feb 13 15:54:21.314948 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:21.318798 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:51618.service: Deactivated successfully. Feb 13 15:54:21.320690 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:54:21.321436 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:54:21.322386 systemd-logind[1509]: Removed session 21. Feb 13 15:54:26.327506 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:51622.service - OpenSSH per-connection server daemon (10.0.0.1:51622). 
Feb 13 15:54:26.366658 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 51622 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:26.368098 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:26.372019 systemd-logind[1509]: New session 22 of user core. Feb 13 15:54:26.387717 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:54:26.491119 sshd[4232]: Connection closed by 10.0.0.1 port 51622 Feb 13 15:54:26.491517 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:26.506475 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:51622.service: Deactivated successfully. Feb 13 15:54:26.508426 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:54:26.510046 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:54:26.519019 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:51626.service - OpenSSH per-connection server daemon (10.0.0.1:51626). Feb 13 15:54:26.519942 systemd-logind[1509]: Removed session 22. Feb 13 15:54:26.553265 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 51626 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:54:26.554647 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:54:26.558788 systemd-logind[1509]: New session 23 of user core. Feb 13 15:54:26.570716 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:54:27.886225 containerd[1518]: time="2025-02-13T15:54:27.886116784Z" level=info msg="StopContainer for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" with timeout 30 (s)" Feb 13 15:54:27.896301 containerd[1518]: time="2025-02-13T15:54:27.895770236Z" level=info msg="Stop container \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" with signal terminated" Feb 13 15:54:27.903027 systemd[1]: run-containerd-runc-k8s.io-7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc-runc.uF016L.mount: Deactivated successfully. Feb 13 15:54:27.910352 systemd[1]: cri-containerd-359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e.scope: Deactivated successfully. Feb 13 15:54:27.920013 containerd[1518]: time="2025-02-13T15:54:27.919889014Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:54:27.927545 containerd[1518]: time="2025-02-13T15:54:27.927494074Z" level=info msg="StopContainer for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" with timeout 2 (s)" Feb 13 15:54:27.927891 containerd[1518]: time="2025-02-13T15:54:27.927870951Z" level=info msg="Stop container \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" with signal terminated" Feb 13 15:54:27.932788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e-rootfs.mount: Deactivated successfully. 
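The `failed to reload cni configuration` error just above is a direct consequence of the teardown in progress: stopping the Cilium pod removes /etc/cni/net.d/05-cilium.conf, containerd's config watcher sees the REMOVE event, finds no usable config left, and reports "cni plugin not initialized" (which kubelet later surfaces as NetworkPluginNotReady near the end of this log). A rough sketch of the emptiness check, assuming the conventional config directory and the extensions libcni accepts:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni loads
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		fmt.Printf("no network config found in %s: cni plugin not initialized\n", dir)
	}
}
```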
Feb 13 15:54:27.935174 systemd-networkd[1449]: lxc_health: Link DOWN Feb 13 15:54:27.935183 systemd-networkd[1449]: lxc_health: Lost carrier Feb 13 15:54:27.944535 containerd[1518]: time="2025-02-13T15:54:27.944482526Z" level=info msg="shim disconnected" id=359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e namespace=k8s.io Feb 13 15:54:27.944535 containerd[1518]: time="2025-02-13T15:54:27.944528375Z" level=warning msg="cleaning up after shim disconnected" id=359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e namespace=k8s.io Feb 13 15:54:27.944535 containerd[1518]: time="2025-02-13T15:54:27.944538354Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:54:27.957288 systemd[1]: cri-containerd-7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc.scope: Deactivated successfully. Feb 13 15:54:27.957937 systemd[1]: cri-containerd-7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc.scope: Consumed 6.514s CPU time, 128.2M memory peak, 332K read from disk, 13.3M written to disk. Feb 13 15:54:27.961338 containerd[1518]: time="2025-02-13T15:54:27.961304447Z" level=info msg="StopContainer for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" returns successfully" Feb 13 15:54:27.965928 containerd[1518]: time="2025-02-13T15:54:27.965894681Z" level=info msg="StopPodSandbox for \"c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db\"" Feb 13 15:54:27.978113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc-rootfs.mount: Deactivated successfully. Feb 13 15:54:27.981089 containerd[1518]: time="2025-02-13T15:54:27.965933717Z" level=info msg="Container to stop \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:54:27.983014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db-shm.mount: Deactivated successfully. Feb 13 15:54:27.985904 containerd[1518]: time="2025-02-13T15:54:27.985864385Z" level=info msg="shim disconnected" id=7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc namespace=k8s.io Feb 13 15:54:27.985977 containerd[1518]: time="2025-02-13T15:54:27.985904251Z" level=warning msg="cleaning up after shim disconnected" id=7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc namespace=k8s.io Feb 13 15:54:27.985977 containerd[1518]: time="2025-02-13T15:54:27.985913239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:54:27.987575 systemd[1]: cri-containerd-c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db.scope: Deactivated successfully. 
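The `Consumed 6.514s CPU time, 128.2M memory peak ...` line is systemd's cgroup v2 accounting for the container's scope unit, reported as the scope deactivates. The same figures can be read from cgroupfs while a scope is still alive; a sketch, with a placeholder path standing in for the long cri-containerd-<id>.scope units above, and assuming a cgroup v2 kernel that exposes memory.peak:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Hypothetical scope path; on this host the real units are the
// cri-containerd-<64-hex-id>.scope entries seen in the log.
const cg = "/sys/fs/cgroup/system.slice/cri-containerd-example.scope"

func main() {
	stat, _ := os.ReadFile(cg + "/cpu.stat")
	for _, line := range strings.Split(string(stat), "\n") {
		if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
			us, _ := strconv.ParseInt(v, 10, 64)
			fmt.Printf("CPU time: %.3fs\n", float64(us)/1e6)
		}
	}
	peak, _ := os.ReadFile(cg + "/memory.peak") // high-water mark in bytes
	fmt.Printf("memory peak: %s", peak)
}
```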
Feb 13 15:54:28.002712 containerd[1518]: time="2025-02-13T15:54:28.002673789Z" level=info msg="StopContainer for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" returns successfully" Feb 13 15:54:28.003190 containerd[1518]: time="2025-02-13T15:54:28.003144767Z" level=info msg="StopPodSandbox for \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\"" Feb 13 15:54:28.003228 containerd[1518]: time="2025-02-13T15:54:28.003182680Z" level=info msg="Container to stop \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:54:28.003228 containerd[1518]: time="2025-02-13T15:54:28.003215021Z" level=info msg="Container to stop \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:54:28.003228 containerd[1518]: time="2025-02-13T15:54:28.003224480Z" level=info msg="Container to stop \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:54:28.003355 containerd[1518]: time="2025-02-13T15:54:28.003232555Z" level=info msg="Container to stop \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:54:28.003355 containerd[1518]: time="2025-02-13T15:54:28.003241804Z" level=info msg="Container to stop \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:54:28.009090 systemd[1]: cri-containerd-643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273.scope: Deactivated successfully. 
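The StopPodSandbox flow above enumerates every container ever created in the sandbox; the info-level "must be in running or unknown state, current state CONTAINER_EXITED" lines record that each one is already exited and needs no further stop, leaving only the sandbox itself to tear down. The same RPC can be issued directly at containerd's CRI socket; a sketch, assuming the default socket path and eliding error handling:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Sandbox ID copied from the StopPodSandbox entry above.
	_, _ = rt.StopPodSandbox(context.Background(), &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273",
	})
}
```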
Feb 13 15:54:28.013286 containerd[1518]: time="2025-02-13T15:54:28.012643325Z" level=info msg="shim disconnected" id=c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db namespace=k8s.io Feb 13 15:54:28.013431 containerd[1518]: time="2025-02-13T15:54:28.013291596Z" level=warning msg="cleaning up after shim disconnected" id=c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db namespace=k8s.io Feb 13 15:54:28.013431 containerd[1518]: time="2025-02-13T15:54:28.013313067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:54:28.037730 containerd[1518]: time="2025-02-13T15:54:28.037667683Z" level=info msg="shim disconnected" id=643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273 namespace=k8s.io Feb 13 15:54:28.037986 containerd[1518]: time="2025-02-13T15:54:28.037949416Z" level=warning msg="cleaning up after shim disconnected" id=643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273 namespace=k8s.io Feb 13 15:54:28.037986 containerd[1518]: time="2025-02-13T15:54:28.037966840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:54:28.043443 containerd[1518]: time="2025-02-13T15:54:28.043410964Z" level=info msg="TearDown network for sandbox \"c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db\" successfully" Feb 13 15:54:28.043443 containerd[1518]: time="2025-02-13T15:54:28.043432776Z" level=info msg="StopPodSandbox for \"c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db\" returns successfully" Feb 13 15:54:28.060564 containerd[1518]: time="2025-02-13T15:54:28.060475998Z" level=info msg="TearDown network for sandbox \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" successfully" Feb 13 15:54:28.060564 containerd[1518]: time="2025-02-13T15:54:28.060529040Z" level=info msg="StopPodSandbox for \"643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273\" returns successfully" Feb 13 15:54:28.234384 kubelet[2602]: I0213 15:54:28.233580 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cni-path\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.234384 kubelet[2602]: I0213 15:54:28.233634 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-etc-cni-netd\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.234384 kubelet[2602]: I0213 15:54:28.233649 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-xtables-lock\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.234384 kubelet[2602]: I0213 15:54:28.233677 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hubble-tls\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.234384 kubelet[2602]: I0213 15:54:28.233695 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vctq\" (UniqueName: 
\"kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-kube-api-access-5vctq\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.234384 kubelet[2602]: I0213 15:54:28.233710 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-run\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235194 kubelet[2602]: I0213 15:54:28.233723 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-kernel\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235194 kubelet[2602]: I0213 15:54:28.233738 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-net\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235194 kubelet[2602]: I0213 15:54:28.233722 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cni-path" (OuterVolumeSpecName: "cni-path") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235194 kubelet[2602]: I0213 15:54:28.233755 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/334fa82c-e2cd-466b-b195-288c1a3f64b2-cilium-config-path\") pod \"334fa82c-e2cd-466b-b195-288c1a3f64b2\" (UID: \"334fa82c-e2cd-466b-b195-288c1a3f64b2\") " Feb 13 15:54:28.235194 kubelet[2602]: I0213 15:54:28.233776 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-lib-modules\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235194 kubelet[2602]: I0213 15:54:28.233792 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-cgroup\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235352 kubelet[2602]: I0213 15:54:28.233806 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-bpf-maps\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235352 kubelet[2602]: I0213 15:54:28.233823 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hostproc\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235352 kubelet[2602]: I0213 15:54:28.233838 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-config-path\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235352 kubelet[2602]: I0213 15:54:28.233855 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa885608-4f6a-4364-9f96-0d3b14ef9f90-clustermesh-secrets\") pod \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\" (UID: \"aa885608-4f6a-4364-9f96-0d3b14ef9f90\") " Feb 13 15:54:28.235352 kubelet[2602]: I0213 15:54:28.233872 2602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvjhk\" (UniqueName: \"kubernetes.io/projected/334fa82c-e2cd-466b-b195-288c1a3f64b2-kube-api-access-rvjhk\") pod \"334fa82c-e2cd-466b-b195-288c1a3f64b2\" (UID: \"334fa82c-e2cd-466b-b195-288c1a3f64b2\") " Feb 13 15:54:28.235352 kubelet[2602]: I0213 15:54:28.233902 2602 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.235485 kubelet[2602]: I0213 15:54:28.234060 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235485 kubelet[2602]: I0213 15:54:28.234095 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235485 kubelet[2602]: I0213 15:54:28.234109 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235485 kubelet[2602]: I0213 15:54:28.234242 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235485 kubelet[2602]: I0213 15:54:28.234259 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235648 kubelet[2602]: I0213 15:54:28.234286 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235648 kubelet[2602]: I0213 15:54:28.234494 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hostproc" (OuterVolumeSpecName: "hostproc") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235648 kubelet[2602]: I0213 15:54:28.234541 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.235648 kubelet[2602]: I0213 15:54:28.234561 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:54:28.238617 kubelet[2602]: I0213 15:54:28.238512 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:54:28.240358 kubelet[2602]: I0213 15:54:28.240330 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-kube-api-access-5vctq" (OuterVolumeSpecName: "kube-api-access-5vctq") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "kube-api-access-5vctq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:54:28.240641 kubelet[2602]: I0213 15:54:28.240579 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:54:28.240807 kubelet[2602]: I0213 15:54:28.240663 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/334fa82c-e2cd-466b-b195-288c1a3f64b2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "334fa82c-e2cd-466b-b195-288c1a3f64b2" (UID: "334fa82c-e2cd-466b-b195-288c1a3f64b2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:54:28.240989 kubelet[2602]: I0213 15:54:28.240951 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/334fa82c-e2cd-466b-b195-288c1a3f64b2-kube-api-access-rvjhk" (OuterVolumeSpecName: "kube-api-access-rvjhk") pod "334fa82c-e2cd-466b-b195-288c1a3f64b2" (UID: "334fa82c-e2cd-466b-b195-288c1a3f64b2"). InnerVolumeSpecName "kube-api-access-rvjhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:54:28.241376 kubelet[2602]: I0213 15:54:28.241347 2602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa885608-4f6a-4364-9f96-0d3b14ef9f90-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aa885608-4f6a-4364-9f96-0d3b14ef9f90" (UID: "aa885608-4f6a-4364-9f96-0d3b14ef9f90"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:54:28.334622 kubelet[2602]: I0213 15:54:28.334580 2602 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rvjhk\" (UniqueName: \"kubernetes.io/projected/334fa82c-e2cd-466b-b195-288c1a3f64b2-kube-api-access-rvjhk\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334622 kubelet[2602]: I0213 15:54:28.334619 2602 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334630 2602 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334639 2602 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334650 2602 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334658 2602 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5vctq\" (UniqueName: \"kubernetes.io/projected/aa885608-4f6a-4364-9f96-0d3b14ef9f90-kube-api-access-5vctq\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334666 2602 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334674 2602 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334682 2602 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/334fa82c-e2cd-466b-b195-288c1a3f64b2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334720 kubelet[2602]: I0213 15:54:28.334690 2602 reconciler_common.go:288] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334902 kubelet[2602]: I0213 15:54:28.334697 2602 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334902 kubelet[2602]: I0213 15:54:28.334704 2602 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334902 kubelet[2602]: I0213 15:54:28.334712 2602 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa885608-4f6a-4364-9f96-0d3b14ef9f90-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334902 kubelet[2602]: I0213 15:54:28.334719 2602 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa885608-4f6a-4364-9f96-0d3b14ef9f90-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.334902 kubelet[2602]: I0213 15:54:28.334727 2602 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa885608-4f6a-4364-9f96-0d3b14ef9f90-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:54:28.529947 systemd[1]: Removed slice kubepods-besteffort-pod334fa82c_e2cd_466b_b195_288c1a3f64b2.slice - libcontainer container kubepods-besteffort-pod334fa82c_e2cd_466b_b195_288c1a3f64b2.slice. Feb 13 15:54:28.531339 systemd[1]: Removed slice kubepods-burstable-podaa885608_4f6a_4364_9f96_0d3b14ef9f90.slice - libcontainer container kubepods-burstable-podaa885608_4f6a_4364_9f96_0d3b14ef9f90.slice. Feb 13 15:54:28.531434 systemd[1]: kubepods-burstable-podaa885608_4f6a_4364_9f96_0d3b14ef9f90.slice: Consumed 6.620s CPU time, 128.5M memory peak, 352K read from disk, 13.3M written to disk. 
Feb 13 15:54:28.672517 kubelet[2602]: I0213 15:54:28.672420 2602 scope.go:117] "RemoveContainer" containerID="359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e" Feb 13 15:54:28.673717 containerd[1518]: time="2025-02-13T15:54:28.673477544Z" level=info msg="RemoveContainer for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\"" Feb 13 15:54:28.686522 containerd[1518]: time="2025-02-13T15:54:28.686473175Z" level=info msg="RemoveContainer for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" returns successfully" Feb 13 15:54:28.686772 kubelet[2602]: I0213 15:54:28.686745 2602 scope.go:117] "RemoveContainer" containerID="359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e" Feb 13 15:54:28.686990 containerd[1518]: time="2025-02-13T15:54:28.686946086Z" level=error msg="ContainerStatus for \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\": not found" Feb 13 15:54:28.693256 kubelet[2602]: E0213 15:54:28.693227 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\": not found" containerID="359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e" Feb 13 15:54:28.693347 kubelet[2602]: I0213 15:54:28.693257 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e"} err="failed to get container status \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"359140c40f9978d8c54e1e74cd0246b14e07d6d39cbb20a280cc036981712d6e\": not found" Feb 13 15:54:28.693347 kubelet[2602]: I0213 15:54:28.693342 2602 scope.go:117] "RemoveContainer" containerID="7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc" Feb 13 15:54:28.694320 containerd[1518]: time="2025-02-13T15:54:28.694293459Z" level=info msg="RemoveContainer for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\"" Feb 13 15:54:28.697611 containerd[1518]: time="2025-02-13T15:54:28.697567029Z" level=info msg="RemoveContainer for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" returns successfully" Feb 13 15:54:28.697759 kubelet[2602]: I0213 15:54:28.697725 2602 scope.go:117] "RemoveContainer" containerID="74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032" Feb 13 15:54:28.698678 containerd[1518]: time="2025-02-13T15:54:28.698640117Z" level=info msg="RemoveContainer for \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\"" Feb 13 15:54:28.702013 containerd[1518]: time="2025-02-13T15:54:28.701982320Z" level=info msg="RemoveContainer for \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\" returns successfully" Feb 13 15:54:28.702164 kubelet[2602]: I0213 15:54:28.702136 2602 scope.go:117] "RemoveContainer" containerID="05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487" Feb 13 15:54:28.702983 containerd[1518]: time="2025-02-13T15:54:28.702951678Z" level=info msg="RemoveContainer for \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\"" Feb 13 15:54:28.706146 containerd[1518]: time="2025-02-13T15:54:28.706118783Z" level=info 
msg="RemoveContainer for \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\" returns successfully" Feb 13 15:54:28.706318 kubelet[2602]: I0213 15:54:28.706239 2602 scope.go:117] "RemoveContainer" containerID="1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067" Feb 13 15:54:28.707021 containerd[1518]: time="2025-02-13T15:54:28.706997006Z" level=info msg="RemoveContainer for \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\"" Feb 13 15:54:28.709971 containerd[1518]: time="2025-02-13T15:54:28.709935660Z" level=info msg="RemoveContainer for \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\" returns successfully" Feb 13 15:54:28.710078 kubelet[2602]: I0213 15:54:28.710058 2602 scope.go:117] "RemoveContainer" containerID="e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317" Feb 13 15:54:28.710819 containerd[1518]: time="2025-02-13T15:54:28.710793875Z" level=info msg="RemoveContainer for \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\"" Feb 13 15:54:28.713788 containerd[1518]: time="2025-02-13T15:54:28.713759131Z" level=info msg="RemoveContainer for \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\" returns successfully" Feb 13 15:54:28.713898 kubelet[2602]: I0213 15:54:28.713880 2602 scope.go:117] "RemoveContainer" containerID="7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc" Feb 13 15:54:28.714056 containerd[1518]: time="2025-02-13T15:54:28.714031145Z" level=error msg="ContainerStatus for \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\": not found" Feb 13 15:54:28.714169 kubelet[2602]: E0213 15:54:28.714147 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\": not found" containerID="7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc" Feb 13 15:54:28.714202 kubelet[2602]: I0213 15:54:28.714174 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc"} err="failed to get container status \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ecdd125266f0da8c0df00ab90336e3511c2ea7397df0502083c646b3e2fc5dc\": not found" Feb 13 15:54:28.714202 kubelet[2602]: I0213 15:54:28.714192 2602 scope.go:117] "RemoveContainer" containerID="74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032" Feb 13 15:54:28.714380 containerd[1518]: time="2025-02-13T15:54:28.714344550Z" level=error msg="ContainerStatus for \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\": not found" Feb 13 15:54:28.714509 kubelet[2602]: E0213 15:54:28.714483 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\": not found" containerID="74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032" 
Feb 13 15:54:28.714557 kubelet[2602]: I0213 15:54:28.714514 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032"} err="failed to get container status \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\": rpc error: code = NotFound desc = an error occurred when try to find container \"74a946cee29f2e42d94b68ff27432ef7ea6a952b6bc323a83610b4cf6a84f032\": not found"
Feb 13 15:54:28.714557 kubelet[2602]: I0213 15:54:28.714539 2602 scope.go:117] "RemoveContainer" containerID="05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487"
Feb 13 15:54:28.714712 containerd[1518]: time="2025-02-13T15:54:28.714686379Z" level=error msg="ContainerStatus for \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\": not found"
Feb 13 15:54:28.714806 kubelet[2602]: E0213 15:54:28.714789 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\": not found" containerID="05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487"
Feb 13 15:54:28.714841 kubelet[2602]: I0213 15:54:28.714810 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487"} err="failed to get container status \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\": rpc error: code = NotFound desc = an error occurred when try to find container \"05b4a4bf766b1ccd5ea2b93f838df140283e00e6e36c481fac830c6b3205a487\": not found"
Feb 13 15:54:28.714841 kubelet[2602]: I0213 15:54:28.714824 2602 scope.go:117] "RemoveContainer" containerID="1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067"
Feb 13 15:54:28.714983 containerd[1518]: time="2025-02-13T15:54:28.714952892Z" level=error msg="ContainerStatus for \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\": not found"
Feb 13 15:54:28.715068 kubelet[2602]: E0213 15:54:28.715046 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\": not found" containerID="1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067"
Feb 13 15:54:28.715108 kubelet[2602]: I0213 15:54:28.715066 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067"} err="failed to get container status \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\": rpc error: code = NotFound desc = an error occurred when try to find container \"1efef9b859986e4be5e40a62d799bc8c07574e5a2cf0171830ef1edd4ee97067\": not found"
Feb 13 15:54:28.715108 kubelet[2602]: I0213 15:54:28.715080 2602 scope.go:117] "RemoveContainer" containerID="e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317"
Feb 13 15:54:28.715239 containerd[1518]: time="2025-02-13T15:54:28.715209456Z" level=error msg="ContainerStatus for \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\": not found"
Feb 13 15:54:28.715354 kubelet[2602]: E0213 15:54:28.715319 2602 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\": not found" containerID="e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317"
Feb 13 15:54:28.715411 kubelet[2602]: I0213 15:54:28.715353 2602 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317"} err="failed to get container status \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9226e09c5bc761a9ebbe3947939f3cb974acee04f6a1b146c6289f64084d317\": not found"
Feb 13 15:54:28.898914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c102b00b07f42c6f5d8be7fb43f0284cfe7170776eff309d8a25c2546edf06db-rootfs.mount: Deactivated successfully.
Feb 13 15:54:28.899027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273-rootfs.mount: Deactivated successfully.
Feb 13 15:54:28.899104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-643e913a621dba2cee74ae0984e9d45e4309bc59a7e3316a91ac9394eb085273-shm.mount: Deactivated successfully.
Feb 13 15:54:28.899184 systemd[1]: var-lib-kubelet-pods-334fa82c\x2de2cd\x2d466b\x2db195\x2d288c1a3f64b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drvjhk.mount: Deactivated successfully.
Feb 13 15:54:28.899262 systemd[1]: var-lib-kubelet-pods-aa885608\x2d4f6a\x2d4364\x2d9f96\x2d0d3b14ef9f90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vctq.mount: Deactivated successfully.
Feb 13 15:54:28.899354 systemd[1]: var-lib-kubelet-pods-aa885608\x2d4f6a\x2d4364\x2d9f96\x2d0d3b14ef9f90-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:54:28.899438 systemd[1]: var-lib-kubelet-pods-aa885608\x2d4f6a\x2d4364\x2d9f96\x2d0d3b14ef9f90-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:54:29.522165 kubelet[2602]: E0213 15:54:29.522119 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:29.570508 kubelet[2602]: E0213 15:54:29.570452 2602 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:54:29.861025 sshd[4247]: Connection closed by 10.0.0.1 port 51626
Feb 13 15:54:29.861537 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:29.874274 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:51626.service: Deactivated successfully.
Feb 13 15:54:29.876368 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:54:29.878105 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:54:29.879495 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:48408.service - OpenSSH per-connection server daemon (10.0.0.1:48408).
Feb 13 15:54:29.880656 systemd-logind[1509]: Removed session 23.
Feb 13 15:54:29.921376 sshd[4407]: Accepted publickey for core from 10.0.0.1 port 48408 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:54:29.922813 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:29.927617 systemd-logind[1509]: New session 24 of user core.
Feb 13 15:54:29.937821 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:54:30.523922 kubelet[2602]: I0213 15:54:30.523878 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="334fa82c-e2cd-466b-b195-288c1a3f64b2" path="/var/lib/kubelet/pods/334fa82c-e2cd-466b-b195-288c1a3f64b2/volumes"
Feb 13 15:54:30.524482 kubelet[2602]: I0213 15:54:30.524459 2602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" path="/var/lib/kubelet/pods/aa885608-4f6a-4364-9f96-0d3b14ef9f90/volumes"
Feb 13 15:54:30.591026 sshd[4410]: Connection closed by 10.0.0.1 port 48408
Feb 13 15:54:30.591415 sshd-session[4407]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:30.603555 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:48408.service: Deactivated successfully.
Feb 13 15:54:30.605833 kubelet[2602]: E0213 15:54:30.605773 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" containerName="cilium-agent"
Feb 13 15:54:30.605833 kubelet[2602]: E0213 15:54:30.605804 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" containerName="mount-cgroup"
Feb 13 15:54:30.605833 kubelet[2602]: E0213 15:54:30.605811 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" containerName="apply-sysctl-overwrites"
Feb 13 15:54:30.605833 kubelet[2602]: E0213 15:54:30.605817 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="334fa82c-e2cd-466b-b195-288c1a3f64b2" containerName="cilium-operator"
Feb 13 15:54:30.605833 kubelet[2602]: E0213 15:54:30.605824 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" containerName="clean-cilium-state"
Feb 13 15:54:30.605833 kubelet[2602]: E0213 15:54:30.605832 2602 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" containerName="mount-bpf-fs"
Feb 13 15:54:30.606001 kubelet[2602]: I0213 15:54:30.605853 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="334fa82c-e2cd-466b-b195-288c1a3f64b2" containerName="cilium-operator"
Feb 13 15:54:30.606001 kubelet[2602]: I0213 15:54:30.605860 2602 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa885608-4f6a-4364-9f96-0d3b14ef9f90" containerName="cilium-agent"
Feb 13 15:54:30.606421 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:54:30.608565 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:54:30.622044 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:48420.service - OpenSSH per-connection server daemon (10.0.0.1:48420).
Feb 13 15:54:30.626906 systemd-logind[1509]: Removed session 24.
Feb 13 15:54:30.632578 systemd[1]: Created slice kubepods-burstable-podf79833dc_5669_4a63_ae3b_1a61ca9772d9.slice - libcontainer container kubepods-burstable-podf79833dc_5669_4a63_ae3b_1a61ca9772d9.slice.
Feb 13 15:54:30.660430 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 48420 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:54:30.662097 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:30.666393 systemd-logind[1509]: New session 25 of user core.
Feb 13 15:54:30.673734 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:54:30.724470 sshd[4425]: Connection closed by 10.0.0.1 port 48420
Feb 13 15:54:30.724831 sshd-session[4421]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:30.741419 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:48420.service: Deactivated successfully.
Feb 13 15:54:30.743455 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:54:30.744922 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:54:30.748812 kubelet[2602]: I0213 15:54:30.748772 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-cilium-cgroup\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.748861 kubelet[2602]: I0213 15:54:30.748828 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-bpf-maps\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.748861 kubelet[2602]: I0213 15:54:30.748856 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f79833dc-5669-4a63-ae3b-1a61ca9772d9-hubble-tls\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.748905 kubelet[2602]: I0213 15:54:30.748877 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-lib-modules\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.748905 kubelet[2602]: I0213 15:54:30.748899 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-cni-path\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.748961 kubelet[2602]: I0213 15:54:30.748923 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-xtables-lock\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.748961 kubelet[2602]: I0213 15:54:30.748942 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-hostproc\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749009 kubelet[2602]: I0213 15:54:30.748978 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-cilium-run\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749009 kubelet[2602]: I0213 15:54:30.749000 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-host-proc-sys-net\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749054 kubelet[2602]: I0213 15:54:30.749025 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f79833dc-5669-4a63-ae3b-1a61ca9772d9-clustermesh-secrets\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749054 kubelet[2602]: I0213 15:54:30.749046 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-etc-cni-netd\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749099 kubelet[2602]: I0213 15:54:30.749062 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f79833dc-5669-4a63-ae3b-1a61ca9772d9-cilium-config-path\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749099 kubelet[2602]: I0213 15:54:30.749081 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f79833dc-5669-4a63-ae3b-1a61ca9772d9-host-proc-sys-kernel\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749099 kubelet[2602]: I0213 15:54:30.749096 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzzq\" (UniqueName: \"kubernetes.io/projected/f79833dc-5669-4a63-ae3b-1a61ca9772d9-kube-api-access-2dzzq\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.749160 kubelet[2602]: I0213 15:54:30.749115 2602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f79833dc-5669-4a63-ae3b-1a61ca9772d9-cilium-ipsec-secrets\") pod \"cilium-kltvd\" (UID: \"f79833dc-5669-4a63-ae3b-1a61ca9772d9\") " pod="kube-system/cilium-kltvd"
Feb 13 15:54:30.753879 systemd[1]: Started sshd@25-10.0.0.150:22-10.0.0.1:48424.service - OpenSSH per-connection server daemon (10.0.0.1:48424).
Feb 13 15:54:30.754969 systemd-logind[1509]: Removed session 25.
Feb 13 15:54:30.794284 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 48424 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20
Feb 13 15:54:30.795912 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:30.800631 systemd-logind[1509]: New session 26 of user core.
Feb 13 15:54:30.811731 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:54:30.935666 kubelet[2602]: E0213 15:54:30.935624 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:30.936220 containerd[1518]: time="2025-02-13T15:54:30.936175532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kltvd,Uid:f79833dc-5669-4a63-ae3b-1a61ca9772d9,Namespace:kube-system,Attempt:0,}"
Feb 13 15:54:30.957097 containerd[1518]: time="2025-02-13T15:54:30.957006068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:54:30.957097 containerd[1518]: time="2025-02-13T15:54:30.957058679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:54:30.957097 containerd[1518]: time="2025-02-13T15:54:30.957074229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:54:30.957219 containerd[1518]: time="2025-02-13T15:54:30.957156868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:54:30.976740 systemd[1]: Started cri-containerd-70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee.scope - libcontainer container 70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee.
Feb 13 15:54:31.000067 containerd[1518]: time="2025-02-13T15:54:30.999732508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kltvd,Uid:f79833dc-5669-4a63-ae3b-1a61ca9772d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\""
Feb 13 15:54:31.000883 kubelet[2602]: E0213 15:54:31.000847 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:31.002762 containerd[1518]: time="2025-02-13T15:54:31.002645612Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:54:31.017655 containerd[1518]: time="2025-02-13T15:54:31.017582914Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c\""
Feb 13 15:54:31.018273 containerd[1518]: time="2025-02-13T15:54:31.018129616Z" level=info msg="StartContainer for \"8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c\""
Feb 13 15:54:31.046728 systemd[1]: Started cri-containerd-8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c.scope - libcontainer container 8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c.
Feb 13 15:54:31.074040 containerd[1518]: time="2025-02-13T15:54:31.073992015Z" level=info msg="StartContainer for \"8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c\" returns successfully"
Feb 13 15:54:31.085867 systemd[1]: cri-containerd-8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c.scope: Deactivated successfully.
Feb 13 15:54:31.117524 containerd[1518]: time="2025-02-13T15:54:31.117451539Z" level=info msg="shim disconnected" id=8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c namespace=k8s.io
Feb 13 15:54:31.117524 containerd[1518]: time="2025-02-13T15:54:31.117510492Z" level=warning msg="cleaning up after shim disconnected" id=8f13ca6d25793a64e8a9502248da5ec1ae716b7993f4a2e9e7c79ba59bdb8c5c namespace=k8s.io
Feb 13 15:54:31.117524 containerd[1518]: time="2025-02-13T15:54:31.117519550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:54:31.683291 kubelet[2602]: E0213 15:54:31.683258 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:31.685235 containerd[1518]: time="2025-02-13T15:54:31.685004942Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:54:31.697687 containerd[1518]: time="2025-02-13T15:54:31.697620587Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8\""
Feb 13 15:54:31.698414 containerd[1518]: time="2025-02-13T15:54:31.698275717Z" level=info msg="StartContainer for \"4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8\""
Feb 13 15:54:31.729721 systemd[1]: Started cri-containerd-4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8.scope - libcontainer container 4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8.
Feb 13 15:54:31.753414 containerd[1518]: time="2025-02-13T15:54:31.753376192Z" level=info msg="StartContainer for \"4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8\" returns successfully"
Feb 13 15:54:31.759427 systemd[1]: cri-containerd-4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8.scope: Deactivated successfully.
Feb 13 15:54:31.786086 containerd[1518]: time="2025-02-13T15:54:31.786020812Z" level=info msg="shim disconnected" id=4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8 namespace=k8s.io
Feb 13 15:54:31.786086 containerd[1518]: time="2025-02-13T15:54:31.786080808Z" level=warning msg="cleaning up after shim disconnected" id=4957b78ba4eaa58cfbfe6de32558838ee9329f5eb2cd4c9d96812b1c57b2cfb8 namespace=k8s.io
Feb 13 15:54:31.786086 containerd[1518]: time="2025-02-13T15:54:31.786090337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:54:32.687077 kubelet[2602]: E0213 15:54:32.687044 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:32.689393 containerd[1518]: time="2025-02-13T15:54:32.689336685Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:54:32.712963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339131445.mount: Deactivated successfully.
Feb 13 15:54:32.714339 containerd[1518]: time="2025-02-13T15:54:32.714282345Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c\""
Feb 13 15:54:32.714822 containerd[1518]: time="2025-02-13T15:54:32.714789449Z" level=info msg="StartContainer for \"6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c\""
Feb 13 15:54:32.744725 systemd[1]: Started cri-containerd-6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c.scope - libcontainer container 6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c.
Feb 13 15:54:32.773724 containerd[1518]: time="2025-02-13T15:54:32.773672133Z" level=info msg="StartContainer for \"6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c\" returns successfully"
Feb 13 15:54:32.776130 systemd[1]: cri-containerd-6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c.scope: Deactivated successfully.
Feb 13 15:54:32.801379 containerd[1518]: time="2025-02-13T15:54:32.801301732Z" level=info msg="shim disconnected" id=6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c namespace=k8s.io
Feb 13 15:54:32.801379 containerd[1518]: time="2025-02-13T15:54:32.801360585Z" level=warning msg="cleaning up after shim disconnected" id=6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c namespace=k8s.io
Feb 13 15:54:32.801379 containerd[1518]: time="2025-02-13T15:54:32.801380764Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:54:32.854658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6560e84fbe608dfcff40cf5bea457fb09b85257c62fc0bb05f6ef1e06c9aa27c-rootfs.mount: Deactivated successfully.
Feb 13 15:54:33.690127 kubelet[2602]: E0213 15:54:33.690097 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:33.691502 containerd[1518]: time="2025-02-13T15:54:33.691399064Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:54:33.703672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106458917.mount: Deactivated successfully.
Feb 13 15:54:33.719504 containerd[1518]: time="2025-02-13T15:54:33.719464225Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f\""
Feb 13 15:54:33.720032 containerd[1518]: time="2025-02-13T15:54:33.719994433Z" level=info msg="StartContainer for \"c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f\""
Feb 13 15:54:33.746726 systemd[1]: Started cri-containerd-c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f.scope - libcontainer container c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f.
Feb 13 15:54:33.769002 systemd[1]: cri-containerd-c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f.scope: Deactivated successfully.
Feb 13 15:54:33.771359 containerd[1518]: time="2025-02-13T15:54:33.771330647Z" level=info msg="StartContainer for \"c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f\" returns successfully"
Feb 13 15:54:33.794381 containerd[1518]: time="2025-02-13T15:54:33.794314958Z" level=info msg="shim disconnected" id=c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f namespace=k8s.io
Feb 13 15:54:33.794381 containerd[1518]: time="2025-02-13T15:54:33.794369032Z" level=warning msg="cleaning up after shim disconnected" id=c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f namespace=k8s.io
Feb 13 15:54:33.794381 containerd[1518]: time="2025-02-13T15:54:33.794377869Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:54:33.854798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c586b43016cf7640143025c5e59b9bb55e577a303df36966ed68cfef8436f50f-rootfs.mount: Deactivated successfully.
Feb 13 15:54:34.571473 kubelet[2602]: E0213 15:54:34.571431 2602 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:54:34.693092 kubelet[2602]: E0213 15:54:34.693063 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:34.694461 containerd[1518]: time="2025-02-13T15:54:34.694409763Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:54:34.710266 containerd[1518]: time="2025-02-13T15:54:34.710221076Z" level=info msg="CreateContainer within sandbox \"70be7505a4dc68f26bc536fb57581d8c70c84f442dd38249c221da46f22751ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15c3d080fdd6de90df76d33866e6d306817f53d2d9a954b4fd541cf2db814fa7\""
Feb 13 15:54:34.710725 containerd[1518]: time="2025-02-13T15:54:34.710695466Z" level=info msg="StartContainer for \"15c3d080fdd6de90df76d33866e6d306817f53d2d9a954b4fd541cf2db814fa7\""
Feb 13 15:54:34.740713 systemd[1]: Started cri-containerd-15c3d080fdd6de90df76d33866e6d306817f53d2d9a954b4fd541cf2db814fa7.scope - libcontainer container 15c3d080fdd6de90df76d33866e6d306817f53d2d9a954b4fd541cf2db814fa7.
Feb 13 15:54:34.769375 containerd[1518]: time="2025-02-13T15:54:34.769323666Z" level=info msg="StartContainer for \"15c3d080fdd6de90df76d33866e6d306817f53d2d9a954b4fd541cf2db814fa7\" returns successfully"
Feb 13 15:54:35.174631 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:54:35.521840 kubelet[2602]: E0213 15:54:35.521820 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:35.696797 kubelet[2602]: E0213 15:54:35.696771 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:35.708096 kubelet[2602]: I0213 15:54:35.708023 2602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kltvd" podStartSLOduration=5.70800753 podStartE2EDuration="5.70800753s" podCreationTimestamp="2025-02-13 15:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:54:35.707681896 +0000 UTC m=+81.446154190" watchObservedRunningTime="2025-02-13 15:54:35.70800753 +0000 UTC m=+81.446479824"
Feb 13 15:54:36.250267 kubelet[2602]: I0213 15:54:36.250207 2602 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:54:36Z","lastTransitionTime":"2025-02-13T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:54:36.937051 kubelet[2602]: E0213 15:54:36.936972 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:37.034489 systemd[1]: run-containerd-runc-k8s.io-15c3d080fdd6de90df76d33866e6d306817f53d2d9a954b4fd541cf2db814fa7-runc.UHNaVu.mount: Deactivated successfully.
Feb 13 15:54:38.201498 systemd-networkd[1449]: lxc_health: Link UP
Feb 13 15:54:38.201957 systemd-networkd[1449]: lxc_health: Gained carrier
Feb 13 15:54:38.938201 kubelet[2602]: E0213 15:54:38.937048 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:39.223782 systemd-networkd[1449]: lxc_health: Gained IPv6LL
Feb 13 15:54:39.704105 kubelet[2602]: E0213 15:54:39.704069 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:40.705295 kubelet[2602]: E0213 15:54:40.705257 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:41.521909 kubelet[2602]: E0213 15:54:41.521853 2602 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:54:43.373658 sshd[4434]: Connection closed by 10.0.0.1 port 48424
Feb 13 15:54:43.374552 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:43.379065 systemd[1]: sshd@25-10.0.0.150:22-10.0.0.1:48424.service: Deactivated successfully.
Feb 13 15:54:43.381275 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:54:43.383066 systemd-logind[1509]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:54:43.384363 systemd-logind[1509]: Removed session 26.