Feb 13 15:41:14.996885 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025
Feb 13 15:41:14.996915 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:41:14.996930 kernel: BIOS-provided physical RAM map:
Feb 13 15:41:14.996940 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:41:14.996948 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:41:14.996956 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:41:14.996967 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:41:14.996976 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:41:14.996985 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:41:14.996993 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:41:14.997002 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:41:14.997014 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:41:14.997023 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:41:14.997032 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:41:14.997043 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:41:14.997053 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:41:14.997073 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:41:14.997083 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:41:14.997093 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:41:14.997102 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:41:14.997111 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:41:14.997121 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:41:14.997130 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:41:14.997139 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:41:14.997148 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:41:14.997158 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:41:14.997167 kernel: NX (Execute Disable) protection: active
Feb 13 15:41:14.997182 kernel: APIC: Static calls initialized
Feb 13 15:41:14.997192 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:41:14.997204 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:41:14.997213 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:41:14.997222 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:41:14.997231 kernel: extended physical RAM map:
Feb 13 15:41:14.997240 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:41:14.997250 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:41:14.997259 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:41:14.997269 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:41:14.997278 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:41:14.997287 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:41:14.997299 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:41:14.997313 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:41:14.997323 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:41:14.997332 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:41:14.997342 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:41:14.997351 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:41:14.997364 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:41:14.997373 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:41:14.997383 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:41:14.997392 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:41:14.997402 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:41:14.997412 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:41:14.997421 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:41:14.997431 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:41:14.997441 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:41:14.997453 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:41:14.997463 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:41:14.997472 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:41:14.997482 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:41:14.997513 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:41:14.997523 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:41:14.997532 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:41:14.997542 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:41:14.997551 kernel: random: crng init done
Feb 13 15:41:14.997561 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:41:14.997570 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:41:14.997579 kernel: secureboot: Secure boot disabled
Feb 13 15:41:14.997592 kernel: SMBIOS 2.8 present.
Feb 13 15:41:14.997613 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:41:14.997623 kernel: Hypervisor detected: KVM
Feb 13 15:41:14.997632 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:41:14.997641 kernel: kvm-clock: using sched offset of 3109116281 cycles
Feb 13 15:41:14.997651 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:41:14.997661 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 15:41:14.997671 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:41:14.997681 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:41:14.997691 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:41:14.997704 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:41:14.997714 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:41:14.997724 kernel: Using GB pages for direct mapping
Feb 13 15:41:14.997734 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:41:14.997744 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:41:14.997754 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:41:14.997763 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:14.997773 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:14.997783 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:41:14.997795 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:14.997805 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:14.997815 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:14.997825 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:41:14.997834 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:41:14.997844 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:41:14.997854 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:41:14.997864 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:41:14.997873 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:41:14.997885 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:41:14.997895 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:41:14.997905 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:41:14.997914 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:41:14.997924 kernel: No NUMA configuration found
Feb 13 15:41:14.997934 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:41:14.997943 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:41:14.997953 kernel: Zone ranges:
Feb 13 15:41:14.997963 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:41:14.997975 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:41:14.997984 kernel: Normal empty
Feb 13 15:41:14.997994 kernel: Movable zone start for each node
Feb 13 15:41:14.998004 kernel: Early memory node ranges
Feb 13 15:41:14.998013 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:41:14.998023 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:41:14.998033 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:41:14.998042 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:41:14.998052 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:41:14.998062 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:41:14.998082 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:41:14.998091 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:41:14.998101 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:41:14.998111 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:41:14.998121 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:41:14.998158 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:41:14.998171 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:41:14.998182 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:41:14.998192 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:41:14.998203 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:41:14.998213 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:41:14.998224 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:41:14.998238 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:41:14.998248 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:41:14.998259 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:41:14.998270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:41:14.998280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:41:14.998294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:41:14.998305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:41:14.998315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:41:14.998326 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:41:14.998336 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:41:14.998347 kernel: TSC deadline timer available
Feb 13 15:41:14.998357 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:41:14.998368 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:41:14.998379 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:41:14.998392 kernel: kvm-guest: setup PV sched yield
Feb 13 15:41:14.998403 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:41:14.998414 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:41:14.998425 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:41:14.998436 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:41:14.998446 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:41:14.998457 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:41:14.998467 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:41:14.998478 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:41:14.998492 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:41:14.998504 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:41:14.998515 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:41:14.998526 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:41:14.998537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:41:14.998547 kernel: Fallback order for Node 0: 0
Feb 13 15:41:14.998558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:41:14.998568 kernel: Policy zone: DMA32
Feb 13 15:41:14.998582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:41:14.998593 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved)
Feb 13 15:41:14.998739 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:41:14.998750 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 15:41:14.998761 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:41:14.998771 kernel: Dynamic Preempt: voluntary
Feb 13 15:41:14.998782 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:41:14.998793 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:41:14.998804 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:41:14.998819 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:41:14.998834 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:41:14.998845 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:41:14.998855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:41:14.998866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:41:14.998876 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:41:14.998887 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:41:14.998898 kernel: Console: colour dummy device 80x25
Feb 13 15:41:14.998908 kernel: printk: console [ttyS0] enabled
Feb 13 15:41:14.998919 kernel: ACPI: Core revision 20230628
Feb 13 15:41:14.998933 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:41:14.998944 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:41:14.998955 kernel: x2apic enabled
Feb 13 15:41:14.998966 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:41:14.998977 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:41:14.998987 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:41:14.998998 kernel: kvm-guest: setup PV IPIs
Feb 13 15:41:14.999009 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:41:14.999019 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:41:14.999033 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 15:41:14.999044 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:41:14.999055 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:41:14.999073 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:41:14.999084 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:41:14.999095 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:41:14.999106 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:41:14.999116 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:41:14.999131 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:41:14.999141 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:41:14.999152 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:41:14.999162 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:41:14.999172 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:41:14.999183 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:41:14.999194 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:41:14.999204 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:41:14.999214 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:41:14.999228 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:41:14.999238 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:41:14.999249 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:41:14.999261 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:41:14.999271 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:41:14.999284 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:41:14.999296 kernel: landlock: Up and running.
Feb 13 15:41:14.999307 kernel: SELinux: Initializing.
Feb 13 15:41:14.999317 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:41:14.999330 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:41:14.999340 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:41:14.999351 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:41:14.999361 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:41:14.999372 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:41:14.999382 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:41:14.999392 kernel: ... version: 0
Feb 13 15:41:14.999402 kernel: ... bit width: 48
Feb 13 15:41:14.999415 kernel: ... generic registers: 6
Feb 13 15:41:14.999425 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:41:14.999435 kernel: ... max period: 00007fffffffffff
Feb 13 15:41:14.999445 kernel: ... fixed-purpose events: 0
Feb 13 15:41:14.999456 kernel: ... event mask: 000000000000003f
Feb 13 15:41:14.999466 kernel: signal: max sigframe size: 1776
Feb 13 15:41:14.999476 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:41:14.999486 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:41:14.999497 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:41:14.999507 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:41:14.999519 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:41:14.999530 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:41:14.999540 kernel: smpboot: Max logical packages: 1
Feb 13 15:41:14.999550 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 15:41:14.999560 kernel: devtmpfs: initialized
Feb 13 15:41:14.999571 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:41:14.999581 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:41:14.999591 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:41:14.999614 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:41:14.999628 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:41:14.999638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:41:14.999649 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:41:14.999659 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:41:14.999670 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:41:14.999680 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:41:14.999690 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:41:14.999700 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:41:14.999711 kernel: audit: type=2000 audit(1739461274.999:1): state=initialized audit_enabled=0 res=1
Feb 13 15:41:14.999724 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:41:14.999734 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:41:14.999744 kernel: cpuidle: using governor menu
Feb 13 15:41:14.999754 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:41:14.999765 kernel: dca service started, version 1.12.1
Feb 13 15:41:14.999775 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:41:14.999786 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:41:14.999796 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:41:14.999809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:41:14.999819 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:41:14.999830 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:41:14.999840 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:41:14.999850 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:41:14.999860 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:41:14.999870 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:41:14.999880 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:41:14.999891 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:41:14.999904 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:41:14.999914 kernel: ACPI: Interpreter enabled
Feb 13 15:41:14.999925 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:41:14.999935 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:41:14.999945 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:41:14.999956 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:41:14.999966 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:41:14.999976 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:41:15.000196 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:41:15.000358 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:41:15.000502 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:41:15.000515 kernel: PCI host bridge to bus 0000:00
Feb 13 15:41:15.000742 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:41:15.000919 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:41:15.001063 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:41:15.001223 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:41:15.001360 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:41:15.001496 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:41:15.001651 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:41:15.001824 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:41:15.001997 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:41:15.002163 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:41:15.002329 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:41:15.002491 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:41:15.002667 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:41:15.002830 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:41:15.003001 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:41:15.003177 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:41:15.003342 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:41:15.003512 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:41:15.003764 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:41:15.003950 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:41:15.004152 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:41:15.004313 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:41:15.004484 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:41:15.004669 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:41:15.004829 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:41:15.004989 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:41:15.005158 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:41:15.005323 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:41:15.005478 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:41:15.005678 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:41:15.005863 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:41:15.006023 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:41:15.006204 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:41:15.006369 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:41:15.006385 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:41:15.006397 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:41:15.006408 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:41:15.006419 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:41:15.006435 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:41:15.006446 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:41:15.006458 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:41:15.006469 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:41:15.006480 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:41:15.006491 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:41:15.006503 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:41:15.006515 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:41:15.006525 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:41:15.006540 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:41:15.006551 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:41:15.006562 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:41:15.006573 kernel: iommu: Default domain type: Translated
Feb 13 15:41:15.006585 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:41:15.006610 kernel: efivars: Registered efivars operations
Feb 13 15:41:15.006622 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:41:15.006634 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:41:15.006646 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:41:15.006660 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:41:15.006671 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:41:15.006682 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:41:15.006693 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:41:15.006704 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:41:15.006716 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:41:15.006727 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:41:15.006927 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:41:15.007104 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:41:15.007266 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:41:15.007281 kernel: vgaarb: loaded
Feb 13 15:41:15.007292 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:41:15.007304 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:41:15.007315 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:41:15.007326 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:41:15.007338 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:41:15.007349 kernel: pnp: PnP ACPI init
Feb 13 15:41:15.007530 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:41:15.007547 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:41:15.007558 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:41:15.007570 kernel: NET: Registered PF_INET protocol family
Feb 13 15:41:15.007617 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:41:15.007632 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:41:15.007644 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:41:15.007656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:41:15.007670 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:41:15.007682 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:41:15.007693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:41:15.007705 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:41:15.007716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:41:15.007728 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:41:15.007890 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:41:15.008050 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:41:15.008211 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:41:15.008356 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:41:15.008502 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:41:15.008661 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:41:15.008808 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:41:15.008951 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:41:15.008971 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:41:15.008983 kernel: Initialise system trusted keyrings
Feb 13 15:41:15.008997 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:41:15.009009 kernel: Key type asymmetric registered
Feb 13 15:41:15.009020 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:41:15.009031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:41:15.009041 kernel: io scheduler mq-deadline registered
Feb 13 15:41:15.009053 kernel: io scheduler kyber registered
Feb 13 15:41:15.009073 kernel: io scheduler bfq registered
Feb 13 15:41:15.009085 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:41:15.009097 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:41:15.009109 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:41:15.009125 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:41:15.009137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:41:15.009149 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:41:15.009161 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:41:15.009172 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:41:15.009189 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:41:15.009201 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:41:15.009365 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:41:15.009513 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:41:15.009728 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:41:14 UTC (1739461274)
Feb 13 15:41:15.009874 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:41:15.009890 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:41:15.009902 kernel: efifb: probing for efifb
Feb 13 15:41:15.009919 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:41:15.009930 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:41:15.009942 kernel: efifb: scrolling: redraw
Feb 13 15:41:15.009953 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:41:15.009965 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:41:15.009977 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:41:15.009989 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:41:15.010001 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:41:15.010012 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:41:15.010027 kernel: Segment Routing with IPv6
Feb 13 15:41:15.010038 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:41:15.010050 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:41:15.010061 kernel: Key type dns_resolver registered
Feb 13 15:41:15.010083 kernel: IPI shorthand broadcast: enabled
Feb 13 15:41:15.010095 kernel: sched_clock: Marking stable (597002625, 293482329)->(1085673016, -195188062)
Feb 13 15:41:15.010106 kernel: registered taskstats version 1
Feb 13 15:41:15.010117 kernel: Loading compiled-in X.509 certificates
Feb 13 15:41:15.010129 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21'
Feb 13 15:41:15.010144 kernel: Key type .fscrypt registered
Feb 13 15:41:15.010155 kernel: Key type fscrypt-provisioning registered
Feb 13 15:41:15.010167 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:41:15.010178 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:41:15.010190 kernel: ima: No architecture policies found
Feb 13 15:41:15.010204 kernel: clk: Disabling unused clocks
Feb 13 15:41:15.010217 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 15:41:15.010229 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:41:15.010241 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 15:41:15.010256 kernel: Run /init as init process
Feb 13 15:41:15.010267 kernel: with arguments:
Feb 13 15:41:15.010279 kernel: /init
Feb 13 15:41:15.010313 kernel: with environment:
Feb 13 15:41:15.010324 kernel: HOME=/
Feb 13 15:41:15.010335 kernel: TERM=linux
Feb 13 15:41:15.010346 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:41:15.010361 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:41:15.010376 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:41:15.010392 systemd[1]: Detected virtualization kvm.
Feb 13 15:41:15.010404 systemd[1]: Detected architecture x86-64.
Feb 13 15:41:15.010415 systemd[1]: Running in initrd.
Feb 13 15:41:15.010427 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:41:15.010439 systemd[1]: Hostname set to .
Feb 13 15:41:15.010451 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:41:15.010462 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:41:15.010478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:41:15.010490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:41:15.010503 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:41:15.010515 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:41:15.010527 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:41:15.010540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:41:15.010554 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:41:15.010569 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:41:15.010581 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:41:15.010593 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:41:15.010619 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:41:15.010631 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:41:15.010643 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:41:15.010655 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:41:15.010667 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:41:15.010682 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:41:15.010694 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:41:15.010706 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:41:15.010718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:41:15.010730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:41:15.010742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:41:15.010754 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:41:15.010765 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:41:15.010777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:41:15.010792 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:41:15.010804 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:41:15.010816 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:41:15.010827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:41:15.010839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:41:15.010851 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:41:15.010894 systemd-journald[192]: Collecting audit messages is disabled.
Feb 13 15:41:15.010928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:41:15.010941 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:41:15.010957 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:41:15.010969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:15.010981 systemd-journald[192]: Journal started
Feb 13 15:41:15.011006 systemd-journald[192]: Runtime Journal (/run/log/journal/6cb76ae90b2b46f29b7316f36e9177b1) is 6M, max 48.2M, 42.2M free.
Feb 13 15:41:15.004172 systemd-modules-load[194]: Inserted module 'overlay'
Feb 13 15:41:15.013791 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:41:15.014970 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:41:15.025781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:41:15.026837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:41:15.047697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:41:15.058060 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:41:15.063766 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:41:15.070148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:41:15.088626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:41:15.090371 systemd-modules-load[194]: Inserted module 'br_netfilter'
Feb 13 15:41:15.091429 kernel: Bridge firewalling registered
Feb 13 15:41:15.093944 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:41:15.096434 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:41:15.100437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:41:15.106459 dracut-cmdline[225]: dracut-dracut-053
Feb 13 15:41:15.110134 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:41:15.119798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:41:15.139782 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:41:15.176531 systemd-resolved[264]: Positive Trust Anchors:
Feb 13 15:41:15.176549 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:41:15.176579 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:41:15.179036 systemd-resolved[264]: Defaulting to hostname 'linux'.
Feb 13 15:41:15.180073 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:41:15.187278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:41:15.205639 kernel: SCSI subsystem initialized
Feb 13 15:41:15.214629 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:41:15.225630 kernel: iscsi: registered transport (tcp)
Feb 13 15:41:15.249623 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:41:15.249646 kernel: QLogic iSCSI HBA Driver
Feb 13 15:41:15.305829 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:41:15.336753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:41:15.382617 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:41:15.382641 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:41:15.384618 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:41:15.425634 kernel: raid6: avx2x4 gen() 30549 MB/s
Feb 13 15:41:15.442633 kernel: raid6: avx2x2 gen() 30278 MB/s
Feb 13 15:41:15.468213 kernel: raid6: avx2x1 gen() 23729 MB/s
Feb 13 15:41:15.468257 kernel: raid6: using algorithm avx2x4 gen() 30549 MB/s
Feb 13 15:41:15.485704 kernel: raid6: .... xor() 8130 MB/s, rmw enabled
Feb 13 15:41:15.485781 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:41:15.507644 kernel: xor: automatically using best checksumming function avx
Feb 13 15:41:15.656646 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:41:15.671243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:41:15.683732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:41:15.726017 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Feb 13 15:41:15.732960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:41:15.773773 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:41:15.789550 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Feb 13 15:41:15.825054 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:41:15.837767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:41:15.902795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:41:15.929952 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:41:15.940591 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:41:15.976490 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 15:41:16.016652 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:41:16.016671 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:41:16.016823 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:41:16.016835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:41:16.016846 kernel: GPT:9289727 != 19775487
Feb 13 15:41:16.016856 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:41:16.016866 kernel: GPT:9289727 != 19775487
Feb 13 15:41:16.016886 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:41:16.016895 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:41:16.016905 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:41:15.973668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:41:15.976846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:41:15.978456 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:41:15.989816 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:41:16.010675 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:41:16.010824 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:41:16.023366 kernel: libata version 3.00 loaded.
Feb 13 15:41:16.025810 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:41:16.035236 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:41:16.035737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:16.038757 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:41:16.071942 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 15:41:16.118894 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 15:41:16.118913 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 15:41:16.119080 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 15:41:16.119221 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463)
Feb 13 15:41:16.119232 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (475)
Feb 13 15:41:16.119244 kernel: scsi host0: ahci
Feb 13 15:41:16.119419 kernel: scsi host1: ahci
Feb 13 15:41:16.119572 kernel: scsi host2: ahci
Feb 13 15:41:16.119756 kernel: scsi host3: ahci
Feb 13 15:41:16.119904 kernel: scsi host4: ahci
Feb 13 15:41:16.120060 kernel: scsi host5: ahci
Feb 13 15:41:16.120207 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Feb 13 15:41:16.120219 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Feb 13 15:41:16.120234 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Feb 13 15:41:16.120245 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Feb 13 15:41:16.120265 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Feb 13 15:41:16.120276 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Feb 13 15:41:16.073067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:41:16.078434 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:41:16.093360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:16.117657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:41:16.154814 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:41:16.184413 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:41:16.187738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:41:16.199294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:41:16.216729 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:41:16.219075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:41:16.219132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:16.222528 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:41:16.225360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:41:16.227730 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:41:16.240150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:41:16.266730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:41:16.296404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:41:16.426772 disk-uuid[569]: Primary Header is updated.
Feb 13 15:41:16.426772 disk-uuid[569]: Secondary Entries is updated.
Feb 13 15:41:16.426772 disk-uuid[569]: Secondary Header is updated.
Feb 13 15:41:16.446876 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 15:41:16.446904 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 15:41:16.446915 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 15:41:16.446925 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 15:41:16.446945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:41:16.448655 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 15:41:16.450654 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 15:41:16.450676 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 15:41:16.452497 kernel: ata3.00: applying bridge limits
Feb 13 15:41:16.452517 kernel: ata3.00: configured for UDMA/100
Feb 13 15:41:16.455686 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 15:41:16.501632 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 15:41:16.527515 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 15:41:16.527536 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 15:41:17.464625 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:41:17.464839 disk-uuid[585]: The operation has completed successfully.
Feb 13 15:41:17.499212 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:41:17.499358 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:41:17.545729 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:41:17.549055 sh[600]: Success
Feb 13 15:41:17.562622 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 15:41:17.598934 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:41:17.611133 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:41:17.625039 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:41:17.636185 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85
Feb 13 15:41:17.636221 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:41:17.636235 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:41:17.637968 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:41:17.637984 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:41:17.642961 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:41:17.643711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:41:17.651761 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:41:17.660450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:41:17.666492 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:41:17.666512 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:41:17.666525 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:41:17.669620 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:41:17.678953 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:41:17.680687 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:17.767682 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:41:17.779796 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:41:17.781176 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:41:17.784657 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:41:17.816854 systemd-networkd[779]: lo: Link UP Feb 13 15:41:17.816866 systemd-networkd[779]: lo: Gained carrier Feb 13 15:41:17.818562 systemd-networkd[779]: Enumeration completed Feb 13 15:41:17.818922 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:41:17.818926 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:41:17.819704 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:41:17.829040 systemd-networkd[779]: eth0: Link UP Feb 13 15:41:17.829045 systemd-networkd[779]: eth0: Gained carrier Feb 13 15:41:17.829051 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:41:17.835869 systemd[1]: Reached target network.target - Network. Feb 13 15:41:17.848688 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:41:17.850542 ignition[782]: Ignition 2.20.0 Feb 13 15:41:17.850557 ignition[782]: Stage: fetch-offline Feb 13 15:41:17.850630 ignition[782]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:17.850648 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:17.850770 ignition[782]: parsed url from cmdline: "" Feb 13 15:41:17.850775 ignition[782]: no config URL provided Feb 13 15:41:17.850782 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:41:17.850794 ignition[782]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:41:17.850822 ignition[782]: op(1): [started] loading QEMU firmware config module Feb 13 15:41:17.850829 ignition[782]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:41:17.858674 ignition[782]: op(1): [finished] loading QEMU firmware config module Feb 13 15:41:17.898615 ignition[782]: parsing config with SHA512: 47b523829500c32e3bffcc2b35483c1521bb678b88d4950d161ad5d5d17b9813520aee40778b3ca535de9f1179cef02eeaa544b904e79d1e49d8e2d4944d5160 Feb 13 15:41:17.904202 unknown[782]: fetched base config from "system" Feb 13 15:41:17.904219 unknown[782]: fetched user config from "qemu" Feb 13 15:41:17.905538 ignition[782]: fetch-offline: fetch-offline passed Feb 13 15:41:17.905702 ignition[782]: Ignition finished successfully Feb 13 15:41:17.910855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:41:17.912404 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:41:17.919891 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:41:17.936218 ignition[794]: Ignition 2.20.0 Feb 13 15:41:17.936229 ignition[794]: Stage: kargs Feb 13 15:41:17.936430 ignition[794]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:17.936445 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:17.938083 ignition[794]: kargs: kargs passed Feb 13 15:41:17.938133 ignition[794]: Ignition finished successfully Feb 13 15:41:17.944516 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:41:17.957734 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:41:17.972616 ignition[803]: Ignition 2.20.0 Feb 13 15:41:17.972631 ignition[803]: Stage: disks Feb 13 15:41:17.972853 ignition[803]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:17.972871 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:17.974000 ignition[803]: disks: disks passed Feb 13 15:41:17.974060 ignition[803]: Ignition finished successfully Feb 13 15:41:17.979502 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:41:17.981687 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:41:17.981757 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:41:17.985205 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:41:17.985407 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:41:17.985917 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:41:18.001749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:41:18.024381 systemd-fsck[813]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:41:18.159806 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:41:18.641732 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:41:18.748640 kernel: EXT4-fs (vda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none. Feb 13 15:41:18.749123 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:41:18.749955 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:41:18.762707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:41:18.765570 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:41:18.766001 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:41:18.766054 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:41:18.774818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (821) Feb 13 15:41:18.774836 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:18.766087 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:41:18.780798 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:41:18.780815 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:41:18.780828 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:41:18.772945 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:41:18.780808 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:41:18.783459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:41:18.820415 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:41:18.825918 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:41:18.830132 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:41:18.835128 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:41:18.924346 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:41:18.938682 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:41:18.940492 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:41:18.946628 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:18.973117 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:41:18.984409 ignition[937]: INFO : Ignition 2.20.0 Feb 13 15:41:18.984409 ignition[937]: INFO : Stage: mount Feb 13 15:41:18.986268 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:18.986268 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:18.988892 ignition[937]: INFO : mount: mount passed Feb 13 15:41:18.989662 ignition[937]: INFO : Ignition finished successfully Feb 13 15:41:18.992570 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:41:19.006795 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:41:19.246811 systemd-networkd[779]: eth0: Gained IPv6LL Feb 13 15:41:19.633539 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:41:19.645783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:41:19.656002 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (948) Feb 13 15:41:19.656036 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:41:19.656048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:41:19.657707 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:41:19.660624 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:41:19.661872 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:41:19.685416 ignition[965]: INFO : Ignition 2.20.0 Feb 13 15:41:19.685416 ignition[965]: INFO : Stage: files Feb 13 15:41:19.687099 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:19.687099 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:19.687099 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:41:19.690881 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:41:19.690881 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:41:19.695045 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:41:19.696561 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:41:19.698278 unknown[965]: wrote ssh authorized keys file for user: core Feb 13 15:41:19.699420 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:41:19.701069 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:41:19.703255 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:41:19.738556 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:41:19.920779 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:41:19.920779 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:41:19.924552 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:41:20.293122 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:41:20.381647 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:41:20.383765 ignition[965]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:41:20.383765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 15:41:20.655428 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:41:20.933746 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:41:20.933746 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 15:41:20.938499 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:41:20.956682 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:41:20.960474 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:41:20.962148 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:41:20.962148 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:41:20.962148 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:41:20.962148 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:41:20.962148 ignition[965]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:41:20.962148 ignition[965]: INFO : files: files passed Feb 13 15:41:20.962148 ignition[965]: INFO : Ignition finished successfully Feb 13 15:41:20.963498 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:41:20.972751 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:41:20.974883 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:41:20.976641 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:41:20.976747 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:41:20.984852 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:41:20.986313 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:41:20.986313 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:41:20.990800 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:41:20.988953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:41:20.991337 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:41:21.003720 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:41:21.025800 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:41:21.025926 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:41:21.028261 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:41:21.030367 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:41:21.030473 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:41:21.043710 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:41:21.056489 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:41:21.070719 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:41:21.080967 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:41:21.082241 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:41:21.084508 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:41:21.086556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:41:21.086685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:41:21.089016 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:41:21.090629 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:41:21.092662 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:41:21.095921 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:41:21.097988 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:41:21.100187 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:41:21.102339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:41:21.104658 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:41:21.106693 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:41:21.108995 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:41:21.110800 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:41:21.110920 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:41:21.113254 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:41:21.114726 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:41:21.116848 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:41:21.116949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:41:21.119109 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:41:21.119218 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:41:21.121626 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:41:21.121734 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:41:21.123611 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:41:21.125359 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:41:21.128678 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:41:21.130066 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:41:21.131979 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:41:21.134063 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:41:21.134164 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:41:21.135915 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:41:21.136004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:41:21.138020 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:41:21.138137 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:41:21.140690 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:41:21.140798 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:41:21.149736 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:41:21.152134 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:41:21.153278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:41:21.153396 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:41:21.155696 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:41:21.155879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:41:21.162203 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 15:41:21.165326 ignition[1020]: INFO : Ignition 2.20.0 Feb 13 15:41:21.165326 ignition[1020]: INFO : Stage: umount Feb 13 15:41:21.165326 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:41:21.165326 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:41:21.165326 ignition[1020]: INFO : umount: umount passed Feb 13 15:41:21.165326 ignition[1020]: INFO : Ignition finished successfully Feb 13 15:41:21.162319 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:41:21.166816 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:41:21.166944 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:41:21.169214 systemd[1]: Stopped target network.target - Network. Feb 13 15:41:21.170781 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:41:21.170843 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:41:21.173048 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:41:21.173094 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:41:21.174883 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:41:21.174941 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:41:21.176917 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:41:21.176965 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:41:21.179608 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:41:21.181657 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:41:21.192200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:41:21.197263 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:41:21.197388 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:41:21.208587 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:41:21.208898 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:41:21.209018 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:41:21.212098 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:41:21.212776 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:41:21.212831 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:41:21.222671 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:41:21.224271 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:41:21.224330 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:41:21.226751 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:41:21.226803 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:41:21.229230 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:41:21.229287 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:41:21.231304 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:41:21.231353 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:41:21.233641 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 15:41:21.237217 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:41:21.237284 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:41:21.249160 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:41:21.250206 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:41:21.257461 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:41:21.257652 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:41:21.261105 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:41:21.261160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:41:21.283676 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:41:21.283719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:41:21.284727 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:41:21.284775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:41:21.289510 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:41:21.289563 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:41:21.292337 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:41:21.292390 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:41:21.307753 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:41:21.308899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:41:21.308956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:41:21.312393 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:41:21.312441 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:41:21.313691 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:41:21.313738 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:41:21.315918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:41:21.315964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:41:21.321427 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:41:21.321494 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:41:21.325675 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:41:21.325801 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:41:21.621739 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:41:21.622791 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:41:21.625247 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:41:21.627334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:41:21.628354 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:41:21.645858 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:41:21.654535 systemd[1]: Switching root. 
Feb 13 15:41:21.689125 systemd-journald[192]: Journal stopped Feb 13 15:41:23.328162 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Feb 13 15:41:23.328223 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:41:23.328237 kernel: SELinux: policy capability open_perms=1 Feb 13 15:41:23.328248 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:41:23.328260 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:41:23.328272 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:41:23.328287 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:41:23.328298 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:41:23.328310 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:41:23.328321 kernel: audit: type=1403 audit(1739461282.515:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:41:23.328334 systemd[1]: Successfully loaded SELinux policy in 40.219ms. Feb 13 15:41:23.328363 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.233ms. Feb 13 15:41:23.328377 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:41:23.328389 systemd[1]: Detected virtualization kvm. Feb 13 15:41:23.328402 systemd[1]: Detected architecture x86-64. Feb 13 15:41:23.328416 systemd[1]: Detected first boot. Feb 13 15:41:23.328433 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:41:23.328445 zram_generator::config[1067]: No configuration found. Feb 13 15:41:23.328458 kernel: Guest personality initialized and is inactive Feb 13 15:41:23.328471 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 15:41:23.328483 kernel: Initialized host personality Feb 13 15:41:23.328494 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:41:23.328506 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:41:23.328521 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:41:23.328533 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:41:23.328545 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:41:23.328557 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:41:23.328574 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:41:23.328586 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:41:23.328623 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:41:23.328637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:41:23.328658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:41:23.328670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:41:23.328683 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:41:23.328695 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:41:23.328707 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 15:41:23.328724 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:41:23.328737 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:41:23.328750 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:41:23.328762 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:41:23.328780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:41:23.328792 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:41:23.328805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:41:23.328818 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:41:23.328838 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:41:23.328851 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:41:23.328863 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:41:23.328876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:41:23.328891 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:41:23.328903 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:41:23.328915 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:41:23.328927 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:41:23.328939 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:41:23.328951 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:41:23.328963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:41:23.328975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:41:23.328988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:41:23.329003 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:41:23.329016 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:41:23.329029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:41:23.329041 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:41:23.329056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:41:23.329068 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:41:23.329081 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:41:23.329093 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:41:23.329106 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:41:23.329121 systemd[1]: Reached target machines.target - Containers. Feb 13 15:41:23.329133 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:41:23.329146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 15:41:23.329158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:41:23.329170 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:41:23.329182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:41:23.329194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:41:23.329206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:41:23.329222 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:41:23.329235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:41:23.329247 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:41:23.329260 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:41:23.329273 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:41:23.329285 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:41:23.329298 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:41:23.329311 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:41:23.329327 kernel: fuse: init (API version 7.39) Feb 13 15:41:23.329339 kernel: loop: module loaded Feb 13 15:41:23.329351 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:41:23.329369 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:41:23.329382 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:41:23.329394 kernel: ACPI: bus type drm_connector registered Feb 13 15:41:23.329406 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:41:23.329435 systemd-journald[1145]: Collecting audit messages is disabled. Feb 13 15:41:23.329461 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:41:23.329474 systemd-journald[1145]: Journal started Feb 13 15:41:23.329497 systemd-journald[1145]: Runtime Journal (/run/log/journal/6cb76ae90b2b46f29b7316f36e9177b1) is 6M, max 48.2M, 42.2M free. Feb 13 15:41:23.101772 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:41:23.112700 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:41:23.113174 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:41:23.331658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:41:23.333625 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:41:23.333698 systemd[1]: Stopped verity-setup.service. Feb 13 15:41:23.336624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:41:23.340892 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:41:23.342128 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:41:23.343290 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Feb 13 15:41:23.344530 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:41:23.345632 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:41:23.346817 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:41:23.348046 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:41:23.349314 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:41:23.350839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:41:23.352456 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:41:23.352712 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:41:23.354252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:41:23.354471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:41:23.355913 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:41:23.356159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:41:23.357571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:41:23.357798 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:41:23.359356 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:41:23.359565 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:41:23.360951 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:41:23.361159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:41:23.362560 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:41:23.363997 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:41:23.365553 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:41:23.367113 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:41:23.382204 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:41:23.393744 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:41:23.396617 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:41:23.397981 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:41:23.398016 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:41:23.400189 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:41:23.414748 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:41:23.417452 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:41:23.418806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:41:23.421901 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:41:23.433892 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:41:23.435851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 15:41:23.437525 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:41:23.439658 systemd-journald[1145]: Time spent on flushing to /var/log/journal/6cb76ae90b2b46f29b7316f36e9177b1 is 19.435ms for 1060 entries. Feb 13 15:41:23.439658 systemd-journald[1145]: System Journal (/var/log/journal/6cb76ae90b2b46f29b7316f36e9177b1) is 8M, max 195.6M, 187.6M free. Feb 13 15:41:23.475454 systemd-journald[1145]: Received client request to flush runtime journal. Feb 13 15:41:23.475520 kernel: loop0: detected capacity change from 0 to 138176 Feb 13 15:41:23.442507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:41:23.443707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:41:23.449139 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:41:23.452876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:41:23.459401 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:41:23.461302 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:41:23.462972 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:41:23.466777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:41:23.471563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:41:23.481407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:41:23.483969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:41:23.490044 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Feb 13 15:41:23.490064 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Feb 13 15:41:23.492116 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:41:23.504844 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:41:23.505634 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:41:23.511305 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:41:23.513199 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:41:23.519019 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:41:23.523536 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:41:23.528318 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:41:23.533761 kernel: loop1: detected capacity change from 0 to 210664 Feb 13 15:41:23.556588 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:41:23.567990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:41:23.573618 kernel: loop2: detected capacity change from 0 to 147912 Feb 13 15:41:23.589222 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Feb 13 15:41:23.589243 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Feb 13 15:41:23.596929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:41:23.616688 kernel: loop3: detected capacity change from 0 to 138176 Feb 13 15:41:23.630880 kernel: loop4: detected capacity change from 0 to 210664 Feb 13 15:41:23.641793 kernel: loop5: detected capacity change from 0 to 147912 Feb 13 15:41:23.652481 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:41:23.653190 (sd-merge)[1215]: Merged extensions into '/usr'. Feb 13 15:41:23.657557 systemd[1]: Reload requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:41:23.657578 systemd[1]: Reloading... Feb 13 15:41:23.729623 zram_generator::config[1242]: No configuration found. Feb 13 15:41:23.766731 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:41:23.854015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:41:23.919475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:41:23.919629 systemd[1]: Reloading finished in 261 ms. Feb 13 15:41:23.943359 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:41:23.944898 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:41:23.959003 systemd[1]: Starting ensure-sysext.service... Feb 13 15:41:23.960862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:41:24.019425 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:41:24.019725 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:41:24.020675 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:41:24.020983 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Feb 13 15:41:24.021066 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Feb 13 15:41:24.025277 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:41:24.025290 systemd-tmpfiles[1281]: Skipping /boot Feb 13 15:41:24.026029 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:41:24.026048 systemd[1]: Reloading... Feb 13 15:41:24.038411 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:41:24.038424 systemd-tmpfiles[1281]: Skipping /boot Feb 13 15:41:24.073616 zram_generator::config[1310]: No configuration found. Feb 13 15:41:24.193420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:41:24.258468 systemd[1]: Reloading finished in 232 ms. Feb 13 15:41:24.271516 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:41:24.290584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:41:24.300245 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:41:24.302647 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Feb 13 15:41:24.305032 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:41:24.309868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:41:24.314439 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:41:24.318900 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:41:24.323059 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:41:24.323238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:41:24.324579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:41:24.330847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:41:24.333210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:41:24.334843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:41:24.334957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:41:24.345814 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:41:24.346938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:41:24.348371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:41:24.349096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:41:24.351569 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:41:24.353608 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:41:24.353886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:41:24.355972 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:41:24.356291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:41:24.361401 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Feb 13 15:41:24.364047 augenrules[1380]: No rules Feb 13 15:41:24.365084 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:41:24.365355 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:41:24.374068 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:41:24.380911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:41:24.388767 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:41:24.389979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:41:24.393737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:41:24.398808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:41:24.401913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:41:24.408923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:41:24.410244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:41:24.410295 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:41:24.414290 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:41:24.416421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:41:24.422712 augenrules[1389]: /sbin/augenrules: No change Feb 13 15:41:24.427244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:41:24.429828 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:41:24.431472 systemd[1]: Finished ensure-sysext.service. Feb 13 15:41:24.433229 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:41:24.434781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:41:24.435013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:41:24.436579 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:41:24.436843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:41:24.438250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:41:24.438747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:41:24.439833 augenrules[1435]: No rules Feb 13 15:41:24.440327 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:41:24.440536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:41:24.442039 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:41:24.442569 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:41:24.444939 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:41:24.468760 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:41:24.469892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:41:24.469960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:41:24.472885 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:41:24.474608 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:41:24.480443 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:41:24.502634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1427) Feb 13 15:41:24.503109 systemd-resolved[1352]: Positive Trust Anchors: Feb 13 15:41:24.503422 systemd-resolved[1352]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:41:24.503523 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:41:24.509034 systemd-resolved[1352]: Defaulting to hostname 'linux'. Feb 13 15:41:24.515102 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:41:24.516382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:41:24.539621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:41:24.546619 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:41:24.559514 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:41:24.571168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:41:24.572706 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:41:24.574645 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:41:24.577863 systemd-networkd[1448]: lo: Link UP Feb 13 15:41:24.577875 systemd-networkd[1448]: lo: Gained carrier Feb 13 15:41:24.584625 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:41:24.583176 systemd-networkd[1448]: Enumeration completed Feb 13 15:41:24.583547 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:41:24.583551 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:41:24.584283 systemd-networkd[1448]: eth0: Link UP Feb 13 15:41:24.584287 systemd-networkd[1448]: eth0: Gained carrier Feb 13 15:41:24.584300 systemd-networkd[1448]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:41:24.586242 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:41:24.586410 systemd[1]: Reached target network.target - Network. Feb 13 15:41:24.594075 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 15:41:24.596163 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:41:24.597524 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:41:24.597952 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:41:24.594949 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:41:24.596138 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:41:24.601238 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
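
The trust-anchor dump above is systemd-resolved initializing DNSSEC: the positive anchor is the DS record of the root zone's key-signing key, and the negative anchors are private and special-use zones where validation is never attempted. A small sketch of testing a name against that negative list by suffix matching (a simplification of resolved's actual matching; only a subset of the anchors is reproduced):

    # Subset of the negative trust anchors listed in the log above.
    NEGATIVE_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "d.f.ip6.arpa", "ipv4only.arpa", "resolver.arpa",
        "corp", "home", "internal", "intranet", "lan", "local",
        "private", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").lower().split(".")
        # check every suffix of the name against the anchor set
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
                   for i in range(len(labels)))

    print(under_negative_anchor("printer.lan"))   # True
    print(under_negative_anchor("example.org"))   # False
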
Feb 13 15:41:24.601684 systemd-networkd[1448]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:41:24.602438 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. Feb 13 15:41:26.044086 systemd-timesyncd[1450]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:41:26.044124 systemd-timesyncd[1450]: Initial clock synchronization to Thu 2025-02-13 15:41:26.044013 UTC. Feb 13 15:41:26.044214 systemd-resolved[1352]: Clock change detected. Flushing caches. Feb 13 15:41:26.060039 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:41:26.120752 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:41:26.121052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:41:26.125955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:41:26.126214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:41:26.133075 kernel: kvm_amd: TSC scaling supported Feb 13 15:41:26.133111 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:41:26.133129 kernel: kvm_amd: Nested Paging enabled Feb 13 15:41:26.133146 kernel: kvm_amd: LBR virtualization supported Feb 13 15:41:26.134143 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:41:26.134173 kernel: kvm_amd: Virtual GIF supported Feb 13 15:41:26.148353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:41:26.156724 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:41:26.191371 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:41:26.203864 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:41:26.205950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:41:26.211118 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:41:26.242747 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:41:26.244290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:41:26.245469 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:41:26.246684 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:41:26.248018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:41:26.249489 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:41:26.250736 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:41:26.252063 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:41:26.253356 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:41:26.253381 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:41:26.254344 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:41:26.256156 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:41:26.258982 systemd[1]: Starting docker.socket - Docker Socket for the API... 
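
Note the apparent jump from 15:41:24.60 to 15:41:26.04 in the entries above: systemd-timesyncd stepped the system clock at its first successful sync against 10.0.0.1, and systemd-resolved reacted by flushing its caches. The size of the step can be read straight off the surrounding timestamps:

    from datetime import datetime

    # Last pre-sync journal timestamp vs. the time timesyncd stepped to
    # (both values copied from the entries above).
    before = datetime.fromisoformat("2025-02-13 15:41:24.602438")
    synced = datetime.fromisoformat("2025-02-13 15:41:26.044013")
    print(f"clock stepped forward by {(synced - before).total_seconds():.3f}s")
    # -> about 1.442 seconds
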
Feb 13 15:41:26.262447 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:41:26.263916 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:41:26.265271 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:41:26.269554 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:41:26.271350 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:41:26.274011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:41:26.275715 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:41:26.276994 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:41:26.278006 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:41:26.279039 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:41:26.279066 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:41:26.280348 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:41:26.281676 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:41:26.282521 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:41:26.285822 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:41:26.288658 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:41:26.289785 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:41:26.291084 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:41:26.294868 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:41:26.298837 jq[1487]: false Feb 13 15:41:26.304932 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:41:26.310379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:41:26.312131 extend-filesystems[1488]: Found loop3 Feb 13 15:41:26.312131 extend-filesystems[1488]: Found loop4 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found loop5 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found sr0 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda1 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda2 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda3 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found usr Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda4 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda6 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda7 Feb 13 15:41:26.314055 extend-filesystems[1488]: Found vda9 Feb 13 15:41:26.314055 extend-filesystems[1488]: Checking size of /dev/vda9 Feb 13 15:41:26.314387 dbus-daemon[1486]: [system] SELinux support is enabled Feb 13 15:41:26.316135 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:41:26.319388 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
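
The run of "Found loopN/vdaN" lines above is extend-filesystems.service taking an inventory of block devices before deciding whether the root partition (/dev/vda9 here) needs to grow. A comparable inventory from the kernel's point of view, for illustration (extend-filesystems itself is a shell script, not Python):

    # List the block devices the kernel currently knows about.
    with open("/proc/partitions") as f:
        rows = f.read().splitlines()[2:]   # skip the header and blank line
    for row in rows:
        major, minor, blocks, name = row.split()
        print(f"Found {name} ({int(blocks) // 1024} MiB)")
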
Feb 13 15:41:26.322236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:41:26.324880 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:41:26.327796 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:41:26.331349 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:41:26.335807 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:41:26.339309 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:41:26.339878 extend-filesystems[1488]: Resized partition /dev/vda9 Feb 13 15:41:26.340096 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:41:26.340433 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:41:26.340662 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:41:26.348363 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:41:26.370973 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1403) Feb 13 15:41:26.371002 update_engine[1502]: I20250213 15:41:26.358039 1502 main.cc:92] Flatcar Update Engine starting Feb 13 15:41:26.371002 update_engine[1502]: I20250213 15:41:26.359208 1502 update_check_scheduler.cc:74] Next update check in 8m27s Feb 13 15:41:26.377545 jq[1503]: true Feb 13 15:41:26.350482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:41:26.350900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:41:26.381855 jq[1513]: true Feb 13 15:41:26.384191 (ntainerd)[1517]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:41:26.395577 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:41:26.397081 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:41:26.397122 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:41:26.398467 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:41:26.398489 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:41:26.413865 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:41:26.427415 systemd-logind[1497]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:41:26.428193 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:41:26.429170 systemd-logind[1497]: New seat seat0. Feb 13 15:41:26.435480 tar[1511]: linux-amd64/helm Feb 13 15:41:26.436230 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:41:26.436941 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:41:26.464124 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
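
update_engine's "Next update check in 8m27s" line reflects randomized polling: Flatcar's update client jitters its check times so a fleet of machines does not query the update service in lockstep. A hedged sketch of that idea (the constants here are invented; the real policy lives in update_check_scheduler.cc):

    import random

    BASE_INTERVAL_S = 45 * 60   # hypothetical base poll interval
    JITTER_S = 10 * 60          # hypothetical +/- jitter window

    def next_check_delay() -> int:
        return BASE_INTERVAL_S + random.randint(-JITTER_S, JITTER_S)

    d = next_check_delay()
    print(f"Next update check in {d // 60}m{d % 60}s")
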
Feb 13 15:41:26.493389 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:41:26.502307 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:41:26.501872 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:41:26.508974 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:41:26.509232 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:41:26.513365 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:41:26.562725 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:41:26.573040 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:41:26.575509 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:41:26.576795 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:41:26.622730 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:41:26.704275 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:41:26.704275 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:41:26.704275 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:41:26.709163 extend-filesystems[1488]: Resized filesystem in /dev/vda9 Feb 13 15:41:26.705432 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:41:26.705720 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:41:26.757125 bash[1539]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:41:26.759079 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:41:26.761371 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:41:26.766491 containerd[1517]: time="2025-02-13T15:41:26.766407738Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:41:26.791462 containerd[1517]: time="2025-02-13T15:41:26.791422822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793149 containerd[1517]: time="2025-02-13T15:41:26.793104555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793149 containerd[1517]: time="2025-02-13T15:41:26.793133700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:41:26.793149 containerd[1517]: time="2025-02-13T15:41:26.793149058Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:41:26.793361 containerd[1517]: time="2025-02-13T15:41:26.793331420Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:41:26.793361 containerd[1517]: time="2025-02-13T15:41:26.793357028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793440 containerd[1517]: time="2025-02-13T15:41:26.793421509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793440 containerd[1517]: time="2025-02-13T15:41:26.793437449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793675 containerd[1517]: time="2025-02-13T15:41:26.793648525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793675 containerd[1517]: time="2025-02-13T15:41:26.793668082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793739 containerd[1517]: time="2025-02-13T15:41:26.793682258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793739 containerd[1517]: time="2025-02-13T15:41:26.793692968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.793819 containerd[1517]: time="2025-02-13T15:41:26.793799588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.794072 containerd[1517]: time="2025-02-13T15:41:26.794045720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:41:26.794247 containerd[1517]: time="2025-02-13T15:41:26.794222521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:41:26.794247 containerd[1517]: time="2025-02-13T15:41:26.794240465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:41:26.794360 containerd[1517]: time="2025-02-13T15:41:26.794336755Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:41:26.794426 containerd[1517]: time="2025-02-13T15:41:26.794403210Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:41:26.891008 containerd[1517]: time="2025-02-13T15:41:26.890955345Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:41:26.891008 containerd[1517]: time="2025-02-13T15:41:26.891020427Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:41:26.891157 containerd[1517]: time="2025-02-13T15:41:26.891040705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:41:26.891157 containerd[1517]: time="2025-02-13T15:41:26.891060903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:41:26.891157 containerd[1517]: time="2025-02-13T15:41:26.891076192Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:41:26.891297 containerd[1517]: time="2025-02-13T15:41:26.891274954Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 15:41:26.891541 containerd[1517]: time="2025-02-13T15:41:26.891517870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:41:26.891665 containerd[1517]: time="2025-02-13T15:41:26.891641933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:41:26.891690 containerd[1517]: time="2025-02-13T15:41:26.891664685Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:41:26.891690 containerd[1517]: time="2025-02-13T15:41:26.891684392Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:41:26.891753 containerd[1517]: time="2025-02-13T15:41:26.891727393Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891753 containerd[1517]: time="2025-02-13T15:41:26.891745597Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891791 containerd[1517]: time="2025-02-13T15:41:26.891760194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891791 containerd[1517]: time="2025-02-13T15:41:26.891778739Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891846 containerd[1517]: time="2025-02-13T15:41:26.891795220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891846 containerd[1517]: time="2025-02-13T15:41:26.891812603Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891846 containerd[1517]: time="2025-02-13T15:41:26.891828382Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891846 containerd[1517]: time="2025-02-13T15:41:26.891839904Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:41:26.891925 containerd[1517]: time="2025-02-13T15:41:26.891860112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.891925 containerd[1517]: time="2025-02-13T15:41:26.891886000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.891925 containerd[1517]: time="2025-02-13T15:41:26.891900187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.891925 containerd[1517]: time="2025-02-13T15:41:26.891914494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.891930754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.891946594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.891958706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.891971781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.891985266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.891999753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892012 containerd[1517]: time="2025-02-13T15:41:26.892012377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892145 containerd[1517]: time="2025-02-13T15:41:26.892024570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892145 containerd[1517]: time="2025-02-13T15:41:26.892037705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892145 containerd[1517]: time="2025-02-13T15:41:26.892052623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:41:26.892145 containerd[1517]: time="2025-02-13T15:41:26.892079122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892145 containerd[1517]: time="2025-02-13T15:41:26.892092928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892145 containerd[1517]: time="2025-02-13T15:41:26.892104640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892150045Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892169281Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892179470Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892191162Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892201251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892214316Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892225587Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:41:26.892256 containerd[1517]: time="2025-02-13T15:41:26.892238561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:41:26.892571 containerd[1517]: time="2025-02-13T15:41:26.892524267Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:41:26.892571 containerd[1517]: time="2025-02-13T15:41:26.892569402Z" level=info msg="Connect containerd service" Feb 13 15:41:26.892752 containerd[1517]: time="2025-02-13T15:41:26.892604507Z" level=info msg="using legacy CRI server" Feb 13 15:41:26.892752 containerd[1517]: time="2025-02-13T15:41:26.892611971Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:41:26.892752 containerd[1517]: time="2025-02-13T15:41:26.892723751Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:41:26.893359 containerd[1517]: time="2025-02-13T15:41:26.893321021Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:41:26.893643 
containerd[1517]: time="2025-02-13T15:41:26.893609872Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:41:26.893774 containerd[1517]: time="2025-02-13T15:41:26.893660638Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:41:26.893774 containerd[1517]: time="2025-02-13T15:41:26.893689391Z" level=info msg="Start subscribing containerd event" Feb 13 15:41:26.893908 containerd[1517]: time="2025-02-13T15:41:26.893779541Z" level=info msg="Start recovering state" Feb 13 15:41:26.893908 containerd[1517]: time="2025-02-13T15:41:26.893862536Z" level=info msg="Start event monitor" Feb 13 15:41:26.893908 containerd[1517]: time="2025-02-13T15:41:26.893900537Z" level=info msg="Start snapshots syncer" Feb 13 15:41:26.893967 containerd[1517]: time="2025-02-13T15:41:26.893911758Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:41:26.893967 containerd[1517]: time="2025-02-13T15:41:26.893920976Z" level=info msg="Start streaming server" Feb 13 15:41:26.894095 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:41:26.894523 containerd[1517]: time="2025-02-13T15:41:26.894500432Z" level=info msg="containerd successfully booted in 0.129216s" Feb 13 15:41:26.911961 tar[1511]: linux-amd64/LICENSE Feb 13 15:41:26.912036 tar[1511]: linux-amd64/README.md Feb 13 15:41:26.926879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:41:27.791924 systemd-networkd[1448]: eth0: Gained IPv6LL Feb 13 15:41:27.795332 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:41:27.797226 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:41:27.808919 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:41:27.820595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:27.822837 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:41:27.843286 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:41:27.843584 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:41:27.845299 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:41:27.847926 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:41:28.443229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:28.445031 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:41:28.446382 systemd[1]: Startup finished in 729ms (kernel) + 7.806s (initrd) + 4.529s (userspace) = 13.065s. Feb 13 15:41:28.450336 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:41:28.909459 kubelet[1600]: E0213 15:41:28.909337 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:41:28.913588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:41:28.913809 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
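
The kubelet failure above is expected on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is normally written during cluster join (for example by kubeadm), so the unit exits and systemd keeps rescheduling it until the file appears, as the later restart entries show. The failing path can be pulled out of the klog-formatted error line mechanically:

    import re

    # Abbreviated copy of the error line from the log above.
    line = ('E0213 15:41:28.909337 1600 run.go:74] "command failed" '
            'err="failed to load kubelet config file, path: '
            '/var/lib/kubelet/config.yaml, error: ..."')

    m = re.search(r"path: (\S+?),", line)
    if m:
        print("kubelet is waiting for:", m.group(1))
    # -> kubelet is waiting for: /var/lib/kubelet/config.yaml
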
Feb 13 15:41:28.914185 systemd[1]: kubelet.service: Consumed 960ms CPU time, 245.5M memory peak. Feb 13 15:41:31.787515 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:41:31.802146 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:57750.service - OpenSSH per-connection server daemon (10.0.0.1:57750). Feb 13 15:41:32.072747 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 57750 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:32.079613 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:32.091957 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:41:32.105092 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:41:32.111893 systemd-logind[1497]: New session 1 of user core. Feb 13 15:41:32.122374 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:41:32.139199 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:41:32.145437 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:41:32.149219 systemd-logind[1497]: New session c1 of user core. Feb 13 15:41:32.378250 systemd[1618]: Queued start job for default target default.target. Feb 13 15:41:32.389248 systemd[1618]: Created slice app.slice - User Application Slice. Feb 13 15:41:32.389272 systemd[1618]: Reached target paths.target - Paths. Feb 13 15:41:32.389326 systemd[1618]: Reached target timers.target - Timers. Feb 13 15:41:32.391411 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:41:32.411594 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:41:32.411837 systemd[1618]: Reached target sockets.target - Sockets. Feb 13 15:41:32.411919 systemd[1618]: Reached target basic.target - Basic System. Feb 13 15:41:32.411978 systemd[1618]: Reached target default.target - Main User Target. Feb 13 15:41:32.412028 systemd[1618]: Startup finished in 250ms. Feb 13 15:41:32.412942 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:41:32.432098 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:41:32.507643 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:57754.service - OpenSSH per-connection server daemon (10.0.0.1:57754). Feb 13 15:41:32.583549 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 57754 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:32.585611 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:32.591871 systemd-logind[1497]: New session 2 of user core. Feb 13 15:41:32.600690 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:41:32.668751 sshd[1631]: Connection closed by 10.0.0.1 port 57754 Feb 13 15:41:32.669128 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:32.683591 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:57754.service: Deactivated successfully. Feb 13 15:41:32.685639 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:41:32.687365 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:41:32.688981 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:57756.service - OpenSSH per-connection server daemon (10.0.0.1:57756). Feb 13 15:41:32.689868 systemd-logind[1497]: Removed session 2. 
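
Each SSH connection above gets its own socket-activated unit whose name encodes both endpoints, sshd@<n>-<local>:<port>-<peer>:<port>.service, which is why a fresh sshd@... instance appears for every login. The name decomposes directly:

    import re

    unit = "sshd@0-10.0.0.38:22-10.0.0.1:57750.service"   # from the log above

    m = re.match(r"sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service", unit)
    n, laddr, lport, raddr, rport = m.groups()
    print(f"connection #{n}: {raddr}:{rport} -> {laddr}:{lport}")
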
Feb 13 15:41:32.743984 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 57756 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:32.745572 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:32.752320 systemd-logind[1497]: New session 3 of user core. Feb 13 15:41:32.761859 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:41:32.811546 sshd[1639]: Connection closed by 10.0.0.1 port 57756 Feb 13 15:41:32.812023 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:32.827775 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:57756.service: Deactivated successfully. Feb 13 15:41:32.829488 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:41:32.830925 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:41:32.832511 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:57770.service - OpenSSH per-connection server daemon (10.0.0.1:57770). Feb 13 15:41:32.833417 systemd-logind[1497]: Removed session 3. Feb 13 15:41:32.884650 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 57770 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:32.886242 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:32.891033 systemd-logind[1497]: New session 4 of user core. Feb 13 15:41:32.904921 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:41:32.959809 sshd[1647]: Connection closed by 10.0.0.1 port 57770 Feb 13 15:41:32.960266 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:32.975852 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:57770.service: Deactivated successfully. Feb 13 15:41:32.977849 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:41:32.979258 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:41:32.996141 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:57774.service - OpenSSH per-connection server daemon (10.0.0.1:57774). Feb 13 15:41:32.997290 systemd-logind[1497]: Removed session 4. Feb 13 15:41:33.032833 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 57774 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:33.034254 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:33.039378 systemd-logind[1497]: New session 5 of user core. Feb 13 15:41:33.052918 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:41:33.119743 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:41:33.120127 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:41:33.149385 sudo[1656]: pam_unix(sudo:session): session closed for user root Feb 13 15:41:33.151062 sshd[1655]: Connection closed by 10.0.0.1 port 57774 Feb 13 15:41:33.151500 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:33.166306 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:57774.service: Deactivated successfully. Feb 13 15:41:33.168280 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:41:33.169918 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:41:33.189094 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786). 
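
The sudo entries above follow the classic syslog format "user : PWD=... ; USER=... ; COMMAND=...", which splits cleanly on its separators:

    entry = ("core : PWD=/home/core ; USER=root ; "
             "COMMAND=/usr/sbin/setenforce 1")   # from the log above

    invoking, _, rest = entry.partition(" : ")
    fields = dict(part.split("=", 1) for part in rest.split(" ; "))
    print(invoking, fields["USER"], fields["COMMAND"])
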
Feb 13 15:41:33.189975 systemd-logind[1497]: Removed session 5. Feb 13 15:41:33.223504 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:33.225093 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:33.229557 systemd-logind[1497]: New session 6 of user core. Feb 13 15:41:33.238852 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:41:33.292811 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:41:33.293123 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:41:33.296726 sudo[1666]: pam_unix(sudo:session): session closed for user root Feb 13 15:41:33.303184 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:41:33.303553 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:41:33.330184 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:41:33.367304 augenrules[1688]: No rules Feb 13 15:41:33.369153 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:41:33.369449 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:41:33.370794 sudo[1665]: pam_unix(sudo:session): session closed for user root Feb 13 15:41:33.372436 sshd[1664]: Connection closed by 10.0.0.1 port 57786 Feb 13 15:41:33.372845 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:33.385133 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:57786.service: Deactivated successfully. Feb 13 15:41:33.386775 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:41:33.388331 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:41:33.389636 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:57790.service - OpenSSH per-connection server daemon (10.0.0.1:57790). Feb 13 15:41:33.390398 systemd-logind[1497]: Removed session 6. Feb 13 15:41:33.439845 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 57790 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:41:33.441232 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:33.445340 systemd-logind[1497]: New session 7 of user core. Feb 13 15:41:33.462844 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:41:33.516737 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:41:33.517075 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:41:34.466097 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:41:34.466385 (dockerd)[1720]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:41:35.005276 dockerd[1720]: time="2025-02-13T15:41:35.005045131Z" level=info msg="Starting up" Feb 13 15:41:35.456741 systemd[1]: var-lib-docker-metacopy\x2dcheck393600719-merged.mount: Deactivated successfully. Feb 13 15:41:35.483716 dockerd[1720]: time="2025-02-13T15:41:35.483607496Z" level=info msg="Loading containers: start." 
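
The audit-rules sequence above (removing 80-selinux.rules and 99-default.rules, then restarting the service) ends with augenrules reporting "No rules": augenrules concatenates /etc/audit/rules.d/*.rules into a single rule set, and after those two files are deleted nothing is left to load. The equivalent check:

    import glob

    rules = sorted(glob.glob("/etc/audit/rules.d/*.rules"))
    print(rules if rules else "No rules")
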
Feb 13 15:41:35.757741 kernel: Initializing XFRM netlink socket Feb 13 15:41:35.844277 systemd-networkd[1448]: docker0: Link UP Feb 13 15:41:35.896019 dockerd[1720]: time="2025-02-13T15:41:35.895966398Z" level=info msg="Loading containers: done." Feb 13 15:41:35.913039 dockerd[1720]: time="2025-02-13T15:41:35.912974869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:41:35.913216 dockerd[1720]: time="2025-02-13T15:41:35.913107899Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:41:35.913293 dockerd[1720]: time="2025-02-13T15:41:35.913264462Z" level=info msg="Daemon has completed initialization" Feb 13 15:41:35.956079 dockerd[1720]: time="2025-02-13T15:41:35.956003418Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:41:35.956241 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:41:36.790378 containerd[1517]: time="2025-02-13T15:41:36.790317737Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:41:37.380237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158843205.mount: Deactivated successfully. Feb 13 15:41:38.624772 containerd[1517]: time="2025-02-13T15:41:38.624719059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:38.625473 containerd[1517]: time="2025-02-13T15:41:38.625418440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 15:41:38.626766 containerd[1517]: time="2025-02-13T15:41:38.626693391Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:38.629434 containerd[1517]: time="2025-02-13T15:41:38.629408933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:38.630344 containerd[1517]: time="2025-02-13T15:41:38.630307327Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.839947691s" Feb 13 15:41:38.630344 containerd[1517]: time="2025-02-13T15:41:38.630341711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 15:41:38.652785 containerd[1517]: time="2025-02-13T15:41:38.652738776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:41:39.164315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:41:39.175933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:39.335131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
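
dockerd's overlay2 warning above means the running kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, so Docker avoids the native overlayfs diff path and image builds may run slower. On kernels built with IKCONFIG the option can be confirmed from /proc/config.gz (not every kernel exposes it):

    import gzip

    OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"
    try:
        with gzip.open("/proc/config.gz", "rt") as f:
            for line in f:
                if line.startswith(OPTION + "="):
                    print(line.strip())   # e.g. CONFIG_OVERLAY_FS_REDIRECT_DIR=y
                    break
    except FileNotFoundError:
        print("/proc/config.gz not available on this kernel")
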
Feb 13 15:41:39.339318 (kubelet)[1993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:41:39.426735 kubelet[1993]: E0213 15:41:39.425681 1993 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:41:39.433234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:41:39.433429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:41:39.433814 systemd[1]: kubelet.service: Consumed 294ms CPU time, 100.3M memory peak. Feb 13 15:41:40.820999 containerd[1517]: time="2025-02-13T15:41:40.820929733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:40.821803 containerd[1517]: time="2025-02-13T15:41:40.821752246Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 15:41:40.823029 containerd[1517]: time="2025-02-13T15:41:40.822994585Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:40.825972 containerd[1517]: time="2025-02-13T15:41:40.825928646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:40.826855 containerd[1517]: time="2025-02-13T15:41:40.826816762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.174035135s" Feb 13 15:41:40.826855 containerd[1517]: time="2025-02-13T15:41:40.826848000Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 15:41:40.851579 containerd[1517]: time="2025-02-13T15:41:40.851523306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:41:41.774771 containerd[1517]: time="2025-02-13T15:41:41.774724558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:41.775414 containerd[1517]: time="2025-02-13T15:41:41.775378925Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 15:41:41.776484 containerd[1517]: time="2025-02-13T15:41:41.776414367Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:41.779005 containerd[1517]: time="2025-02-13T15:41:41.778956994Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:41.779940 containerd[1517]: time="2025-02-13T15:41:41.779910352Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 928.345808ms" Feb 13 15:41:41.779984 containerd[1517]: time="2025-02-13T15:41:41.779939236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 15:41:41.804982 containerd[1517]: time="2025-02-13T15:41:41.804939702Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:41:42.994951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941994256.mount: Deactivated successfully. Feb 13 15:41:43.393083 containerd[1517]: time="2025-02-13T15:41:43.392950742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:43.393752 containerd[1517]: time="2025-02-13T15:41:43.393718542Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:41:43.394786 containerd[1517]: time="2025-02-13T15:41:43.394754815Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:43.396942 containerd[1517]: time="2025-02-13T15:41:43.396891611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:43.397847 containerd[1517]: time="2025-02-13T15:41:43.397808801Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.592832931s" Feb 13 15:41:43.397847 containerd[1517]: time="2025-02-13T15:41:43.397841172Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:41:43.427378 containerd[1517]: time="2025-02-13T15:41:43.427302061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:41:43.993819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823328867.mount: Deactivated successfully. 
Feb 13 15:41:47.246324 containerd[1517]: time="2025-02-13T15:41:47.246240570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:47.256735 containerd[1517]: time="2025-02-13T15:41:47.256630581Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:41:47.259136 containerd[1517]: time="2025-02-13T15:41:47.259079553Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:47.262644 containerd[1517]: time="2025-02-13T15:41:47.262567893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:47.263573 containerd[1517]: time="2025-02-13T15:41:47.263534846Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.836183183s" Feb 13 15:41:47.263625 containerd[1517]: time="2025-02-13T15:41:47.263573118Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:41:47.293792 containerd[1517]: time="2025-02-13T15:41:47.293748367Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:41:49.684029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:41:49.693929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:49.840476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:49.844410 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:41:50.051220 kubelet[2095]: E0213 15:41:50.051002 2095 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:41:50.055419 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:41:50.055626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:41:50.056028 systemd[1]: kubelet.service: Consumed 201ms CPU time, 98.8M memory peak. Feb 13 15:41:51.386052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905743861.mount: Deactivated successfully. 
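
The kubelet failures are being paced by systemd's restart logic: the two "Scheduled restart job" entries (counters 1 and 2) are roughly ten seconds apart, consistent with a RestartSec on the order of 10 s, though the unit file itself is not shown in this log:

    from datetime import datetime

    r1 = datetime.fromisoformat("2025-02-13 15:41:39.164315")    # counter 1
    r2 = datetime.fromisoformat("2025-02-13 15:41:49.684029")    # counter 2
    print(f"restart spacing: {(r2 - r1).total_seconds():.1f}s")  # -> 10.5s
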
Feb 13 15:41:51.558824 containerd[1517]: time="2025-02-13T15:41:51.558742480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:51.560231 containerd[1517]: time="2025-02-13T15:41:51.560147794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 15:41:51.565776 containerd[1517]: time="2025-02-13T15:41:51.565730352Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:51.583255 containerd[1517]: time="2025-02-13T15:41:51.583153069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:51.584286 containerd[1517]: time="2025-02-13T15:41:51.584230620Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 4.290431849s" Feb 13 15:41:51.584286 containerd[1517]: time="2025-02-13T15:41:51.584282237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 15:41:51.607959 containerd[1517]: time="2025-02-13T15:41:51.607912183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:41:53.159911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054635567.mount: Deactivated successfully. Feb 13 15:41:55.464908 containerd[1517]: time="2025-02-13T15:41:55.464822949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:55.495602 containerd[1517]: time="2025-02-13T15:41:55.495506882Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 15:41:55.526231 containerd[1517]: time="2025-02-13T15:41:55.526145329Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:55.557353 containerd[1517]: time="2025-02-13T15:41:55.557317607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:55.558758 containerd[1517]: time="2025-02-13T15:41:55.558695500Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.950738484s" Feb 13 15:41:55.558758 containerd[1517]: time="2025-02-13T15:41:55.558750694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 15:41:58.246051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
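Each "Pulled image" record pairs a byte size with a wall-clock duration, so a rough registry throughput falls out directly. A sketch using the etcd pull's numbers copied from the log:

```go
// Hedged sketch: both constants are verbatim from the etcd:3.5.12-0
// "Pulled image" record above; the arithmetic is the only addition.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const size = 57236178 // bytes, from the log
	d, err := time.ParseDuration("3.950738484s")
	if err != nil {
		log.Fatal(err)
	}
	mibps := float64(size) / d.Seconds() / (1 << 20)
	fmt.Printf("etcd pull: %.1f MiB/s\n", mibps) // ~13.8 MiB/s
}
```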
Feb 13 15:41:58.246209 systemd[1]: kubelet.service: Consumed 201ms CPU time, 98.8M memory peak. Feb 13 15:41:58.260892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:58.279309 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-7.scope)... Feb 13 15:41:58.279326 systemd[1]: Reloading... Feb 13 15:41:58.350739 zram_generator::config[2287]: No configuration found. Feb 13 15:41:58.685024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:41:58.788340 systemd[1]: Reloading finished in 508 ms. Feb 13 15:41:58.846990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:58.850431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:58.851457 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:41:58.851765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:58.851802 systemd[1]: kubelet.service: Consumed 141ms CPU time, 83.6M memory peak. Feb 13 15:41:58.853451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:59.000719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:59.004572 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:41:59.038567 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:41:59.038567 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:41:59.038567 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
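The three deprecation warnings above are the kubelet asking for these flags to move into its config file. A hedged sketch of the corresponding fields: containerRuntimeEndpoint and volumePluginDir exist in kubelet.config.k8s.io/v1beta1 (the kubelet here is v1.30.1), while --pod-infra-container-image has no config-file equivalent and moves to the runtime, as the warning itself says. The endpoint value is an assumption; the plugin dir matches the Flexvolume path logged further down:

```go
// Hedged sketch: emit a minimal KubeletConfiguration illustrating the
// shape of the replacement config (the on-disk file is YAML, which
// accepts this JSON as a subset). Values are assumptions, not from kubeadm.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type kubeletConfig struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	VolumePluginDir          string `json:"volumePluginDir"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```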
Feb 13 15:41:59.039487 kubelet[2337]: I0213 15:41:59.039452 2337 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:41:59.336885 kubelet[2337]: I0213 15:41:59.336780 2337 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:41:59.336885 kubelet[2337]: I0213 15:41:59.336815 2337 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:41:59.337125 kubelet[2337]: I0213 15:41:59.337098 2337 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:41:59.350336 kubelet[2337]: I0213 15:41:59.350297 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:41:59.350784 kubelet[2337]: E0213 15:41:59.350760 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.362176 kubelet[2337]: I0213 15:41:59.362149 2337 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:41:59.363207 kubelet[2337]: I0213 15:41:59.363165 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:41:59.363376 kubelet[2337]: I0213 15:41:59.363195 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:41:59.363467 kubelet[2337]: I0213 15:41:59.363383 2337 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:41:59.363467 kubelet[2337]: I0213 15:41:59.363394 2337 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:41:59.363531 kubelet[2337]: I0213 15:41:59.363513 2337 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
15:41:59.364117 kubelet[2337]: I0213 15:41:59.364093 2337 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:41:59.364148 kubelet[2337]: I0213 15:41:59.364131 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:41:59.364172 kubelet[2337]: I0213 15:41:59.364153 2337 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:41:59.364195 kubelet[2337]: I0213 15:41:59.364171 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:41:59.366421 kubelet[2337]: W0213 15:41:59.366365 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.366476 kubelet[2337]: E0213 15:41:59.366430 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.368198 kubelet[2337]: W0213 15:41:59.368162 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.368198 kubelet[2337]: E0213 15:41:59.368198 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.369291 kubelet[2337]: I0213 15:41:59.369274 2337 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:41:59.370413 kubelet[2337]: I0213 15:41:59.370394 2337 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:41:59.370476 kubelet[2337]: W0213 15:41:59.370453 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
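The nodeConfig blob the kubelet prints at startup is plain JSON; its HardEvictionThresholds list encodes the default eviction triggers (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, inode thresholds at 5%). A sketch decoding a trimmed copy of it:

```go
// Hedged sketch: `raw` below is trimmed verbatim from the nodeConfig line
// above; the struct decodes just the fields needed to print the triggers.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"` // nil when only a percentage is set
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]`

	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		log.Fatal(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```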
Feb 13 15:41:59.371092 kubelet[2337]: I0213 15:41:59.371070 2337 server.go:1264] "Started kubelet" Feb 13 15:41:59.372734 kubelet[2337]: I0213 15:41:59.372184 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:41:59.372734 kubelet[2337]: I0213 15:41:59.372305 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:41:59.372734 kubelet[2337]: I0213 15:41:59.372490 2337 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:41:59.372734 kubelet[2337]: I0213 15:41:59.372523 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:41:59.373833 kubelet[2337]: I0213 15:41:59.373375 2337 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:41:59.374508 kubelet[2337]: I0213 15:41:59.374102 2337 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:41:59.374508 kubelet[2337]: I0213 15:41:59.374201 2337 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:41:59.374508 kubelet[2337]: I0213 15:41:59.374251 2337 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:41:59.374628 kubelet[2337]: W0213 15:41:59.374580 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.374659 kubelet[2337]: E0213 15:41:59.374639 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:41:59.375312 kubelet[2337]: E0213 15:41:59.375279 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Feb 13 15:41:59.375831 kubelet[2337]: I0213 15:41:59.375806 2337 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:41:59.375879 kubelet[2337]: I0213 15:41:59.375871 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:41:59.376806 kubelet[2337]: E0213 15:41:59.376788 2337 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:41:59.376907 kubelet[2337]: E0213 15:41:59.376788 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ceda99a4d1a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:41:59.371051433 +0000 UTC m=+0.362970308,LastTimestamp:2025-02-13 15:41:59.371051433 +0000 UTC m=+0.362970308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:41:59.377726 kubelet[2337]: I0213 15:41:59.377205 2337 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:41:59.390896 kubelet[2337]: I0213 15:41:59.390863 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:41:59.390896 kubelet[2337]: I0213 15:41:59.390889 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:41:59.390896 kubelet[2337]: I0213 15:41:59.390905 2337 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:41:59.475895 kubelet[2337]: I0213 15:41:59.475863 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:41:59.476312 kubelet[2337]: E0213 15:41:59.476264 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 13 15:41:59.575771 kubelet[2337]: E0213 15:41:59.575676 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Feb 13 15:41:59.678138 kubelet[2337]: I0213 15:41:59.678103 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:41:59.678395 kubelet[2337]: E0213 15:41:59.678370 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 13 15:41:59.976654 kubelet[2337]: E0213 15:41:59.976512 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Feb 13 15:42:00.038428 kubelet[2337]: I0213 15:42:00.038377 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:42:00.039813 kubelet[2337]: I0213 15:42:00.039785 2337 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:42:00.040313 kubelet[2337]: I0213 15:42:00.039818 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:42:00.040313 kubelet[2337]: I0213 15:42:00.039841 2337 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:42:00.040313 kubelet[2337]: E0213 15:42:00.039884 2337 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:42:00.040386 kubelet[2337]: W0213 15:42:00.040357 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.040411 kubelet[2337]: E0213 15:42:00.040393 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.057200 kubelet[2337]: I0213 15:42:00.057156 2337 policy_none.go:49] "None policy: Start" Feb 13 15:42:00.057855 kubelet[2337]: I0213 15:42:00.057836 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:42:00.057917 kubelet[2337]: I0213 15:42:00.057862 2337 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:42:00.075054 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:42:00.079552 kubelet[2337]: I0213 15:42:00.079536 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:42:00.079950 kubelet[2337]: E0213 15:42:00.079930 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 13 15:42:00.090074 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:42:00.093403 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:42:00.106739 kubelet[2337]: I0213 15:42:00.106715 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:42:00.107393 kubelet[2337]: I0213 15:42:00.107060 2337 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:42:00.107393 kubelet[2337]: I0213 15:42:00.107231 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:42:00.108564 kubelet[2337]: E0213 15:42:00.108524 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:42:00.140508 kubelet[2337]: I0213 15:42:00.140455 2337 topology_manager.go:215] "Topology Admit Handler" podUID="053c366d924797b6a04e247e65ab9e6d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:42:00.141577 kubelet[2337]: I0213 15:42:00.141554 2337 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:42:00.142567 kubelet[2337]: I0213 15:42:00.142539 2337 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:42:00.147635 systemd[1]: Created slice kubepods-burstable-pod053c366d924797b6a04e247e65ab9e6d.slice - libcontainer container kubepods-burstable-pod053c366d924797b6a04e247e65ab9e6d.slice. Feb 13 15:42:00.161944 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:42:00.165947 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
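The three "Topology Admit Handler" entries are the control-plane static pods picked up from the manifest path the kubelet logged earlier ("/etc/kubernetes/manifests"); the kubepods-burstable-pod<UID>.slice units systemd creates correspond to them one-to-one. A sketch listing what the kubelet watches; the file names in the comment are the usual kubeadm ones, not confirmed by this log:

```go
// Hedged sketch: enumerate the static pod manifest directory from the
// "Adding static pod path" record above.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// On a kubeadm control plane this typically prints
		// kube-apiserver.yaml, kube-controller-manager.yaml and
		// kube-scheduler.yaml (plus etcd.yaml when etcd is local).
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}
```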
Feb 13 15:42:00.179528 kubelet[2337]: I0213 15:42:00.179502 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:00.179580 kubelet[2337]: I0213 15:42:00.179535 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:42:00.179580 kubelet[2337]: I0213 15:42:00.179555 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053c366d924797b6a04e247e65ab9e6d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c366d924797b6a04e247e65ab9e6d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:00.179580 kubelet[2337]: I0213 15:42:00.179573 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053c366d924797b6a04e247e65ab9e6d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c366d924797b6a04e247e65ab9e6d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:00.179656 kubelet[2337]: I0213 15:42:00.179593 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:00.179656 kubelet[2337]: I0213 15:42:00.179614 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:00.179656 kubelet[2337]: I0213 15:42:00.179636 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:00.179656 kubelet[2337]: I0213 15:42:00.179655 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053c366d924797b6a04e247e65ab9e6d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053c366d924797b6a04e247e65ab9e6d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:00.179758 kubelet[2337]: I0213 15:42:00.179673 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:00.459594 kubelet[2337]: E0213 15:42:00.459547 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:00.460343 containerd[1517]: time="2025-02-13T15:42:00.460297189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053c366d924797b6a04e247e65ab9e6d,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:00.464636 kubelet[2337]: E0213 15:42:00.464613 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:00.465057 containerd[1517]: time="2025-02-13T15:42:00.465026928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:00.468329 kubelet[2337]: E0213 15:42:00.468301 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:00.468713 containerd[1517]: time="2025-02-13T15:42:00.468661427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:00.512556 kubelet[2337]: W0213 15:42:00.512495 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.512556 kubelet[2337]: E0213 15:42:00.512555 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.750236 kubelet[2337]: W0213 15:42:00.750069 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.750236 kubelet[2337]: E0213 15:42:00.750155 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.777765 kubelet[2337]: E0213 15:42:00.777691 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Feb 13 15:42:00.881632 kubelet[2337]: I0213 15:42:00.881593 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:42:00.882056 kubelet[2337]: E0213 15:42:00.882022 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Feb 13 15:42:00.938594 kubelet[2337]: W0213 15:42:00.938543 2337 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:00.938594 kubelet[2337]: E0213 15:42:00.938594 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.38:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:01.260869 kubelet[2337]: W0213 15:42:01.260813 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:01.260869 kubelet[2337]: E0213 15:42:01.260874 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:01.395620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204583114.mount: Deactivated successfully. Feb 13 15:42:01.400970 containerd[1517]: time="2025-02-13T15:42:01.400921221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:01.404601 containerd[1517]: time="2025-02-13T15:42:01.404556609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:42:01.405564 containerd[1517]: time="2025-02-13T15:42:01.405494074Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:01.407626 containerd[1517]: time="2025-02-13T15:42:01.407598013Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:01.409750 containerd[1517]: time="2025-02-13T15:42:01.409656476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:42:01.410734 containerd[1517]: time="2025-02-13T15:42:01.410693671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:01.411905 containerd[1517]: time="2025-02-13T15:42:01.411876850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:42:01.412861 containerd[1517]: time="2025-02-13T15:42:01.412677922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:42:01.412939 containerd[1517]: time="2025-02-13T15:42:01.412900640Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 952.479362ms" Feb 13 15:42:01.415668 containerd[1517]: time="2025-02-13T15:42:01.415642137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 950.547066ms" Feb 13 15:42:01.418206 containerd[1517]: time="2025-02-13T15:42:01.418162748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 949.398544ms" Feb 13 15:42:01.434532 kubelet[2337]: E0213 15:42:01.434480 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.38:6443: connect: connection refused Feb 13 15:42:01.559361 containerd[1517]: time="2025-02-13T15:42:01.558978158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:01.559361 containerd[1517]: time="2025-02-13T15:42:01.559151421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:01.559361 containerd[1517]: time="2025-02-13T15:42:01.559183503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:01.559846 containerd[1517]: time="2025-02-13T15:42:01.559448423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:01.559846 containerd[1517]: time="2025-02-13T15:42:01.559494952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:01.559846 containerd[1517]: time="2025-02-13T15:42:01.559510422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:01.559846 containerd[1517]: time="2025-02-13T15:42:01.559592650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:01.560393 containerd[1517]: time="2025-02-13T15:42:01.559861298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:01.560770 containerd[1517]: time="2025-02-13T15:42:01.557938447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:01.560770 containerd[1517]: time="2025-02-13T15:42:01.560590731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:01.560770 containerd[1517]: time="2025-02-13T15:42:01.560608075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:01.560770 containerd[1517]: time="2025-02-13T15:42:01.560692327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:01.589896 systemd[1]: Started cri-containerd-30c413b05ac4c2fbd02b457e9aeaf4e4a4cdbf100a01541d2545b38f92f02281.scope - libcontainer container 30c413b05ac4c2fbd02b457e9aeaf4e4a4cdbf100a01541d2545b38f92f02281. Feb 13 15:42:01.592132 systemd[1]: Started cri-containerd-cf21669204940bd52d3962fb3a97c83ad0f1e0bed7c9e8668ea42da6d4a445e1.scope - libcontainer container cf21669204940bd52d3962fb3a97c83ad0f1e0bed7c9e8668ea42da6d4a445e1. Feb 13 15:42:01.593976 systemd[1]: Started cri-containerd-ee4e52f7f771097eae3e808ad2dbc397369cf61d61de63a3559bf723b94faab8.scope - libcontainer container ee4e52f7f771097eae3e808ad2dbc397369cf61d61de63a3559bf723b94faab8. Feb 13 15:42:01.638634 containerd[1517]: time="2025-02-13T15:42:01.638582685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053c366d924797b6a04e247e65ab9e6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"30c413b05ac4c2fbd02b457e9aeaf4e4a4cdbf100a01541d2545b38f92f02281\"" Feb 13 15:42:01.640110 containerd[1517]: time="2025-02-13T15:42:01.639843893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf21669204940bd52d3962fb3a97c83ad0f1e0bed7c9e8668ea42da6d4a445e1\"" Feb 13 15:42:01.640838 kubelet[2337]: E0213 15:42:01.640811 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:01.642010 kubelet[2337]: E0213 15:42:01.641984 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:01.642609 containerd[1517]: time="2025-02-13T15:42:01.642573957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee4e52f7f771097eae3e808ad2dbc397369cf61d61de63a3559bf723b94faab8\"" Feb 13 15:42:01.644253 kubelet[2337]: E0213 15:42:01.644201 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:01.644343 containerd[1517]: time="2025-02-13T15:42:01.644256546Z" level=info msg="CreateContainer within sandbox \"30c413b05ac4c2fbd02b457e9aeaf4e4a4cdbf100a01541d2545b38f92f02281\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:42:01.644693 containerd[1517]: time="2025-02-13T15:42:01.644658379Z" level=info msg="CreateContainer within sandbox \"cf21669204940bd52d3962fb3a97c83ad0f1e0bed7c9e8668ea42da6d4a445e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:42:01.646803 containerd[1517]: time="2025-02-13T15:42:01.646772950Z" level=info msg="CreateContainer within sandbox \"ee4e52f7f771097eae3e808ad2dbc397369cf61d61de63a3559bf723b94faab8\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:42:01.801363 containerd[1517]: time="2025-02-13T15:42:01.801301587Z" level=info msg="CreateContainer within sandbox \"ee4e52f7f771097eae3e808ad2dbc397369cf61d61de63a3559bf723b94faab8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32e9f7df343ffea0996dc4a12d1c8dfca64470c186df167d4ebbfbf7c1636ebd\"" Feb 13 15:42:01.802096 containerd[1517]: time="2025-02-13T15:42:01.802053895Z" level=info msg="StartContainer for \"32e9f7df343ffea0996dc4a12d1c8dfca64470c186df167d4ebbfbf7c1636ebd\"" Feb 13 15:42:01.830839 systemd[1]: Started cri-containerd-32e9f7df343ffea0996dc4a12d1c8dfca64470c186df167d4ebbfbf7c1636ebd.scope - libcontainer container 32e9f7df343ffea0996dc4a12d1c8dfca64470c186df167d4ebbfbf7c1636ebd. Feb 13 15:42:01.848537 containerd[1517]: time="2025-02-13T15:42:01.848462495Z" level=info msg="CreateContainer within sandbox \"cf21669204940bd52d3962fb3a97c83ad0f1e0bed7c9e8668ea42da6d4a445e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48cce7d9837075bcaa263dc4debdec275a4d453ad3b9875f7a72b5cc1442ade6\"" Feb 13 15:42:01.849134 containerd[1517]: time="2025-02-13T15:42:01.849098669Z" level=info msg="StartContainer for \"48cce7d9837075bcaa263dc4debdec275a4d453ad3b9875f7a72b5cc1442ade6\"" Feb 13 15:42:01.880947 systemd[1]: Started cri-containerd-48cce7d9837075bcaa263dc4debdec275a4d453ad3b9875f7a72b5cc1442ade6.scope - libcontainer container 48cce7d9837075bcaa263dc4debdec275a4d453ad3b9875f7a72b5cc1442ade6. Feb 13 15:42:01.997600 containerd[1517]: time="2025-02-13T15:42:01.997550457Z" level=info msg="CreateContainer within sandbox \"30c413b05ac4c2fbd02b457e9aeaf4e4a4cdbf100a01541d2545b38f92f02281\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa9899cdda8e0eff492fbf322a68df70344655b5f188b1ba25f20aa6dd1e96d3\"" Feb 13 15:42:01.998203 containerd[1517]: time="2025-02-13T15:42:01.997695085Z" level=info msg="StartContainer for \"32e9f7df343ffea0996dc4a12d1c8dfca64470c186df167d4ebbfbf7c1636ebd\" returns successfully" Feb 13 15:42:01.998326 containerd[1517]: time="2025-02-13T15:42:01.997757245Z" level=info msg="StartContainer for \"48cce7d9837075bcaa263dc4debdec275a4d453ad3b9875f7a72b5cc1442ade6\" returns successfully" Feb 13 15:42:01.998369 containerd[1517]: time="2025-02-13T15:42:01.998169629Z" level=info msg="StartContainer for \"aa9899cdda8e0eff492fbf322a68df70344655b5f188b1ba25f20aa6dd1e96d3\"" Feb 13 15:42:02.024862 systemd[1]: Started cri-containerd-aa9899cdda8e0eff492fbf322a68df70344655b5f188b1ba25f20aa6dd1e96d3.scope - libcontainer container aa9899cdda8e0eff492fbf322a68df70344655b5f188b1ba25f20aa6dd1e96d3. 
Feb 13 15:42:02.049192 kubelet[2337]: E0213 15:42:02.049149 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:02.055962 kubelet[2337]: E0213 15:42:02.055769 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:02.144231 containerd[1517]: time="2025-02-13T15:42:02.143694396Z" level=info msg="StartContainer for \"aa9899cdda8e0eff492fbf322a68df70344655b5f188b1ba25f20aa6dd1e96d3\" returns successfully" Feb 13 15:42:02.483650 kubelet[2337]: I0213 15:42:02.483601 2337 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:42:03.061776 kubelet[2337]: E0213 15:42:03.061750 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:03.062032 kubelet[2337]: E0213 15:42:03.062007 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:03.257073 kubelet[2337]: E0213 15:42:03.257028 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:42:03.352583 kubelet[2337]: I0213 15:42:03.352437 2337 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:42:03.360791 kubelet[2337]: E0213 15:42:03.360751 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:42:03.461332 kubelet[2337]: E0213 15:42:03.461276 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:42:03.561985 kubelet[2337]: E0213 15:42:03.561923 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:42:04.069590 kubelet[2337]: E0213 15:42:04.069547 2337 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:04.070185 kubelet[2337]: E0213 15:42:04.070131 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:04.369733 kubelet[2337]: I0213 15:42:04.369598 2337 apiserver.go:52] "Watching apiserver" Feb 13 15:42:04.375230 kubelet[2337]: I0213 15:42:04.375190 2337 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:42:05.073494 kubelet[2337]: E0213 15:42:05.073184 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:05.936676 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-7.scope)... Feb 13 15:42:05.936692 systemd[1]: Reloading... Feb 13 15:42:06.018755 zram_generator::config[2669]: No configuration found. 
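The recurring dns.go:153 warnings fire because the kubelet (like glibc's resolver) honors at most three nameservers; the applied line in the log keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A sketch counting the entries the same way:

```go
// Hedged sketch: count nameserver lines in the host resolv.conf and show
// which three the kubelet would keep, matching the warning above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		// Entries past the third are omitted, as the log warns.
		fmt.Printf("%d nameservers; only %v will be applied\n", len(servers), servers[:3])
	} else {
		fmt.Println("nameservers:", servers)
	}
}
```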
Feb 13 15:42:06.065616 kubelet[2337]: E0213 15:42:06.065592 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:06.126356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:42:06.247328 systemd[1]: Reloading finished in 310 ms. Feb 13 15:42:06.280581 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:42:06.301009 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:42:06.301341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:42:06.301407 systemd[1]: kubelet.service: Consumed 861ms CPU time, 117M memory peak. Feb 13 15:42:06.307335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:42:06.466452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:42:06.470760 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:42:06.514744 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:42:06.514744 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:42:06.514744 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:42:06.514744 kubelet[2711]: I0213 15:42:06.514661 2711 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:42:06.519454 kubelet[2711]: I0213 15:42:06.519428 2711 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:42:06.519454 kubelet[2711]: I0213 15:42:06.519447 2711 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:42:06.519595 kubelet[2711]: I0213 15:42:06.519579 2711 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:42:06.520687 kubelet[2711]: I0213 15:42:06.520655 2711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:42:06.522368 kubelet[2711]: I0213 15:42:06.522327 2711 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:42:06.529906 kubelet[2711]: I0213 15:42:06.529880 2711 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:42:06.530137 kubelet[2711]: I0213 15:42:06.530103 2711 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:42:06.530274 kubelet[2711]: I0213 15:42:06.530125 2711 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:42:06.530358 kubelet[2711]: I0213 15:42:06.530290 2711 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:42:06.530358 kubelet[2711]: I0213 15:42:06.530300 2711 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:42:06.530358 kubelet[2711]: I0213 15:42:06.530340 2711 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:42:06.530437 kubelet[2711]: I0213 15:42:06.530426 2711 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:42:06.530465 kubelet[2711]: I0213 15:42:06.530438 2711 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:42:06.530465 kubelet[2711]: I0213 15:42:06.530459 2711 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:42:06.530513 kubelet[2711]: I0213 15:42:06.530477 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:42:06.530992 kubelet[2711]: I0213 15:42:06.530956 2711 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:42:06.531184 kubelet[2711]: I0213 15:42:06.531163 2711 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:42:06.532160 kubelet[2711]: I0213 15:42:06.532141 2711 server.go:1264] "Started kubelet" Feb 13 15:42:06.535729 kubelet[2711]: I0213 15:42:06.533394 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:42:06.536282 kubelet[2711]: I0213 15:42:06.536219 2711 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:42:06.537245 kubelet[2711]: I0213 15:42:06.537075 2711 volume_manager.go:291] "Starting Kubelet 
Volume Manager" Feb 13 15:42:06.537422 kubelet[2711]: I0213 15:42:06.537411 2711 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:42:06.537614 kubelet[2711]: I0213 15:42:06.537603 2711 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:42:06.538590 kubelet[2711]: I0213 15:42:06.538575 2711 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:42:06.541459 kubelet[2711]: I0213 15:42:06.541423 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:42:06.541910 kubelet[2711]: I0213 15:42:06.541842 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:42:06.541948 kubelet[2711]: E0213 15:42:06.541936 2711 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:42:06.542157 kubelet[2711]: I0213 15:42:06.542138 2711 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:42:06.547118 kubelet[2711]: I0213 15:42:06.547075 2711 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:42:06.547118 kubelet[2711]: I0213 15:42:06.547107 2711 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:42:06.550242 kubelet[2711]: I0213 15:42:06.550105 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:42:06.551325 kubelet[2711]: I0213 15:42:06.551289 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:42:06.551325 kubelet[2711]: I0213 15:42:06.551320 2711 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:42:06.551464 kubelet[2711]: I0213 15:42:06.551343 2711 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:42:06.551464 kubelet[2711]: E0213 15:42:06.551381 2711 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:42:06.573828 kubelet[2711]: I0213 15:42:06.573798 2711 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:42:06.573828 kubelet[2711]: I0213 15:42:06.573812 2711 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:42:06.573828 kubelet[2711]: I0213 15:42:06.573828 2711 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:42:06.574074 kubelet[2711]: I0213 15:42:06.573951 2711 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:42:06.574074 kubelet[2711]: I0213 15:42:06.573961 2711 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:42:06.574074 kubelet[2711]: I0213 15:42:06.573977 2711 policy_none.go:49] "None policy: Start" Feb 13 15:42:06.574469 kubelet[2711]: I0213 15:42:06.574447 2711 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:42:06.574469 kubelet[2711]: I0213 15:42:06.574467 2711 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:42:06.574609 kubelet[2711]: I0213 15:42:06.574574 2711 state_mem.go:75] "Updated machine memory state" Feb 13 15:42:06.579051 kubelet[2711]: I0213 15:42:06.578766 2711 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:42:06.579051 
kubelet[2711]: I0213 15:42:06.578925 2711 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:42:06.580067 kubelet[2711]: I0213 15:42:06.580043 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:42:06.641979 kubelet[2711]: I0213 15:42:06.641934 2711 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:42:06.648202 kubelet[2711]: I0213 15:42:06.648175 2711 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:42:06.648336 kubelet[2711]: I0213 15:42:06.648248 2711 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:42:06.652005 kubelet[2711]: I0213 15:42:06.651974 2711 topology_manager.go:215] "Topology Admit Handler" podUID="053c366d924797b6a04e247e65ab9e6d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:42:06.652118 kubelet[2711]: I0213 15:42:06.652104 2711 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:42:06.652162 kubelet[2711]: I0213 15:42:06.652148 2711 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:42:06.657720 kubelet[2711]: E0213 15:42:06.657524 2711 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:06.839560 kubelet[2711]: I0213 15:42:06.839410 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053c366d924797b6a04e247e65ab9e6d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c366d924797b6a04e247e65ab9e6d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:06.839560 kubelet[2711]: I0213 15:42:06.839457 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053c366d924797b6a04e247e65ab9e6d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053c366d924797b6a04e247e65ab9e6d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:06.839560 kubelet[2711]: I0213 15:42:06.839480 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053c366d924797b6a04e247e65ab9e6d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053c366d924797b6a04e247e65ab9e6d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:06.839560 kubelet[2711]: I0213 15:42:06.839506 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:06.839560 kubelet[2711]: I0213 15:42:06.839534 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:42:06.839864 
kubelet[2711]: I0213 15:42:06.839565 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:06.839864 kubelet[2711]: I0213 15:42:06.839594 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:06.839864 kubelet[2711]: I0213 15:42:06.839616 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:06.839864 kubelet[2711]: I0213 15:42:06.839633 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:42:06.939110 sudo[2745]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:42:06.939481 sudo[2745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:42:06.959585 kubelet[2711]: E0213 15:42:06.959365 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:06.959585 kubelet[2711]: E0213 15:42:06.959498 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:06.959585 kubelet[2711]: E0213 15:42:06.959534 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:07.531657 kubelet[2711]: I0213 15:42:07.531613 2711 apiserver.go:52] "Watching apiserver" Feb 13 15:42:07.538302 kubelet[2711]: I0213 15:42:07.538269 2711 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:42:07.562889 kubelet[2711]: E0213 15:42:07.562551 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:07.563075 kubelet[2711]: E0213 15:42:07.563059 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:07.569443 kubelet[2711]: E0213 15:42:07.569393 2711 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:42:07.570117 kubelet[2711]: 
E0213 15:42:07.570078 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:07.581219 kubelet[2711]: I0213 15:42:07.581115 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.581068987 podStartE2EDuration="2.581068987s" podCreationTimestamp="2025-02-13 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:07.580680455 +0000 UTC m=+1.106186842" watchObservedRunningTime="2025-02-13 15:42:07.581068987 +0000 UTC m=+1.106575373" Feb 13 15:42:07.581280 sudo[2745]: pam_unix(sudo:session): session closed for user root Feb 13 15:42:07.598898 kubelet[2711]: I0213 15:42:07.598824 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5987855949999998 podStartE2EDuration="1.598785595s" podCreationTimestamp="2025-02-13 15:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:07.590173787 +0000 UTC m=+1.115680173" watchObservedRunningTime="2025-02-13 15:42:07.598785595 +0000 UTC m=+1.124291971" Feb 13 15:42:08.563313 kubelet[2711]: E0213 15:42:08.563281 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:08.767589 kubelet[2711]: E0213 15:42:08.767549 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:08.965154 sudo[1700]: pam_unix(sudo:session): session closed for user root Feb 13 15:42:08.966516 sshd[1699]: Connection closed by 10.0.0.1 port 57790 Feb 13 15:42:08.966984 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:08.971233 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:57790.service: Deactivated successfully. Feb 13 15:42:08.973597 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:42:08.973928 systemd[1]: session-7.scope: Consumed 5.676s CPU time, 278.1M memory peak. Feb 13 15:42:08.975328 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:42:08.976216 systemd-logind[1497]: Removed session 7. Feb 13 15:42:09.565580 kubelet[2711]: E0213 15:42:09.565532 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:11.383692 update_engine[1502]: I20250213 15:42:11.383596 1502 update_attempter.cc:509] Updating boot flags... 
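The recurring dns.go:153 errors above are kubelet validating the host's resolv.conf: glibc-based resolvers honor at most three nameserver entries, so kubelet trims the list to the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A minimal sketch of that check against a resolv.conf-style file; this illustrates the rule, it is not kubelet's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors kubelet's limit: glibc-based resolvers only use
// the first three nameserver entries in resolv.conf.
const maxNameservers = 3

// parseNameservers returns the nameserver entries from a resolv.conf-style
// file, capped at maxNameservers, plus a flag saying whether any were dropped.
func parseNameservers(path string) ([]string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, false, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, false, err
	}
	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true, nil
	}
	return servers, false, nil
}

func main() {
	servers, truncated, err := parseNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if truncated {
		// This is the condition behind the "Nameserver limits exceeded" errors.
		fmt.Printf("nameserver limits exceeded, applied nameserver line: %s\n",
			strings.Join(servers, " "))
	}
}
```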
Feb 13 15:42:11.612759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2796) Feb 13 15:42:11.671594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2799) Feb 13 15:42:11.682746 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2799) Feb 13 15:42:12.392243 kubelet[2711]: E0213 15:42:12.392211 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:12.466969 kubelet[2711]: I0213 15:42:12.464971 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.464949329 podStartE2EDuration="6.464949329s" podCreationTimestamp="2025-02-13 15:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:07.599242668 +0000 UTC m=+1.124749054" watchObservedRunningTime="2025-02-13 15:42:12.464949329 +0000 UTC m=+5.990455715" Feb 13 15:42:12.568919 kubelet[2711]: E0213 15:42:12.568886 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:18.143686 kubelet[2711]: E0213 15:42:18.143651 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:18.577607 kubelet[2711]: E0213 15:42:18.577566 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:18.771554 kubelet[2711]: E0213 15:42:18.771513 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:20.769927 kubelet[2711]: I0213 15:42:20.769816 2711 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:42:20.770397 containerd[1517]: time="2025-02-13T15:42:20.770250310Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:42:20.770665 kubelet[2711]: I0213 15:42:20.770541 2711 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:42:20.961679 kubelet[2711]: I0213 15:42:20.961627 2711 topology_manager.go:215] "Topology Admit Handler" podUID="d5e69da2-603c-4c7c-be64-72586f2d0153" podNamespace="kube-system" podName="kube-proxy-5mw6l" Feb 13 15:42:20.973745 kubelet[2711]: I0213 15:42:20.973202 2711 topology_manager.go:215] "Topology Admit Handler" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" podNamespace="kube-system" podName="cilium-8jrth" Feb 13 15:42:20.980060 systemd[1]: Created slice kubepods-besteffort-podd5e69da2_603c_4c7c_be64_72586f2d0153.slice - libcontainer container kubepods-besteffort-podd5e69da2_603c_4c7c_be64_72586f2d0153.slice. Feb 13 15:42:21.001131 systemd[1]: Created slice kubepods-burstable-pod0d59c881_9643_44f2_8e14_f893a95c8137.slice - libcontainer container kubepods-burstable-pod0d59c881_9643_44f2_8e14_f893a95c8137.slice. 
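The "Updating runtime config through cri with podcidr" line is kubelet pushing the node's pod CIDR to the container runtime over the CRI UpdateRuntimeConfig RPC, and containerd's "No cni config template is specified" message is its reply. A sketch of the same call, assuming the usual containerd socket path:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the containerd CRI socket (path assumed; adjust for your host).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Push the pod CIDR to the runtime, as kubelet does once the node
	// object carries one; the CIDR below is the value from the log.
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR handed to the runtime")
}
```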
Feb 13 15:42:21.003489 kubelet[2711]: I0213 15:42:21.003450 2711 topology_manager.go:215] "Topology Admit Handler" podUID="76a9dd05-a84f-4d16-bf36-d4759d088fad" podNamespace="kube-system" podName="cilium-operator-599987898-66c7t" Feb 13 15:42:21.013080 systemd[1]: Created slice kubepods-besteffort-pod76a9dd05_a84f_4d16_bf36_d4759d088fad.slice - libcontainer container kubepods-besteffort-pod76a9dd05_a84f_4d16_bf36_d4759d088fad.slice. Feb 13 15:42:21.127013 kubelet[2711]: I0213 15:42:21.126875 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cni-path\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127013 kubelet[2711]: I0213 15:42:21.126915 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-run\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127013 kubelet[2711]: I0213 15:42:21.126939 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xpt\" (UniqueName: \"kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-kube-api-access-h5xpt\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127013 kubelet[2711]: I0213 15:42:21.126962 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6rn\" (UniqueName: \"kubernetes.io/projected/d5e69da2-603c-4c7c-be64-72586f2d0153-kube-api-access-wd6rn\") pod \"kube-proxy-5mw6l\" (UID: \"d5e69da2-603c-4c7c-be64-72586f2d0153\") " pod="kube-system/kube-proxy-5mw6l" Feb 13 15:42:21.127013 kubelet[2711]: I0213 15:42:21.126998 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-bpf-maps\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127293 kubelet[2711]: I0213 15:42:21.127038 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-hostproc\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127293 kubelet[2711]: I0213 15:42:21.127063 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-config-path\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127293 kubelet[2711]: I0213 15:42:21.127089 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-net\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127293 kubelet[2711]: I0213 15:42:21.127145 2711 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqgjf\" (UniqueName: \"kubernetes.io/projected/76a9dd05-a84f-4d16-bf36-d4759d088fad-kube-api-access-qqgjf\") pod \"cilium-operator-599987898-66c7t\" (UID: \"76a9dd05-a84f-4d16-bf36-d4759d088fad\") " pod="kube-system/cilium-operator-599987898-66c7t" Feb 13 15:42:21.127293 kubelet[2711]: I0213 15:42:21.127170 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-etc-cni-netd\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127460 kubelet[2711]: I0213 15:42:21.127188 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-xtables-lock\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127460 kubelet[2711]: I0213 15:42:21.127209 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-kernel\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127460 kubelet[2711]: I0213 15:42:21.127227 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d59c881-9643-44f2-8e14-f893a95c8137-clustermesh-secrets\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127460 kubelet[2711]: I0213 15:42:21.127245 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76a9dd05-a84f-4d16-bf36-d4759d088fad-cilium-config-path\") pod \"cilium-operator-599987898-66c7t\" (UID: \"76a9dd05-a84f-4d16-bf36-d4759d088fad\") " pod="kube-system/cilium-operator-599987898-66c7t" Feb 13 15:42:21.127460 kubelet[2711]: I0213 15:42:21.127262 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5e69da2-603c-4c7c-be64-72586f2d0153-lib-modules\") pod \"kube-proxy-5mw6l\" (UID: \"d5e69da2-603c-4c7c-be64-72586f2d0153\") " pod="kube-system/kube-proxy-5mw6l" Feb 13 15:42:21.127620 kubelet[2711]: I0213 15:42:21.127280 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-lib-modules\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127620 kubelet[2711]: I0213 15:42:21.127297 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5e69da2-603c-4c7c-be64-72586f2d0153-kube-proxy\") pod \"kube-proxy-5mw6l\" (UID: \"d5e69da2-603c-4c7c-be64-72586f2d0153\") " pod="kube-system/kube-proxy-5mw6l" Feb 13 15:42:21.127620 kubelet[2711]: I0213 15:42:21.127314 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-cgroup\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127620 kubelet[2711]: I0213 15:42:21.127330 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-hubble-tls\") pod \"cilium-8jrth\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " pod="kube-system/cilium-8jrth" Feb 13 15:42:21.127620 kubelet[2711]: I0213 15:42:21.127346 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5e69da2-603c-4c7c-be64-72586f2d0153-xtables-lock\") pod \"kube-proxy-5mw6l\" (UID: \"d5e69da2-603c-4c7c-be64-72586f2d0153\") " pod="kube-system/kube-proxy-5mw6l" Feb 13 15:42:21.297464 kubelet[2711]: E0213 15:42:21.297428 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:21.298143 containerd[1517]: time="2025-02-13T15:42:21.298087425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mw6l,Uid:d5e69da2-603c-4c7c-be64-72586f2d0153,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:21.306274 kubelet[2711]: E0213 15:42:21.306239 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:21.306794 containerd[1517]: time="2025-02-13T15:42:21.306747992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jrth,Uid:0d59c881-9643-44f2-8e14-f893a95c8137,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:21.316298 kubelet[2711]: E0213 15:42:21.316243 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:21.316850 containerd[1517]: time="2025-02-13T15:42:21.316804837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-66c7t,Uid:76a9dd05-a84f-4d16-bf36-d4759d088fad,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:21.339780 containerd[1517]: time="2025-02-13T15:42:21.339510311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:21.339780 containerd[1517]: time="2025-02-13T15:42:21.339579662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:21.339780 containerd[1517]: time="2025-02-13T15:42:21.339589741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:21.339780 containerd[1517]: time="2025-02-13T15:42:21.339656106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:21.340118 containerd[1517]: time="2025-02-13T15:42:21.339854822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:21.341318 containerd[1517]: time="2025-02-13T15:42:21.340957554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:21.341318 containerd[1517]: time="2025-02-13T15:42:21.341122425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:21.341318 containerd[1517]: time="2025-02-13T15:42:21.341214640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:21.361841 containerd[1517]: time="2025-02-13T15:42:21.361738775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:21.362013 containerd[1517]: time="2025-02-13T15:42:21.361823965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:21.362013 containerd[1517]: time="2025-02-13T15:42:21.361850896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:21.362953 containerd[1517]: time="2025-02-13T15:42:21.362877815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:21.368521 systemd[1]: Started cri-containerd-0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba.scope - libcontainer container 0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba. Feb 13 15:42:21.372800 systemd[1]: Started cri-containerd-9f5135e22c5d86d6db95431360cb3760886dadd9fdc653c0bbd4f2c5b3976c8c.scope - libcontainer container 9f5135e22c5d86d6db95431360cb3760886dadd9fdc653c0bbd4f2c5b3976c8c. Feb 13 15:42:21.381544 systemd[1]: Started cri-containerd-a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a.scope - libcontainer container a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a. 
Feb 13 15:42:21.408545 containerd[1517]: time="2025-02-13T15:42:21.408475948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jrth,Uid:0d59c881-9643-44f2-8e14-f893a95c8137,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\"" Feb 13 15:42:21.409472 kubelet[2711]: E0213 15:42:21.409441 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:21.413482 containerd[1517]: time="2025-02-13T15:42:21.413175965Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:42:21.413482 containerd[1517]: time="2025-02-13T15:42:21.413354251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mw6l,Uid:d5e69da2-603c-4c7c-be64-72586f2d0153,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f5135e22c5d86d6db95431360cb3760886dadd9fdc653c0bbd4f2c5b3976c8c\"" Feb 13 15:42:21.414332 kubelet[2711]: E0213 15:42:21.414286 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:21.416719 containerd[1517]: time="2025-02-13T15:42:21.416673751Z" level=info msg="CreateContainer within sandbox \"9f5135e22c5d86d6db95431360cb3760886dadd9fdc653c0bbd4f2c5b3976c8c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:42:21.438752 containerd[1517]: time="2025-02-13T15:42:21.438657522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-66c7t,Uid:76a9dd05-a84f-4d16-bf36-d4759d088fad,Namespace:kube-system,Attempt:0,} returns sandbox id \"a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a\"" Feb 13 15:42:21.439549 kubelet[2711]: E0213 15:42:21.439517 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:21.614834 containerd[1517]: time="2025-02-13T15:42:21.614739220Z" level=info msg="CreateContainer within sandbox \"9f5135e22c5d86d6db95431360cb3760886dadd9fdc653c0bbd4f2c5b3976c8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b54861564f992ca10ac8ff8906de55c3fa60790a161cd345a88b965d96048ca\"" Feb 13 15:42:21.615468 containerd[1517]: time="2025-02-13T15:42:21.615440515Z" level=info msg="StartContainer for \"1b54861564f992ca10ac8ff8906de55c3fa60790a161cd345a88b965d96048ca\"" Feb 13 15:42:21.651035 systemd[1]: Started cri-containerd-1b54861564f992ca10ac8ff8906de55c3fa60790a161cd345a88b965d96048ca.scope - libcontainer container 1b54861564f992ca10ac8ff8906de55c3fa60790a161cd345a88b965d96048ca. 
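Each "returns sandbox id" line closes a CRI RunPodSandbox call; kubelet then creates and starts containers against that sandbox id, which is the sequence visible in the records above. A trimmed sketch of the follow-up CreateContainer/StartContainer calls, reusing the kube-proxy sandbox id from the log; the image tag and command are assumptions, and a real caller also passes the sandbox's original PodSandboxConfig:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox id as returned by RunPodSandbox (the kube-proxy sandbox above).
	sandboxID := "9f5135e22c5d86d6db95431360cb3760886dadd9fdc653c0bbd4f2c5b3976c8c"

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Image tag and command are hypothetical stand-ins.
			Image:   &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"},
			Command: []string{"/usr/local/bin/kube-proxy"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s", created.ContainerId)
}
```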
Feb 13 15:42:21.746561 containerd[1517]: time="2025-02-13T15:42:21.746464030Z" level=info msg="StartContainer for \"1b54861564f992ca10ac8ff8906de55c3fa60790a161cd345a88b965d96048ca\" returns successfully" Feb 13 15:42:22.589481 kubelet[2711]: E0213 15:42:22.589424 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:23.591290 kubelet[2711]: E0213 15:42:23.591233 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:31.375664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555669792.mount: Deactivated successfully. Feb 13 15:42:35.173737 containerd[1517]: time="2025-02-13T15:42:35.173227399Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:35.176165 containerd[1517]: time="2025-02-13T15:42:35.176103779Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:42:35.178621 containerd[1517]: time="2025-02-13T15:42:35.178561140Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:35.180658 containerd[1517]: time="2025-02-13T15:42:35.180573655Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.767358917s" Feb 13 15:42:35.180658 containerd[1517]: time="2025-02-13T15:42:35.180615353Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:42:35.181862 containerd[1517]: time="2025-02-13T15:42:35.181830688Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:42:35.186849 containerd[1517]: time="2025-02-13T15:42:35.186662214Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:42:35.208544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803364977.mount: Deactivated successfully. 
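The pull that completes above fetched the cilium image by digest (~166.7 MB in ~13.77 s). Outside of kubelet, the same pull can be reproduced with the containerd Go client; note that Kubernetes-managed images live in containerd's "k8s.io" namespace, and the socket path is an assumption:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd directly (not via CRI); socket path assumed.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by digest, as kubelet did for the cilium image above.
	ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%d bytes)", img.Name(), size)
}
```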
Feb 13 15:42:35.304470 containerd[1517]: time="2025-02-13T15:42:35.304383195Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\"" Feb 13 15:42:35.305098 containerd[1517]: time="2025-02-13T15:42:35.305044038Z" level=info msg="StartContainer for \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\"" Feb 13 15:42:35.338846 systemd[1]: Started cri-containerd-cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2.scope - libcontainer container cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2. Feb 13 15:42:35.438405 systemd[1]: cri-containerd-cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2.scope: Deactivated successfully. Feb 13 15:42:35.438889 systemd[1]: cri-containerd-cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2.scope: Consumed 30ms CPU time, 7.1M memory peak, 4K read from disk, 3.2M written to disk. Feb 13 15:42:35.455974 containerd[1517]: time="2025-02-13T15:42:35.455919507Z" level=info msg="StartContainer for \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\" returns successfully" Feb 13 15:42:35.834509 kubelet[2711]: E0213 15:42:35.834237 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:36.045543 kubelet[2711]: I0213 15:42:36.045464 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5mw6l" podStartSLOduration=16.045446099 podStartE2EDuration="16.045446099s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:22.694104037 +0000 UTC m=+16.219610423" watchObservedRunningTime="2025-02-13 15:42:36.045446099 +0000 UTC m=+29.570952485" Feb 13 15:42:36.204379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2-rootfs.mount: Deactivated successfully. Feb 13 15:42:36.585355 containerd[1517]: time="2025-02-13T15:42:36.585034217Z" level=info msg="shim disconnected" id=cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2 namespace=k8s.io Feb 13 15:42:36.585355 containerd[1517]: time="2025-02-13T15:42:36.585095863Z" level=warning msg="cleaning up after shim disconnected" id=cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2 namespace=k8s.io Feb 13 15:42:36.585355 containerd[1517]: time="2025-02-13T15:42:36.585105301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:42:36.837619 kubelet[2711]: E0213 15:42:36.837478 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:36.840388 containerd[1517]: time="2025-02-13T15:42:36.840332311Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:42:36.867319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989509439.mount: Deactivated successfully. 
Feb 13 15:42:36.883123 containerd[1517]: time="2025-02-13T15:42:36.883063514Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\"" Feb 13 15:42:36.885130 containerd[1517]: time="2025-02-13T15:42:36.885081058Z" level=info msg="StartContainer for \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\"" Feb 13 15:42:36.919048 systemd[1]: Started cri-containerd-52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853.scope - libcontainer container 52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853. Feb 13 15:42:36.949819 containerd[1517]: time="2025-02-13T15:42:36.949773192Z" level=info msg="StartContainer for \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\" returns successfully" Feb 13 15:42:36.965885 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:42:36.966203 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:42:36.966818 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:42:36.974195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:42:36.974504 systemd[1]: cri-containerd-52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853.scope: Deactivated successfully. Feb 13 15:42:36.992264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:42:37.011588 containerd[1517]: time="2025-02-13T15:42:37.011504617Z" level=info msg="shim disconnected" id=52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853 namespace=k8s.io Feb 13 15:42:37.011588 containerd[1517]: time="2025-02-13T15:42:37.011565061Z" level=warning msg="cleaning up after shim disconnected" id=52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853 namespace=k8s.io Feb 13 15:42:37.011588 containerd[1517]: time="2025-02-13T15:42:37.011575351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:42:37.205064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853-rootfs.mount: Deactivated successfully. Feb 13 15:42:37.894143 kubelet[2711]: E0213 15:42:37.894097 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:37.895732 containerd[1517]: time="2025-02-13T15:42:37.895661198Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:42:38.616516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212472347.mount: Deactivated successfully. Feb 13 15:42:39.384795 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:44644.service - OpenSSH per-connection server daemon (10.0.0.1:44644). 
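cilium's apply-sysctl-overwrites init container (and the systemd-sysctl restart triggered around it) adjusts kernel parameters through /proc/sys. A sketch of that mechanism; the specific key and value are illustrative assumptions, not read from the log:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl sets one kernel parameter via /proc/sys, the same mechanism
// the apply-sysctl-overwrites init container relies on.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example key only: cilium adjusts rp_filter among other parameters.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sysctl applied")
}
```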
Feb 13 15:42:39.416510 containerd[1517]: time="2025-02-13T15:42:39.416427553Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\"" Feb 13 15:42:39.417218 containerd[1517]: time="2025-02-13T15:42:39.417188934Z" level=info msg="StartContainer for \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\"" Feb 13 15:42:39.444688 sshd[3250]: Accepted publickey for core from 10.0.0.1 port 44644 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:42:39.446845 sshd-session[3250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:39.447937 systemd[1]: Started cri-containerd-5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e.scope - libcontainer container 5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e. Feb 13 15:42:39.454379 systemd-logind[1497]: New session 8 of user core. Feb 13 15:42:39.462921 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:42:39.492149 containerd[1517]: time="2025-02-13T15:42:39.491798697Z" level=info msg="StartContainer for \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\" returns successfully" Feb 13 15:42:39.492457 systemd[1]: cri-containerd-5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e.scope: Deactivated successfully. Feb 13 15:42:39.591012 containerd[1517]: time="2025-02-13T15:42:39.590763731Z" level=info msg="shim disconnected" id=5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e namespace=k8s.io Feb 13 15:42:39.591012 containerd[1517]: time="2025-02-13T15:42:39.590823584Z" level=warning msg="cleaning up after shim disconnected" id=5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e namespace=k8s.io Feb 13 15:42:39.591012 containerd[1517]: time="2025-02-13T15:42:39.590831900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:42:39.613755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e-rootfs.mount: Deactivated successfully. Feb 13 15:42:39.648554 sshd[3280]: Connection closed by 10.0.0.1 port 44644 Feb 13 15:42:39.649575 sshd-session[3250]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:39.656969 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:44644.service: Deactivated successfully. Feb 13 15:42:39.660444 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:42:39.662023 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:42:39.663239 systemd-logind[1497]: Removed session 8. 
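The mount-bpf-fs init container that runs above ensures the BPF filesystem is mounted at /sys/fs/bpf, the equivalent of `mount -t bpf bpf /sys/fs/bpf`. A sketch of the same mount via the unix package:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"

	// Ensure the mountpoint exists, then mount bpffs on it.
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := unix.Mount("bpf", target, "bpf", 0, ""); err != nil {
		// EBUSY typically means bpffs is already mounted there.
		log.Fatal(err)
	}
	fmt.Println("bpffs mounted at", target)
}
```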
Feb 13 15:42:39.899233 kubelet[2711]: E0213 15:42:39.899102 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:39.901240 containerd[1517]: time="2025-02-13T15:42:39.901176738Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:42:40.530430 containerd[1517]: time="2025-02-13T15:42:40.530354123Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\"" Feb 13 15:42:40.531194 containerd[1517]: time="2025-02-13T15:42:40.531084766Z" level=info msg="StartContainer for \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\"" Feb 13 15:42:40.567916 systemd[1]: Started cri-containerd-5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9.scope - libcontainer container 5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9. Feb 13 15:42:40.595522 systemd[1]: cri-containerd-5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9.scope: Deactivated successfully. Feb 13 15:42:40.657366 containerd[1517]: time="2025-02-13T15:42:40.657208455Z" level=info msg="StartContainer for \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\" returns successfully" Feb 13 15:42:40.681051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9-rootfs.mount: Deactivated successfully. 
Feb 13 15:42:40.859765 containerd[1517]: time="2025-02-13T15:42:40.859583593Z" level=info msg="shim disconnected" id=5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9 namespace=k8s.io Feb 13 15:42:40.859765 containerd[1517]: time="2025-02-13T15:42:40.859664726Z" level=warning msg="cleaning up after shim disconnected" id=5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9 namespace=k8s.io Feb 13 15:42:40.859765 containerd[1517]: time="2025-02-13T15:42:40.859677340Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:42:40.902833 kubelet[2711]: E0213 15:42:40.902805 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:40.905087 containerd[1517]: time="2025-02-13T15:42:40.905030059Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:42:40.926559 containerd[1517]: time="2025-02-13T15:42:40.926499299Z" level=info msg="CreateContainer within sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\"" Feb 13 15:42:40.927163 containerd[1517]: time="2025-02-13T15:42:40.927127631Z" level=info msg="StartContainer for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\"" Feb 13 15:42:40.958896 systemd[1]: Started cri-containerd-3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c.scope - libcontainer container 3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c. Feb 13 15:42:40.990877 containerd[1517]: time="2025-02-13T15:42:40.990826613Z" level=info msg="StartContainer for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" returns successfully" Feb 13 15:42:41.086636 kubelet[2711]: I0213 15:42:41.086588 2711 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:42:41.142010 kubelet[2711]: I0213 15:42:41.141846 2711 topology_manager.go:215] "Topology Admit Handler" podUID="67ba5c46-39d0-4718-b94d-26f8e1c503c4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hwpnn" Feb 13 15:42:41.149449 systemd[1]: Created slice kubepods-burstable-pod67ba5c46_39d0_4718_b94d_26f8e1c503c4.slice - libcontainer container kubepods-burstable-pod67ba5c46_39d0_4718_b94d_26f8e1c503c4.slice. Feb 13 15:42:41.233397 kubelet[2711]: I0213 15:42:41.233357 2711 topology_manager.go:215] "Topology Admit Handler" podUID="cd6cb10d-3703-4533-8bec-25beef4ff667" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fhcrq" Feb 13 15:42:41.241477 systemd[1]: Created slice kubepods-burstable-podcd6cb10d_3703_4533_8bec_25beef4ff667.slice - libcontainer container kubepods-burstable-podcd6cb10d_3703_4533_8bec_25beef4ff667.slice. 
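Once the cilium-agent container starts, kubelet flips the node to Ready ("Fast updating node status as it just became ready"), which is what lets the two pending coredns pods admit and schedule. A sketch that reads the Ready condition back with client-go; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed; any admin kubeconfig on the node works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Node name taken from the log ("Successfully registered node" node="localhost").
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```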
Feb 13 15:42:41.258484 kubelet[2711]: I0213 15:42:41.258453 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ba5c46-39d0-4718-b94d-26f8e1c503c4-config-volume\") pod \"coredns-7db6d8ff4d-hwpnn\" (UID: \"67ba5c46-39d0-4718-b94d-26f8e1c503c4\") " pod="kube-system/coredns-7db6d8ff4d-hwpnn" Feb 13 15:42:41.258484 kubelet[2711]: I0213 15:42:41.258489 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf864\" (UniqueName: \"kubernetes.io/projected/67ba5c46-39d0-4718-b94d-26f8e1c503c4-kube-api-access-bf864\") pod \"coredns-7db6d8ff4d-hwpnn\" (UID: \"67ba5c46-39d0-4718-b94d-26f8e1c503c4\") " pod="kube-system/coredns-7db6d8ff4d-hwpnn" Feb 13 15:42:41.359323 kubelet[2711]: I0213 15:42:41.359249 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd6cb10d-3703-4533-8bec-25beef4ff667-config-volume\") pod \"coredns-7db6d8ff4d-fhcrq\" (UID: \"cd6cb10d-3703-4533-8bec-25beef4ff667\") " pod="kube-system/coredns-7db6d8ff4d-fhcrq" Feb 13 15:42:41.359323 kubelet[2711]: I0213 15:42:41.359300 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr95g\" (UniqueName: \"kubernetes.io/projected/cd6cb10d-3703-4533-8bec-25beef4ff667-kube-api-access-rr95g\") pod \"coredns-7db6d8ff4d-fhcrq\" (UID: \"cd6cb10d-3703-4533-8bec-25beef4ff667\") " pod="kube-system/coredns-7db6d8ff4d-fhcrq" Feb 13 15:42:41.452956 kubelet[2711]: E0213 15:42:41.452896 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:41.454208 containerd[1517]: time="2025-02-13T15:42:41.454011003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwpnn,Uid:67ba5c46-39d0-4718-b94d-26f8e1c503c4,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:41.546511 kubelet[2711]: E0213 15:42:41.546466 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:41.547194 containerd[1517]: time="2025-02-13T15:42:41.547049110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fhcrq,Uid:cd6cb10d-3703-4533-8bec-25beef4ff667,Namespace:kube-system,Attempt:0,}" Feb 13 15:42:41.907866 kubelet[2711]: E0213 15:42:41.907740 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:41.975360 kubelet[2711]: I0213 15:42:41.975293 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8jrth" podStartSLOduration=8.2042842 podStartE2EDuration="21.975269999s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="2025-02-13 15:42:21.410621209 +0000 UTC m=+14.936127605" lastFinishedPulling="2025-02-13 15:42:35.181607018 +0000 UTC m=+28.707113404" observedRunningTime="2025-02-13 15:42:41.974645967 +0000 UTC m=+35.500152373" watchObservedRunningTime="2025-02-13 15:42:41.975269999 +0000 UTC m=+35.500776385" Feb 13 15:42:42.909845 kubelet[2711]: E0213 15:42:42.909804 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:43.911452 kubelet[2711]: E0213 15:42:43.911421 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:44.049150 containerd[1517]: time="2025-02-13T15:42:44.049075431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:44.049956 containerd[1517]: time="2025-02-13T15:42:44.049891794Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:42:44.051133 containerd[1517]: time="2025-02-13T15:42:44.051097959Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:42:44.052531 containerd[1517]: time="2025-02-13T15:42:44.052493401Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.870630552s" Feb 13 15:42:44.052598 containerd[1517]: time="2025-02-13T15:42:44.052529639Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:42:44.054909 containerd[1517]: time="2025-02-13T15:42:44.054839617Z" level=info msg="CreateContainer within sandbox \"a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:42:44.071092 containerd[1517]: time="2025-02-13T15:42:44.071049651Z" level=info msg="CreateContainer within sandbox \"a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\"" Feb 13 15:42:44.071694 containerd[1517]: time="2025-02-13T15:42:44.071642674Z" level=info msg="StartContainer for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\"" Feb 13 15:42:44.124865 systemd[1]: Started cri-containerd-ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892.scope - libcontainer container ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892. Feb 13 15:42:44.158255 containerd[1517]: time="2025-02-13T15:42:44.157764971Z" level=info msg="StartContainer for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" returns successfully" Feb 13 15:42:44.661185 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:44652.service - OpenSSH per-connection server daemon (10.0.0.1:44652). 
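The podStartSLOduration fields above are consistent with end-to-end startup time minus image-pull time: for cilium-8jrth, 21.975269999 s E2E minus (35.181607018 − 21.410621209) = 13.770985809 s of pulling leaves the logged 8.2042842 s. The same arithmetic in Go, with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// Layout matching the timestamps in the pod_startup_latency_tracker fields.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		log.Fatal(err)
	}
	return t
}

func main() {
	// Values for cilium-8jrth, copied from the log above.
	created := parse("2025-02-13 15:42:20 +0000 UTC")
	firstPull := parse("2025-02-13 15:42:21.410621209 +0000 UTC")
	lastPull := parse("2025-02-13 15:42:35.181607018 +0000 UTC")
	running := parse("2025-02-13 15:42:41.975269999 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 21.975269999s
	slo := e2e - lastPull.Sub(firstPull) // minus image-pull time
	fmt.Printf("E2E=%v SLO=%v\n", e2e, slo)
	// Prints SLO=8.20428419s, matching the logged podStartSLOduration.
}
```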
Feb 13 15:42:44.706581 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 44652 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:42:44.711554 sshd-session[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:44.725061 systemd-logind[1497]: New session 9 of user core. Feb 13 15:42:44.734938 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:42:44.880502 sshd[3574]: Connection closed by 10.0.0.1 port 44652 Feb 13 15:42:44.881366 sshd-session[3572]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:44.885241 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:44652.service: Deactivated successfully. Feb 13 15:42:44.887889 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:42:44.888690 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:42:44.890038 systemd-logind[1497]: Removed session 9. Feb 13 15:42:44.915837 kubelet[2711]: E0213 15:42:44.914510 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:45.916496 kubelet[2711]: E0213 15:42:45.916437 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:48.172135 systemd-networkd[1448]: cilium_host: Link UP Feb 13 15:42:48.172608 systemd-networkd[1448]: cilium_net: Link UP Feb 13 15:42:48.173259 systemd-networkd[1448]: cilium_net: Gained carrier Feb 13 15:42:48.173532 systemd-networkd[1448]: cilium_host: Gained carrier Feb 13 15:42:48.294877 systemd-networkd[1448]: cilium_vxlan: Link UP Feb 13 15:42:48.294888 systemd-networkd[1448]: cilium_vxlan: Gained carrier Feb 13 15:42:48.513741 kernel: NET: Registered PF_ALG protocol family Feb 13 15:42:48.879863 systemd-networkd[1448]: cilium_net: Gained IPv6LL Feb 13 15:42:49.136892 systemd-networkd[1448]: cilium_host: Gained IPv6LL Feb 13 15:42:49.228151 systemd-networkd[1448]: lxc_health: Link UP Feb 13 15:42:49.238133 systemd-networkd[1448]: lxc_health: Gained carrier Feb 13 15:42:49.317055 kubelet[2711]: E0213 15:42:49.317017 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:49.378773 kubelet[2711]: I0213 15:42:49.377629 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-66c7t" podStartSLOduration=6.764640595 podStartE2EDuration="29.377612552s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="2025-02-13 15:42:21.440358034 +0000 UTC m=+14.965864420" lastFinishedPulling="2025-02-13 15:42:44.053329981 +0000 UTC m=+37.578836377" observedRunningTime="2025-02-13 15:42:45.037911925 +0000 UTC m=+38.563418311" watchObservedRunningTime="2025-02-13 15:42:49.377612552 +0000 UTC m=+42.903118948" Feb 13 15:42:49.450621 systemd-networkd[1448]: lxc5f03debcfa67: Link UP Feb 13 15:42:49.481762 kernel: eth0: renamed from tmp47bbe Feb 13 15:42:49.490143 systemd-networkd[1448]: lxc5f03debcfa67: Gained carrier Feb 13 15:42:49.499830 systemd-networkd[1448]: lxcc1606457707a: Link UP Feb 13 15:42:49.510773 kernel: eth0: renamed from tmp91a20 Feb 13 15:42:49.518660 systemd-networkd[1448]: lxcc1606457707a: Gained carrier Feb 13 15:42:49.775883 systemd-networkd[1448]: cilium_vxlan: Gained 
IPv6LL Feb 13 15:42:49.900950 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:45268.service - OpenSSH per-connection server daemon (10.0.0.1:45268). Feb 13 15:42:49.925737 kubelet[2711]: E0213 15:42:49.924142 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:49.941168 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 45268 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:42:49.942718 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:49.947793 systemd-logind[1497]: New session 10 of user core. Feb 13 15:42:49.956989 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:42:50.090216 sshd[3962]: Connection closed by 10.0.0.1 port 45268 Feb 13 15:42:50.090527 sshd-session[3960]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:50.094848 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:45268.service: Deactivated successfully. Feb 13 15:42:50.097332 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:42:50.098084 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:42:50.099136 systemd-logind[1497]: Removed session 10. Feb 13 15:42:50.863862 systemd-networkd[1448]: lxcc1606457707a: Gained IPv6LL Feb 13 15:42:50.926806 kubelet[2711]: E0213 15:42:50.926765 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:51.183929 systemd-networkd[1448]: lxc5f03debcfa67: Gained IPv6LL Feb 13 15:42:51.311899 systemd-networkd[1448]: lxc_health: Gained IPv6LL Feb 13 15:42:53.150484 containerd[1517]: time="2025-02-13T15:42:53.149822719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:53.150484 containerd[1517]: time="2025-02-13T15:42:53.150466809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:53.150484 containerd[1517]: time="2025-02-13T15:42:53.150489752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:53.150969 containerd[1517]: time="2025-02-13T15:42:53.150578989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:53.165012 systemd[1]: run-containerd-runc-k8s.io-47bbe579b121d2a21016705097cd742d50f0ee2c50a40f94b6a6483b540af6f9-runc.BTxyer.mount: Deactivated successfully. Feb 13 15:42:53.172950 systemd[1]: Started cri-containerd-47bbe579b121d2a21016705097cd742d50f0ee2c50a40f94b6a6483b540af6f9.scope - libcontainer container 47bbe579b121d2a21016705097cd742d50f0ee2c50a40f94b6a6483b540af6f9. 
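The systemd-networkd lines above are cilium's datapath devices coming up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and one lxc* veth per endpoint (lxc_health plus one for each coredns pod). A sketch that inspects those links with the netlink library cilium itself builds on, device names copied from the log:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Device names copied from the systemd-networkd records above.
	for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err)
			continue
		}
		a := link.Attrs()
		fmt.Printf("%-13s type=%-6s mtu=%-5d state=%s\n", name, link.Type(), a.MTU, a.OperState)
	}
}
```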
Feb 13 15:42:53.184600 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:42:53.208631 containerd[1517]: time="2025-02-13T15:42:53.208593017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwpnn,Uid:67ba5c46-39d0-4718-b94d-26f8e1c503c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"47bbe579b121d2a21016705097cd742d50f0ee2c50a40f94b6a6483b540af6f9\"" Feb 13 15:42:53.209245 kubelet[2711]: E0213 15:42:53.209210 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:53.211592 containerd[1517]: time="2025-02-13T15:42:53.211559874Z" level=info msg="CreateContainer within sandbox \"47bbe579b121d2a21016705097cd742d50f0ee2c50a40f94b6a6483b540af6f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:42:53.272292 containerd[1517]: time="2025-02-13T15:42:53.272200802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:42:53.272292 containerd[1517]: time="2025-02-13T15:42:53.272263560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:42:53.272292 containerd[1517]: time="2025-02-13T15:42:53.272274981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:53.272516 containerd[1517]: time="2025-02-13T15:42:53.272370610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:42:53.301915 systemd[1]: Started cri-containerd-91a20268a82aadb5aff6a8773724c36d4dabb86fb18ab5d4a7568049d3f19652.scope - libcontainer container 91a20268a82aadb5aff6a8773724c36d4dabb86fb18ab5d4a7568049d3f19652. 
Feb 13 15:42:53.315046 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:42:53.342718 containerd[1517]: time="2025-02-13T15:42:53.342659735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fhcrq,Uid:cd6cb10d-3703-4533-8bec-25beef4ff667,Namespace:kube-system,Attempt:0,} returns sandbox id \"91a20268a82aadb5aff6a8773724c36d4dabb86fb18ab5d4a7568049d3f19652\"" Feb 13 15:42:53.343519 kubelet[2711]: E0213 15:42:53.343473 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:53.345431 containerd[1517]: time="2025-02-13T15:42:53.345401199Z" level=info msg="CreateContainer within sandbox \"91a20268a82aadb5aff6a8773724c36d4dabb86fb18ab5d4a7568049d3f19652\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:42:53.969646 containerd[1517]: time="2025-02-13T15:42:53.969586812Z" level=info msg="CreateContainer within sandbox \"47bbe579b121d2a21016705097cd742d50f0ee2c50a40f94b6a6483b540af6f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdc000e64407daa470f9eac52c84f8b5fd68eb3431022c5b58ae4e243b01d590\"" Feb 13 15:42:53.970159 containerd[1517]: time="2025-02-13T15:42:53.970112870Z" level=info msg="StartContainer for \"bdc000e64407daa470f9eac52c84f8b5fd68eb3431022c5b58ae4e243b01d590\"" Feb 13 15:42:53.975323 containerd[1517]: time="2025-02-13T15:42:53.975217760Z" level=info msg="CreateContainer within sandbox \"91a20268a82aadb5aff6a8773724c36d4dabb86fb18ab5d4a7568049d3f19652\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7f07e102221d0c742355f89f045e6358eeff05ee581c07460eb0bc4c698f426\"" Feb 13 15:42:53.975897 containerd[1517]: time="2025-02-13T15:42:53.975842342Z" level=info msg="StartContainer for \"f7f07e102221d0c742355f89f045e6358eeff05ee581c07460eb0bc4c698f426\"" Feb 13 15:42:54.008865 systemd[1]: Started cri-containerd-bdc000e64407daa470f9eac52c84f8b5fd68eb3431022c5b58ae4e243b01d590.scope - libcontainer container bdc000e64407daa470f9eac52c84f8b5fd68eb3431022c5b58ae4e243b01d590. Feb 13 15:42:54.011983 systemd[1]: Started cri-containerd-f7f07e102221d0c742355f89f045e6358eeff05ee581c07460eb0bc4c698f426.scope - libcontainer container f7f07e102221d0c742355f89f045e6358eeff05ee581c07460eb0bc4c698f426. 
Feb 13 15:42:54.065025 containerd[1517]: time="2025-02-13T15:42:54.064874886Z" level=info msg="StartContainer for \"bdc000e64407daa470f9eac52c84f8b5fd68eb3431022c5b58ae4e243b01d590\" returns successfully" Feb 13 15:42:54.065025 containerd[1517]: time="2025-02-13T15:42:54.064882661Z" level=info msg="StartContainer for \"f7f07e102221d0c742355f89f045e6358eeff05ee581c07460eb0bc4c698f426\" returns successfully" Feb 13 15:42:54.938349 kubelet[2711]: E0213 15:42:54.937987 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:54.940130 kubelet[2711]: E0213 15:42:54.940088 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:54.949730 kubelet[2711]: I0213 15:42:54.947956 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhcrq" podStartSLOduration=33.947935093 podStartE2EDuration="33.947935093s" podCreationTimestamp="2025-02-13 15:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:54.947220502 +0000 UTC m=+48.472726898" watchObservedRunningTime="2025-02-13 15:42:54.947935093 +0000 UTC m=+48.473441489" Feb 13 15:42:54.972866 kubelet[2711]: I0213 15:42:54.972687 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hwpnn" podStartSLOduration=34.972665973 podStartE2EDuration="34.972665973s" podCreationTimestamp="2025-02-13 15:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:42:54.961471035 +0000 UTC m=+48.486977421" watchObservedRunningTime="2025-02-13 15:42:54.972665973 +0000 UTC m=+48.498172359" Feb 13 15:42:55.104032 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:45278.service - OpenSSH per-connection server daemon (10.0.0.1:45278). Feb 13 15:42:55.146480 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 45278 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:42:55.148275 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:55.153315 systemd-logind[1497]: New session 11 of user core. Feb 13 15:42:55.161882 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:42:55.287562 sshd[4166]: Connection closed by 10.0.0.1 port 45278 Feb 13 15:42:55.287886 sshd-session[4164]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:55.292437 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:45278.service: Deactivated successfully. Feb 13 15:42:55.294679 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:42:55.295577 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:42:55.296542 systemd-logind[1497]: Removed session 11. 
Feb 13 15:42:55.942410 kubelet[2711]: E0213 15:42:55.942378 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:55.943007 kubelet[2711]: E0213 15:42:55.942466 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:56.943510 kubelet[2711]: E0213 15:42:56.943474 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:42:56.943510 kubelet[2711]: E0213 15:42:56.943476 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:00.305446 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:59342.service - OpenSSH per-connection server daemon (10.0.0.1:59342). Feb 13 15:43:00.342195 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 59342 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:00.343651 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:00.348204 systemd-logind[1497]: New session 12 of user core. Feb 13 15:43:00.359827 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:43:00.481662 sshd[4183]: Connection closed by 10.0.0.1 port 59342 Feb 13 15:43:00.482135 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:00.492571 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:59342.service: Deactivated successfully. Feb 13 15:43:00.494550 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:43:00.496312 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:43:00.508963 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:59346.service - OpenSSH per-connection server daemon (10.0.0.1:59346). Feb 13 15:43:00.510065 systemd-logind[1497]: Removed session 12. Feb 13 15:43:00.548373 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 59346 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:00.549885 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:00.554987 systemd-logind[1497]: New session 13 of user core. Feb 13 15:43:00.566845 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:43:00.979694 sshd[4200]: Connection closed by 10.0.0.1 port 59346 Feb 13 15:43:00.980261 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:00.992762 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:59346.service: Deactivated successfully. Feb 13 15:43:00.994947 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:43:00.996519 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:43:00.997948 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:59354.service - OpenSSH per-connection server daemon (10.0.0.1:59354). Feb 13 15:43:00.998762 systemd-logind[1497]: Removed session 13. 
Feb 13 15:43:01.048038 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 59354 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:01.049860 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:01.054916 systemd-logind[1497]: New session 14 of user core. Feb 13 15:43:01.064846 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:43:01.375790 sshd[4213]: Connection closed by 10.0.0.1 port 59354 Feb 13 15:43:01.376035 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:01.381147 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:59354.service: Deactivated successfully. Feb 13 15:43:01.383885 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:43:01.384758 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:43:01.385786 systemd-logind[1497]: Removed session 14. Feb 13 15:43:06.391376 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:59364.service - OpenSSH per-connection server daemon (10.0.0.1:59364). Feb 13 15:43:06.426803 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 59364 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:06.428129 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:06.432017 systemd-logind[1497]: New session 15 of user core. Feb 13 15:43:06.441817 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:43:06.554215 sshd[4229]: Connection closed by 10.0.0.1 port 59364 Feb 13 15:43:06.554645 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:06.558940 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:59364.service: Deactivated successfully. Feb 13 15:43:06.561116 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:43:06.561918 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:43:06.562778 systemd-logind[1497]: Removed session 15. Feb 13 15:43:11.571122 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:54454.service - OpenSSH per-connection server daemon (10.0.0.1:54454). Feb 13 15:43:11.615840 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 54454 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:11.617494 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:11.622166 systemd-logind[1497]: New session 16 of user core. Feb 13 15:43:11.631851 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:43:11.757726 sshd[4246]: Connection closed by 10.0.0.1 port 54454 Feb 13 15:43:11.758147 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:11.762964 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:54454.service: Deactivated successfully. Feb 13 15:43:11.764922 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:43:11.765798 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:43:11.766784 systemd-logind[1497]: Removed session 16. Feb 13 15:43:16.770890 systemd[1]: Started sshd@16-10.0.0.38:22-10.0.0.1:54466.service - OpenSSH per-connection server daemon (10.0.0.1:54466). 
Feb 13 15:43:16.808227 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 54466 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:16.809844 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:16.814065 systemd-logind[1497]: New session 17 of user core. Feb 13 15:43:16.823834 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:43:16.934092 sshd[4261]: Connection closed by 10.0.0.1 port 54466 Feb 13 15:43:16.934559 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:16.950327 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:54466.service: Deactivated successfully. Feb 13 15:43:16.952350 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:43:16.954214 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:43:16.963021 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:54476.service - OpenSSH per-connection server daemon (10.0.0.1:54476). Feb 13 15:43:16.964102 systemd-logind[1497]: Removed session 17. Feb 13 15:43:16.995157 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 54476 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:16.996537 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:17.000869 systemd-logind[1497]: New session 18 of user core. Feb 13 15:43:17.014839 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:43:17.346087 sshd[4276]: Connection closed by 10.0.0.1 port 54476 Feb 13 15:43:17.346402 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:17.362824 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:54476.service: Deactivated successfully. Feb 13 15:43:17.364781 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:43:17.366206 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:43:17.371960 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:54478.service - OpenSSH per-connection server daemon (10.0.0.1:54478). Feb 13 15:43:17.373177 systemd-logind[1497]: Removed session 18. Feb 13 15:43:17.405971 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 54478 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:17.407333 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:17.411329 systemd-logind[1497]: New session 19 of user core. Feb 13 15:43:17.421850 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:43:19.112388 kernel: hrtimer: interrupt took 6445279 ns Feb 13 15:43:19.426069 sshd[4289]: Connection closed by 10.0.0.1 port 54478 Feb 13 15:43:19.430664 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:19.457160 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:39286.service - OpenSSH per-connection server daemon (10.0.0.1:39286). Feb 13 15:43:19.457753 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:54478.service: Deactivated successfully. Feb 13 15:43:19.460981 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:43:19.463522 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:43:19.465516 systemd-logind[1497]: Removed session 19. 
Feb 13 15:43:19.519811 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 39286 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:19.520724 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:19.542995 systemd-logind[1497]: New session 20 of user core. Feb 13 15:43:19.557565 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:43:20.074907 sshd[4309]: Connection closed by 10.0.0.1 port 39286 Feb 13 15:43:20.076814 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:20.109002 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:39286.service: Deactivated successfully. Feb 13 15:43:20.116088 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:43:20.123851 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:43:20.146385 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:39298.service - OpenSSH per-connection server daemon (10.0.0.1:39298). Feb 13 15:43:20.148425 systemd-logind[1497]: Removed session 20. Feb 13 15:43:20.204624 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 39298 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:20.206493 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:20.221854 systemd-logind[1497]: New session 21 of user core. Feb 13 15:43:20.235630 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:43:20.466374 sshd[4323]: Connection closed by 10.0.0.1 port 39298 Feb 13 15:43:20.465445 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:20.479434 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:39298.service: Deactivated successfully. Feb 13 15:43:20.488964 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:43:20.491238 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:43:20.496176 systemd-logind[1497]: Removed session 21. Feb 13 15:43:25.478769 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:39302.service - OpenSSH per-connection server daemon (10.0.0.1:39302). Feb 13 15:43:25.516165 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 39302 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:25.517854 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:25.522231 systemd-logind[1497]: New session 22 of user core. Feb 13 15:43:25.538854 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:43:25.645081 sshd[4341]: Connection closed by 10.0.0.1 port 39302 Feb 13 15:43:25.645470 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:25.649744 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:39302.service: Deactivated successfully. Feb 13 15:43:25.651813 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:43:25.652594 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:43:25.653610 systemd-logind[1497]: Removed session 22. 
Feb 13 15:43:28.552320 kubelet[2711]: E0213 15:43:28.552279 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:28.552881 kubelet[2711]: E0213 15:43:28.552761 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:30.657676 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:38002.service - OpenSSH per-connection server daemon (10.0.0.1:38002). Feb 13 15:43:30.694126 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 38002 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:30.695489 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:30.699487 systemd-logind[1497]: New session 23 of user core. Feb 13 15:43:30.709833 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:43:30.830962 sshd[4359]: Connection closed by 10.0.0.1 port 38002 Feb 13 15:43:30.831320 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:30.834903 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:38002.service: Deactivated successfully. Feb 13 15:43:30.836750 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:43:30.837405 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:43:30.838279 systemd-logind[1497]: Removed session 23. Feb 13 15:43:35.552810 kubelet[2711]: E0213 15:43:35.552778 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:35.844626 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:38018.service - OpenSSH per-connection server daemon (10.0.0.1:38018). Feb 13 15:43:35.881141 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 38018 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:35.882571 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:35.887128 systemd-logind[1497]: New session 24 of user core. Feb 13 15:43:35.897819 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:43:36.010563 sshd[4374]: Connection closed by 10.0.0.1 port 38018 Feb 13 15:43:36.011056 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:36.015253 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:38018.service: Deactivated successfully. Feb 13 15:43:36.017189 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:43:36.017891 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:43:36.018836 systemd-logind[1497]: Removed session 24. Feb 13 15:43:36.552130 kubelet[2711]: E0213 15:43:36.552082 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:41.026781 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:42262.service - OpenSSH per-connection server daemon (10.0.0.1:42262). 
Feb 13 15:43:41.063053 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 42262 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:41.064420 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:41.068520 systemd-logind[1497]: New session 25 of user core. Feb 13 15:43:41.077840 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:43:41.240321 sshd[4389]: Connection closed by 10.0.0.1 port 42262 Feb 13 15:43:41.240806 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:41.253835 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:42262.service: Deactivated successfully. Feb 13 15:43:41.255867 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:43:41.257458 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:43:41.263158 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:42266.service - OpenSSH per-connection server daemon (10.0.0.1:42266). Feb 13 15:43:41.264080 systemd-logind[1497]: Removed session 25. Feb 13 15:43:41.295320 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 42266 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:41.296906 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:41.301041 systemd-logind[1497]: New session 26 of user core. Feb 13 15:43:41.312836 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:43:42.680961 containerd[1517]: time="2025-02-13T15:43:42.680904925Z" level=info msg="StopContainer for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" with timeout 30 (s)" Feb 13 15:43:42.711207 containerd[1517]: time="2025-02-13T15:43:42.711110932Z" level=info msg="Stop container \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" with signal terminated" Feb 13 15:43:42.714571 containerd[1517]: time="2025-02-13T15:43:42.714406540Z" level=info msg="StopContainer for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" with timeout 2 (s)" Feb 13 15:43:42.714799 containerd[1517]: time="2025-02-13T15:43:42.714758879Z" level=info msg="Stop container \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" with signal terminated" Feb 13 15:43:42.715119 containerd[1517]: time="2025-02-13T15:43:42.715047067Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:43:42.723599 systemd[1]: cri-containerd-ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892.scope: Deactivated successfully. Feb 13 15:43:42.725452 systemd-networkd[1448]: lxc_health: Link DOWN Feb 13 15:43:42.725460 systemd-networkd[1448]: lxc_health: Lost carrier Feb 13 15:43:42.750194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892-rootfs.mount: Deactivated successfully. Feb 13 15:43:42.751640 systemd[1]: cri-containerd-3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c.scope: Deactivated successfully. Feb 13 15:43:42.752154 systemd[1]: cri-containerd-3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c.scope: Consumed 7.144s CPU time, 126.5M memory peak, 192K read from disk, 13.3M written to disk. 
Feb 13 15:43:42.767132 containerd[1517]: time="2025-02-13T15:43:42.766984666Z" level=info msg="shim disconnected" id=ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892 namespace=k8s.io Feb 13 15:43:42.767132 containerd[1517]: time="2025-02-13T15:43:42.767116396Z" level=warning msg="cleaning up after shim disconnected" id=ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892 namespace=k8s.io Feb 13 15:43:42.767132 containerd[1517]: time="2025-02-13T15:43:42.767126215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:42.774344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c-rootfs.mount: Deactivated successfully. Feb 13 15:43:42.779329 containerd[1517]: time="2025-02-13T15:43:42.779276311Z" level=info msg="shim disconnected" id=3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c namespace=k8s.io Feb 13 15:43:42.779329 containerd[1517]: time="2025-02-13T15:43:42.779327007Z" level=warning msg="cleaning up after shim disconnected" id=3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c namespace=k8s.io Feb 13 15:43:42.779458 containerd[1517]: time="2025-02-13T15:43:42.779335242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:42.786097 containerd[1517]: time="2025-02-13T15:43:42.786055726Z" level=info msg="StopContainer for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" returns successfully" Feb 13 15:43:42.790420 containerd[1517]: time="2025-02-13T15:43:42.790374849Z" level=info msg="StopPodSandbox for \"a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a\"" Feb 13 15:43:42.799441 containerd[1517]: time="2025-02-13T15:43:42.799402253Z" level=info msg="StopContainer for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" returns successfully" Feb 13 15:43:42.799894 containerd[1517]: time="2025-02-13T15:43:42.799871174Z" level=info msg="StopPodSandbox for \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\"" Feb 13 15:43:42.804158 containerd[1517]: time="2025-02-13T15:43:42.790421016Z" level=info msg="Container to stop \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:42.806889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a-shm.mount: Deactivated successfully. Feb 13 15:43:42.812955 systemd[1]: cri-containerd-a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a.scope: Deactivated successfully. 
Feb 13 15:43:42.815185 containerd[1517]: time="2025-02-13T15:43:42.799978678Z" level=info msg="Container to stop \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:42.815259 containerd[1517]: time="2025-02-13T15:43:42.815180900Z" level=info msg="Container to stop \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:42.815259 containerd[1517]: time="2025-02-13T15:43:42.815237818Z" level=info msg="Container to stop \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:42.815323 containerd[1517]: time="2025-02-13T15:43:42.815251594Z" level=info msg="Container to stop \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:42.815323 containerd[1517]: time="2025-02-13T15:43:42.815271071Z" level=info msg="Container to stop \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:43:42.818293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba-shm.mount: Deactivated successfully. Feb 13 15:43:42.831521 systemd[1]: cri-containerd-0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba.scope: Deactivated successfully. Feb 13 15:43:42.858315 containerd[1517]: time="2025-02-13T15:43:42.857300323Z" level=info msg="shim disconnected" id=a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a namespace=k8s.io Feb 13 15:43:42.858315 containerd[1517]: time="2025-02-13T15:43:42.857358654Z" level=warning msg="cleaning up after shim disconnected" id=a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a namespace=k8s.io Feb 13 15:43:42.858315 containerd[1517]: time="2025-02-13T15:43:42.857369213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:42.858315 containerd[1517]: time="2025-02-13T15:43:42.857550518Z" level=info msg="shim disconnected" id=0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba namespace=k8s.io Feb 13 15:43:42.858315 containerd[1517]: time="2025-02-13T15:43:42.857582057Z" level=warning msg="cleaning up after shim disconnected" id=0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba namespace=k8s.io Feb 13 15:43:42.858315 containerd[1517]: time="2025-02-13T15:43:42.857592177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:42.873982 containerd[1517]: time="2025-02-13T15:43:42.873780632Z" level=info msg="TearDown network for sandbox \"a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a\" successfully" Feb 13 15:43:42.873982 containerd[1517]: time="2025-02-13T15:43:42.873846127Z" level=info msg="StopPodSandbox for \"a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a\" returns successfully" Feb 13 15:43:42.874865 containerd[1517]: time="2025-02-13T15:43:42.874842960Z" level=info msg="TearDown network for sandbox \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" successfully" Feb 13 15:43:42.874989 containerd[1517]: time="2025-02-13T15:43:42.874863478Z" level=info msg="StopPodSandbox for \"0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba\" returns successfully" Feb 13 
15:43:43.045819 kubelet[2711]: I0213 15:43:43.045659 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-config-path\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.045819 kubelet[2711]: I0213 15:43:43.045730 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76a9dd05-a84f-4d16-bf36-d4759d088fad-cilium-config-path\") pod \"76a9dd05-a84f-4d16-bf36-d4759d088fad\" (UID: \"76a9dd05-a84f-4d16-bf36-d4759d088fad\") " Feb 13 15:43:43.045819 kubelet[2711]: I0213 15:43:43.045748 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-hostproc\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.045819 kubelet[2711]: I0213 15:43:43.045766 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-hubble-tls\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.045819 kubelet[2711]: I0213 15:43:43.045778 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-bpf-maps\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.045819 kubelet[2711]: I0213 15:43:43.045795 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-cgroup\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.046493 kubelet[2711]: I0213 15:43:43.045809 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5xpt\" (UniqueName: \"kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-kube-api-access-h5xpt\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.046493 kubelet[2711]: I0213 15:43:43.045825 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d59c881-9643-44f2-8e14-f893a95c8137-clustermesh-secrets\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.046493 kubelet[2711]: I0213 15:43:43.045838 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-lib-modules\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.046493 kubelet[2711]: I0213 15:43:43.045850 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-net\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.046493 kubelet[2711]: I0213 
15:43:43.045862 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-etc-cni-netd\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.046493 kubelet[2711]: I0213 15:43:43.045873 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-xtables-lock\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.047677 kubelet[2711]: I0213 15:43:43.045886 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-kernel\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.047677 kubelet[2711]: I0213 15:43:43.045898 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cni-path\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.047677 kubelet[2711]: I0213 15:43:43.045910 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-run\") pod \"0d59c881-9643-44f2-8e14-f893a95c8137\" (UID: \"0d59c881-9643-44f2-8e14-f893a95c8137\") " Feb 13 15:43:43.047677 kubelet[2711]: I0213 15:43:43.045957 2711 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqgjf\" (UniqueName: \"kubernetes.io/projected/76a9dd05-a84f-4d16-bf36-d4759d088fad-kube-api-access-qqgjf\") pod \"76a9dd05-a84f-4d16-bf36-d4759d088fad\" (UID: \"76a9dd05-a84f-4d16-bf36-d4759d088fad\") " Feb 13 15:43:43.047677 kubelet[2711]: I0213 15:43:43.046503 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.047677 kubelet[2711]: I0213 15:43:43.046581 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.047887 kubelet[2711]: I0213 15:43:43.047581 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.047887 kubelet[2711]: I0213 15:43:43.047614 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.047887 kubelet[2711]: I0213 15:43:43.047628 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.047887 kubelet[2711]: I0213 15:43:43.047642 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.047887 kubelet[2711]: I0213 15:43:43.047657 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.048071 kubelet[2711]: I0213 15:43:43.047669 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.050269 kubelet[2711]: I0213 15:43:43.049197 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.050269 kubelet[2711]: I0213 15:43:43.049225 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:43:43.050782 kubelet[2711]: I0213 15:43:43.050755 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:43:43.051516 kubelet[2711]: I0213 15:43:43.051465 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a9dd05-a84f-4d16-bf36-d4759d088fad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76a9dd05-a84f-4d16-bf36-d4759d088fad" (UID: "76a9dd05-a84f-4d16-bf36-d4759d088fad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:43:43.052317 kubelet[2711]: I0213 15:43:43.052296 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-kube-api-access-h5xpt" (OuterVolumeSpecName: "kube-api-access-h5xpt") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "kube-api-access-h5xpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:43:43.052899 kubelet[2711]: I0213 15:43:43.052868 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:43:43.052899 kubelet[2711]: I0213 15:43:43.052873 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a9dd05-a84f-4d16-bf36-d4759d088fad-kube-api-access-qqgjf" (OuterVolumeSpecName: "kube-api-access-qqgjf") pod "76a9dd05-a84f-4d16-bf36-d4759d088fad" (UID: "76a9dd05-a84f-4d16-bf36-d4759d088fad"). InnerVolumeSpecName "kube-api-access-qqgjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:43:43.053350 kubelet[2711]: I0213 15:43:43.053328 2711 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d59c881-9643-44f2-8e14-f893a95c8137-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d59c881-9643-44f2-8e14-f893a95c8137" (UID: "0d59c881-9643-44f2-8e14-f893a95c8137"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:43:43.058599 kubelet[2711]: I0213 15:43:43.058577 2711 scope.go:117] "RemoveContainer" containerID="ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892" Feb 13 15:43:43.063637 systemd[1]: Removed slice kubepods-besteffort-pod76a9dd05_a84f_4d16_bf36_d4759d088fad.slice - libcontainer container kubepods-besteffort-pod76a9dd05_a84f_4d16_bf36_d4759d088fad.slice. Feb 13 15:43:43.066033 containerd[1517]: time="2025-02-13T15:43:43.065992835Z" level=info msg="RemoveContainer for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\"" Feb 13 15:43:43.069131 systemd[1]: Removed slice kubepods-burstable-pod0d59c881_9643_44f2_8e14_f893a95c8137.slice - libcontainer container kubepods-burstable-pod0d59c881_9643_44f2_8e14_f893a95c8137.slice. Feb 13 15:43:43.069227 systemd[1]: kubepods-burstable-pod0d59c881_9643_44f2_8e14_f893a95c8137.slice: Consumed 7.262s CPU time, 126.8M memory peak, 220K read from disk, 16.6M written to disk. 
Feb 13 15:43:43.070097 containerd[1517]: time="2025-02-13T15:43:43.070066590Z" level=info msg="RemoveContainer for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" returns successfully" Feb 13 15:43:43.070313 kubelet[2711]: I0213 15:43:43.070289 2711 scope.go:117] "RemoveContainer" containerID="ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892" Feb 13 15:43:43.070610 containerd[1517]: time="2025-02-13T15:43:43.070575506Z" level=error msg="ContainerStatus for \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\": not found" Feb 13 15:43:43.070907 kubelet[2711]: E0213 15:43:43.070717 2711 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\": not found" containerID="ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892" Feb 13 15:43:43.070907 kubelet[2711]: I0213 15:43:43.070763 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892"} err="failed to get container status \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce093d0723416ab8f621a0622de034185268d7013b03c0c1b0d6ad9700d54892\": not found" Feb 13 15:43:43.070907 kubelet[2711]: I0213 15:43:43.070836 2711 scope.go:117] "RemoveContainer" containerID="3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c" Feb 13 15:43:43.072168 containerd[1517]: time="2025-02-13T15:43:43.072142842Z" level=info msg="RemoveContainer for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\"" Feb 13 15:43:43.076090 containerd[1517]: time="2025-02-13T15:43:43.076056693Z" level=info msg="RemoveContainer for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" returns successfully" Feb 13 15:43:43.076270 kubelet[2711]: I0213 15:43:43.076237 2711 scope.go:117] "RemoveContainer" containerID="5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9" Feb 13 15:43:43.077510 containerd[1517]: time="2025-02-13T15:43:43.077154357Z" level=info msg="RemoveContainer for \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\"" Feb 13 15:43:43.080572 containerd[1517]: time="2025-02-13T15:43:43.080548251Z" level=info msg="RemoveContainer for \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\" returns successfully" Feb 13 15:43:43.080726 kubelet[2711]: I0213 15:43:43.080678 2711 scope.go:117] "RemoveContainer" containerID="5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e" Feb 13 15:43:43.081527 containerd[1517]: time="2025-02-13T15:43:43.081501692Z" level=info msg="RemoveContainer for \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\"" Feb 13 15:43:43.085002 containerd[1517]: time="2025-02-13T15:43:43.084955960Z" level=info msg="RemoveContainer for \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\" returns successfully" Feb 13 15:43:43.085222 kubelet[2711]: I0213 15:43:43.085183 2711 scope.go:117] "RemoveContainer" containerID="52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853" Feb 13 15:43:43.086028 containerd[1517]: 
time="2025-02-13T15:43:43.086006395Z" level=info msg="RemoveContainer for \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\"" Feb 13 15:43:43.099932 containerd[1517]: time="2025-02-13T15:43:43.099901627Z" level=info msg="RemoveContainer for \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\" returns successfully" Feb 13 15:43:43.100081 kubelet[2711]: I0213 15:43:43.100062 2711 scope.go:117] "RemoveContainer" containerID="cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2" Feb 13 15:43:43.100836 containerd[1517]: time="2025-02-13T15:43:43.100814430Z" level=info msg="RemoveContainer for \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\"" Feb 13 15:43:43.104023 containerd[1517]: time="2025-02-13T15:43:43.104000829Z" level=info msg="RemoveContainer for \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\" returns successfully" Feb 13 15:43:43.104135 kubelet[2711]: I0213 15:43:43.104116 2711 scope.go:117] "RemoveContainer" containerID="3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c" Feb 13 15:43:43.104295 containerd[1517]: time="2025-02-13T15:43:43.104265352Z" level=error msg="ContainerStatus for \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\": not found" Feb 13 15:43:43.104398 kubelet[2711]: E0213 15:43:43.104380 2711 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\": not found" containerID="3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c" Feb 13 15:43:43.104456 kubelet[2711]: I0213 15:43:43.104401 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c"} err="failed to get container status \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3257fbbde25f99588bfbe77ce865c820de3b217c5bd1022ee6dc6366c93cf50c\": not found" Feb 13 15:43:43.104456 kubelet[2711]: I0213 15:43:43.104419 2711 scope.go:117] "RemoveContainer" containerID="5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9" Feb 13 15:43:43.104556 containerd[1517]: time="2025-02-13T15:43:43.104522761Z" level=error msg="ContainerStatus for \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\": not found" Feb 13 15:43:43.104665 kubelet[2711]: E0213 15:43:43.104640 2711 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\": not found" containerID="5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9" Feb 13 15:43:43.104665 kubelet[2711]: I0213 15:43:43.104659 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9"} err="failed to get container status 
\"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a36343e0e62cef4e95ae36016f3527c1a81f39c57efb1eb425921e73beb4bd9\": not found" Feb 13 15:43:43.104760 kubelet[2711]: I0213 15:43:43.104671 2711 scope.go:117] "RemoveContainer" containerID="5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e" Feb 13 15:43:43.104849 containerd[1517]: time="2025-02-13T15:43:43.104819905Z" level=error msg="ContainerStatus for \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\": not found" Feb 13 15:43:43.104945 kubelet[2711]: E0213 15:43:43.104918 2711 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\": not found" containerID="5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e" Feb 13 15:43:43.105000 kubelet[2711]: I0213 15:43:43.104945 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e"} err="failed to get container status \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d086587ffd86e3465fa38bbc37b59cf0af642710fae62ca6c27e06beaeaeb2e\": not found" Feb 13 15:43:43.105000 kubelet[2711]: I0213 15:43:43.104958 2711 scope.go:117] "RemoveContainer" containerID="52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853" Feb 13 15:43:43.105133 containerd[1517]: time="2025-02-13T15:43:43.105106009Z" level=error msg="ContainerStatus for \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\": not found" Feb 13 15:43:43.105219 kubelet[2711]: E0213 15:43:43.105198 2711 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\": not found" containerID="52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853" Feb 13 15:43:43.105257 kubelet[2711]: I0213 15:43:43.105216 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853"} err="failed to get container status \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\": rpc error: code = NotFound desc = an error occurred when try to find container \"52a2e8c8e3ea3f793a74c7bac8cf0602f0dfd0f338c85f46454b12795bccc853\": not found" Feb 13 15:43:43.105257 kubelet[2711]: I0213 15:43:43.105228 2711 scope.go:117] "RemoveContainer" containerID="cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2" Feb 13 15:43:43.105393 containerd[1517]: time="2025-02-13T15:43:43.105355441Z" level=error msg="ContainerStatus for \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\": not found" Feb 13 15:43:43.105479 kubelet[2711]: E0213 15:43:43.105464 2711 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\": not found" containerID="cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2" Feb 13 15:43:43.105542 kubelet[2711]: I0213 15:43:43.105483 2711 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2"} err="failed to get container status \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc839993f96e7c0d60cf8931ff366cc0ffcb13d448b6d47723a4d21d373c1db2\": not found" Feb 13 15:43:43.146681 kubelet[2711]: I0213 15:43:43.146658 2711 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d59c881-9643-44f2-8e14-f893a95c8137-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146681 kubelet[2711]: I0213 15:43:43.146676 2711 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146681 kubelet[2711]: I0213 15:43:43.146685 2711 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146695 2711 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146720 2711 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146728 2711 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146737 2711 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146745 2711 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146755 2711 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qqgjf\" (UniqueName: \"kubernetes.io/projected/76a9dd05-a84f-4d16-bf36-d4759d088fad-kube-api-access-qqgjf\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146768 2711 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.146798 kubelet[2711]: I0213 15:43:43.146779 2711 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76a9dd05-a84f-4d16-bf36-d4759d088fad-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.147023 kubelet[2711]: I0213 15:43:43.146788 2711 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.147023 kubelet[2711]: I0213 15:43:43.146798 2711 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.147023 kubelet[2711]: I0213 15:43:43.146808 2711 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.147023 kubelet[2711]: I0213 15:43:43.146816 2711 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d59c881-9643-44f2-8e14-f893a95c8137-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.147023 kubelet[2711]: I0213 15:43:43.146825 2711 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h5xpt\" (UniqueName: \"kubernetes.io/projected/0d59c881-9643-44f2-8e14-f893a95c8137-kube-api-access-h5xpt\") on node \"localhost\" DevicePath \"\"" Feb 13 15:43:43.692864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a178088e4f43240a5ac2588423d79471ea64a3509bcac8b1e0ff12421c7d7e4a-rootfs.mount: Deactivated successfully. Feb 13 15:43:43.692990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b3223fc31a0ec76720cf22ec1b3d1ef557151f4d31251f06c8da3d240c21cba-rootfs.mount: Deactivated successfully. Feb 13 15:43:43.693072 systemd[1]: var-lib-kubelet-pods-76a9dd05\x2da84f\x2d4d16\x2dbf36\x2dd4759d088fad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqqgjf.mount: Deactivated successfully. Feb 13 15:43:43.693167 systemd[1]: var-lib-kubelet-pods-0d59c881\x2d9643\x2d44f2\x2d8e14\x2df893a95c8137-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5xpt.mount: Deactivated successfully. Feb 13 15:43:43.693243 systemd[1]: var-lib-kubelet-pods-0d59c881\x2d9643\x2d44f2\x2d8e14\x2df893a95c8137-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:43:43.693327 systemd[1]: var-lib-kubelet-pods-0d59c881\x2d9643\x2d44f2\x2d8e14\x2df893a95c8137-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 15:43:44.554254 kubelet[2711]: I0213 15:43:44.554210 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" path="/var/lib/kubelet/pods/0d59c881-9643-44f2-8e14-f893a95c8137/volumes" Feb 13 15:43:44.555075 kubelet[2711]: I0213 15:43:44.555047 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a9dd05-a84f-4d16-bf36-d4759d088fad" path="/var/lib/kubelet/pods/76a9dd05-a84f-4d16-bf36-d4759d088fad/volumes" Feb 13 15:43:44.654420 sshd[4404]: Connection closed by 10.0.0.1 port 42266 Feb 13 15:43:44.654869 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:44.672997 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:42266.service: Deactivated successfully. Feb 13 15:43:44.675903 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:43:44.678111 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:43:44.689036 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268). Feb 13 15:43:44.690170 systemd-logind[1497]: Removed session 26. Feb 13 15:43:44.724646 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:44.726273 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:44.731195 systemd-logind[1497]: New session 27 of user core. Feb 13 15:43:44.739888 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:43:45.584456 sshd[4569]: Connection closed by 10.0.0.1 port 42268 Feb 13 15:43:45.586031 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:45.598030 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:42268.service: Deactivated successfully. 
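The \x2d and \x7e sequences in the mount-unit names a few entries back are systemd's path escaping: '/' becomes '-', and bytes outside [A-Za-z0-9:_.] become \xXX (so '-' is \x2d and '~' is \x7e). A minimal Go sketch of that rule, simplified (it ignores systemd's extra handling of a leading dot); running it reproduces one of the unit names from the log:

package main

import (
	"fmt"
	"strings"
)

// escapeByte applies systemd's per-byte rule: keep [A-Za-z0-9:_.], hex-escape the rest.
func escapeByte(b byte) string {
	if b >= 'a' && b <= 'z' || b >= 'A' && b <= 'Z' ||
		b >= '0' && b <= '9' || b == ':' || b == '_' || b == '.' {
		return string(b)
	}
	return fmt.Sprintf(`\x%02x`, b)
}

// escapePath mimics `systemd-escape --path`: trim slashes, join components with '-'.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return "-"
	}
	var sb strings.Builder
	for _, part := range strings.Split(p, "/") {
		if sb.Len() > 0 {
			sb.WriteByte('-')
		}
		for i := 0; i < len(part); i++ {
			sb.WriteString(escapeByte(part[i]))
		}
	}
	return sb.String()
}

func main() {
	path := "/var/lib/kubelet/pods/76a9dd05-a84f-4d16-bf36-d4759d088fad/volumes/kubernetes.io~projected/kube-api-access-qqgjf"
	fmt.Println(escapePath(path) + ".mount")
	// var-lib-kubelet-pods-76a9dd05\x2da84f\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqqgjf.mount
}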
Feb 13 15:43:45.605764 kubelet[2711]: I0213 15:43:45.601816 2711 topology_manager.go:215] "Topology Admit Handler" podUID="b2445c47-87f3-43b0-9d3d-15d34a344c46" podNamespace="kube-system" podName="cilium-8ql7c" Feb 13 15:43:45.605764 kubelet[2711]: E0213 15:43:45.601884 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" containerName="mount-cgroup" Feb 13 15:43:45.605764 kubelet[2711]: E0213 15:43:45.601896 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" containerName="clean-cilium-state" Feb 13 15:43:45.605764 kubelet[2711]: E0213 15:43:45.601904 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" containerName="cilium-agent" Feb 13 15:43:45.605764 kubelet[2711]: E0213 15:43:45.601913 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76a9dd05-a84f-4d16-bf36-d4759d088fad" containerName="cilium-operator" Feb 13 15:43:45.605764 kubelet[2711]: E0213 15:43:45.601921 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" containerName="apply-sysctl-overwrites" Feb 13 15:43:45.605764 kubelet[2711]: E0213 15:43:45.601929 2711 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" containerName="mount-bpf-fs" Feb 13 15:43:45.605764 kubelet[2711]: I0213 15:43:45.601957 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d59c881-9643-44f2-8e14-f893a95c8137" containerName="cilium-agent" Feb 13 15:43:45.605764 kubelet[2711]: I0213 15:43:45.601965 2711 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a9dd05-a84f-4d16-bf36-d4759d088fad" containerName="cilium-operator" Feb 13 15:43:45.602756 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:43:45.608817 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:43:45.618539 systemd[1]: Started sshd@27-10.0.0.38:22-10.0.0.1:42280.service - OpenSSH per-connection server daemon (10.0.0.1:42280). Feb 13 15:43:45.624816 systemd-logind[1497]: Removed session 27. Feb 13 15:43:45.631989 systemd[1]: Created slice kubepods-burstable-podb2445c47_87f3_43b0_9d3d_15d34a344c46.slice - libcontainer container kubepods-burstable-podb2445c47_87f3_43b0_9d3d_15d34a344c46.slice. 
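The cpu_manager/memory_manager "RemoveStaleState" lines above fire when a new pod is admitted: resource assignments recorded for pods that no longer exist are dropped first. A hedged sketch of that idea, with illustrative types that are not kubelet's actual ones:

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes assignments whose pod is not in the active set,
// mirroring the admission-time cleanup logged above.
func removeStaleState(assignments map[key]int, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]int{
		{"0d59c881-9643-44f2-8e14-f893a95c8137", "cilium-agent"}: 2, // deleted pod
		{"b2445c47-87f3-43b0-9d3d-15d34a344c46", "cilium-agent"}: 1, // incoming pod
	}
	active := map[string]bool{"b2445c47-87f3-43b0-9d3d-15d34a344c46": true}
	removeStaleState(assignments, active)
	fmt.Println(len(assignments)) // 1
}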
Feb 13 15:43:45.658905 kubelet[2711]: I0213 15:43:45.658850 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-xtables-lock\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659351 kubelet[2711]: I0213 15:43:45.659209 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-lib-modules\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659351 kubelet[2711]: I0213 15:43:45.659234 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-host-proc-sys-kernel\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659351 kubelet[2711]: I0213 15:43:45.659281 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2445c47-87f3-43b0-9d3d-15d34a344c46-cilium-config-path\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659351 kubelet[2711]: I0213 15:43:45.659297 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-bpf-maps\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659351 kubelet[2711]: I0213 15:43:45.659312 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-cilium-run\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659610 kubelet[2711]: I0213 15:43:45.659442 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-cni-path\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659610 kubelet[2711]: I0213 15:43:45.659458 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2445c47-87f3-43b0-9d3d-15d34a344c46-clustermesh-secrets\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659610 kubelet[2711]: I0213 15:43:45.659471 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2445c47-87f3-43b0-9d3d-15d34a344c46-hubble-tls\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659930 kubelet[2711]: I0213 15:43:45.659744 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z888q\" (UniqueName: 
\"kubernetes.io/projected/b2445c47-87f3-43b0-9d3d-15d34a344c46-kube-api-access-z888q\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659930 kubelet[2711]: I0213 15:43:45.659766 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-etc-cni-netd\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659930 kubelet[2711]: I0213 15:43:45.659811 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-host-proc-sys-net\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659930 kubelet[2711]: I0213 15:43:45.659828 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-cilium-cgroup\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659930 kubelet[2711]: I0213 15:43:45.659840 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b2445c47-87f3-43b0-9d3d-15d34a344c46-cilium-ipsec-secrets\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.659930 kubelet[2711]: I0213 15:43:45.659902 2711 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2445c47-87f3-43b0-9d3d-15d34a344c46-hostproc\") pod \"cilium-8ql7c\" (UID: \"b2445c47-87f3-43b0-9d3d-15d34a344c46\") " pod="kube-system/cilium-8ql7c" Feb 13 15:43:45.675157 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 42280 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:45.676789 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:45.681546 systemd-logind[1497]: New session 28 of user core. Feb 13 15:43:45.691813 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:43:45.742531 sshd[4584]: Connection closed by 10.0.0.1 port 42280 Feb 13 15:43:45.743000 sshd-session[4580]: pam_unix(sshd:session): session closed for user core Feb 13 15:43:45.758685 systemd[1]: sshd@27-10.0.0.38:22-10.0.0.1:42280.service: Deactivated successfully. Feb 13 15:43:45.761554 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:43:45.782615 systemd-logind[1497]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:43:45.786113 systemd[1]: Started sshd@28-10.0.0.38:22-10.0.0.1:42288.service - OpenSSH per-connection server daemon (10.0.0.1:42288). Feb 13 15:43:45.787799 systemd-logind[1497]: Removed session 28. Feb 13 15:43:45.821986 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 42288 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:43:45.823567 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:43:45.829354 systemd-logind[1497]: New session 29 of user core. 
Feb 13 15:43:45.836861 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 15:43:45.937374 kubelet[2711]: E0213 15:43:45.936789 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:45.940792 containerd[1517]: time="2025-02-13T15:43:45.939916736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8ql7c,Uid:b2445c47-87f3-43b0-9d3d-15d34a344c46,Namespace:kube-system,Attempt:0,}" Feb 13 15:43:45.967984 containerd[1517]: time="2025-02-13T15:43:45.967826258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:43:45.968126 containerd[1517]: time="2025-02-13T15:43:45.967888066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:43:45.968580 containerd[1517]: time="2025-02-13T15:43:45.968508092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:45.968652 containerd[1517]: time="2025-02-13T15:43:45.968616808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:43:45.990097 systemd[1]: Started cri-containerd-f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8.scope - libcontainer container f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8. Feb 13 15:43:46.017318 containerd[1517]: time="2025-02-13T15:43:46.017248887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8ql7c,Uid:b2445c47-87f3-43b0-9d3d-15d34a344c46,Namespace:kube-system,Attempt:0,} returns sandbox id \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\"" Feb 13 15:43:46.018147 kubelet[2711]: E0213 15:43:46.018103 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:46.020484 containerd[1517]: time="2025-02-13T15:43:46.020433028Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:43:46.035446 containerd[1517]: time="2025-02-13T15:43:46.035381348Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de\"" Feb 13 15:43:46.037347 containerd[1517]: time="2025-02-13T15:43:46.036017254Z" level=info msg="StartContainer for \"129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de\"" Feb 13 15:43:46.066918 systemd[1]: Started cri-containerd-129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de.scope - libcontainer container 129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de. Feb 13 15:43:46.099866 containerd[1517]: time="2025-02-13T15:43:46.099623987Z" level=info msg="StartContainer for \"129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de\" returns successfully" Feb 13 15:43:46.110297 systemd[1]: cri-containerd-129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de.scope: Deactivated successfully. 
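The entries above show the CRI call order for bringing the cilium pod up: RunPodSandbox yields a sandbox ID, then each container is created inside that sandbox and started (mount-cgroup first; it exits immediately, hence the scope deactivation). A hedged Go sketch of that sequence; the criRuntime interface and fake below are stand-ins, not the real k8s.io/cri-api client:

package main

import (
	"context"
	"fmt"
)

type criRuntime interface {
	RunPodSandbox(ctx context.Context, name string) (string, error)
	CreateContainer(ctx context.Context, sandboxID, name string) (string, error)
	StartContainer(ctx context.Context, containerID string) error
}

// startContainers mirrors the bring-up in the log: one sandbox, then the
// containers in order.
func startContainers(ctx context.Context, rt criRuntime, pod string, names []string) error {
	sb, err := rt.RunPodSandbox(ctx, pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
	}
	for _, c := range names {
		id, err := rt.CreateContainer(ctx, sb, c)
		if err != nil {
			return fmt.Errorf("CreateContainer %s: %w", c, err)
		}
		if err := rt.StartContainer(ctx, id); err != nil {
			return fmt.Errorf("StartContainer %s: %w", c, err)
		}
	}
	return nil
}

type fake struct{ n int }

func (f *fake) RunPodSandbox(ctx context.Context, name string) (string, error) {
	return "sandbox-" + name, nil
}
func (f *fake) CreateContainer(ctx context.Context, sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("ctr-%d-%s", f.n, name), nil
}
func (f *fake) StartContainer(ctx context.Context, id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	names := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state", "cilium-agent"}
	_ = startContainers(context.Background(), &fake{}, "cilium-8ql7c", names)
}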
Feb 13 15:43:46.151553 containerd[1517]: time="2025-02-13T15:43:46.151451914Z" level=info msg="shim disconnected" id=129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de namespace=k8s.io Feb 13 15:43:46.151553 containerd[1517]: time="2025-02-13T15:43:46.151539490Z" level=warning msg="cleaning up after shim disconnected" id=129b89675726edf34613ea73a8aeaf0c5702404cf9e1998ac30930584c21e0de namespace=k8s.io Feb 13 15:43:46.151553 containerd[1517]: time="2025-02-13T15:43:46.151553527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:46.599764 kubelet[2711]: E0213 15:43:46.598799 2711 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:43:47.074304 kubelet[2711]: E0213 15:43:47.074265 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:47.075882 containerd[1517]: time="2025-02-13T15:43:47.075776874Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:43:47.089508 containerd[1517]: time="2025-02-13T15:43:47.089455679Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a\"" Feb 13 15:43:47.089998 containerd[1517]: time="2025-02-13T15:43:47.089969874Z" level=info msg="StartContainer for \"e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a\"" Feb 13 15:43:47.117828 systemd[1]: Started cri-containerd-e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a.scope - libcontainer container e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a. Feb 13 15:43:47.143751 containerd[1517]: time="2025-02-13T15:43:47.143692791Z" level=info msg="StartContainer for \"e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a\" returns successfully" Feb 13 15:43:47.149787 systemd[1]: cri-containerd-e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a.scope: Deactivated successfully. Feb 13 15:43:47.174665 containerd[1517]: time="2025-02-13T15:43:47.174600614Z" level=info msg="shim disconnected" id=e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a namespace=k8s.io Feb 13 15:43:47.174665 containerd[1517]: time="2025-02-13T15:43:47.174652132Z" level=warning msg="cleaning up after shim disconnected" id=e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a namespace=k8s.io Feb 13 15:43:47.174665 containerd[1517]: time="2025-02-13T15:43:47.174660488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:47.767183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9e4bba339bd4d7f86818d5a3b0fd6418cc5187ec9b19933959f3c6267a1cc2a-rootfs.mount: Deactivated successfully. 
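The recurring dns.go "Nameserver limits exceeded" warning means the host resolv.conf lists more nameservers than kubelet will pass through; judging by the applied line in the log, the cap here is three. A simplified Go sketch of that capping (kubelet's real parsing in dns.go is more involved):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumption taken from the log output above

// cappedNameservers extracts nameserver entries and drops everything past the limit.
func cappedNameservers(resolvConf string) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, keeping: %s\n",
			strings.Join(ns[:maxNameservers], " "))
		ns = ns[:maxNameservers]
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(cappedNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}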
Feb 13 15:43:48.078368 kubelet[2711]: E0213 15:43:48.078251 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:48.080399 containerd[1517]: time="2025-02-13T15:43:48.080358778Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:43:48.113597 containerd[1517]: time="2025-02-13T15:43:48.113541311Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712\"" Feb 13 15:43:48.114114 containerd[1517]: time="2025-02-13T15:43:48.114090842Z" level=info msg="StartContainer for \"3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712\"" Feb 13 15:43:48.145907 systemd[1]: Started cri-containerd-3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712.scope - libcontainer container 3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712. Feb 13 15:43:48.181029 systemd[1]: cri-containerd-3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712.scope: Deactivated successfully. Feb 13 15:43:48.181386 containerd[1517]: time="2025-02-13T15:43:48.180822623Z" level=info msg="StartContainer for \"3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712\" returns successfully" Feb 13 15:43:48.209455 containerd[1517]: time="2025-02-13T15:43:48.209375737Z" level=info msg="shim disconnected" id=3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712 namespace=k8s.io Feb 13 15:43:48.209455 containerd[1517]: time="2025-02-13T15:43:48.209436843Z" level=warning msg="cleaning up after shim disconnected" id=3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712 namespace=k8s.io Feb 13 15:43:48.209455 containerd[1517]: time="2025-02-13T15:43:48.209445499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:48.494431 kubelet[2711]: I0213 15:43:48.494365 2711 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:43:48Z","lastTransitionTime":"2025-02-13T15:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:43:48.767548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3df6d0d203eb531d023f23bf71fa7f22b8f5d374292a72209b7d69371e855712-rootfs.mount: Deactivated successfully. 
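The "Node became not ready" entry above logs the new node condition as inline JSON. Decoding that exact payload with encoding/json makes the fields easier to read; the struct below mirrors only the fields visible in the log, not the full Kubernetes NodeCondition type:

package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim condition payload from the log entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:43:48Z","lastTransitionTime":"2025-02-13T15:43:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
}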
Feb 13 15:43:49.082268 kubelet[2711]: E0213 15:43:49.082139 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:49.084388 containerd[1517]: time="2025-02-13T15:43:49.084348434Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:43:49.155561 containerd[1517]: time="2025-02-13T15:43:49.155507318Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08\"" Feb 13 15:43:49.156002 containerd[1517]: time="2025-02-13T15:43:49.155975766Z" level=info msg="StartContainer for \"6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08\"" Feb 13 15:43:49.185893 systemd[1]: Started cri-containerd-6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08.scope - libcontainer container 6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08. Feb 13 15:43:49.211253 systemd[1]: cri-containerd-6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08.scope: Deactivated successfully. Feb 13 15:43:49.213283 containerd[1517]: time="2025-02-13T15:43:49.213248165Z" level=info msg="StartContainer for \"6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08\" returns successfully" Feb 13 15:43:49.236116 containerd[1517]: time="2025-02-13T15:43:49.236038812Z" level=info msg="shim disconnected" id=6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08 namespace=k8s.io Feb 13 15:43:49.236116 containerd[1517]: time="2025-02-13T15:43:49.236107912Z" level=warning msg="cleaning up after shim disconnected" id=6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08 namespace=k8s.io Feb 13 15:43:49.236116 containerd[1517]: time="2025-02-13T15:43:49.236120697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:43:49.767524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6118f74634d2ec9ddb3bdfd9a7c3018f479da33e50bd9deb73cf2aa9b2f42b08-rootfs.mount: Deactivated successfully. 
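All the kubelet entries in this section carry klog headers of the form "E0213 15:43:49.082139  2711 dns.go:153] msg": severity, MMDD, wall time, PID, file:line. A small Go helper for pulling those fields apart; the regexp is an approximation sufficient for these logs, not klog's canonical grammar:

package main

import (
	"fmt"
	"regexp"
)

// severity, MMDD, time, pid, source file:line, message
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `E0213 15:43:49.082139    2711 dns.go:153] "Nameserver limits exceeded"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		panic("no match")
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
	// severity=E date=0213 time=15:43:49.082139 pid=2711 source=dns.go:153
}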
Feb 13 15:43:50.087270 kubelet[2711]: E0213 15:43:50.087098 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:50.089682 containerd[1517]: time="2025-02-13T15:43:50.089627787Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:43:50.109677 containerd[1517]: time="2025-02-13T15:43:50.109621272Z" level=info msg="CreateContainer within sandbox \"f37cc26fbc1be5a0fff952dbeb6ebe8fdedb91e33856ed81e978b9d8bd108ed8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15adbfa216a6e6758037c425eff973df7d60c1c465091326956717cb03fa7498\"" Feb 13 15:43:50.111577 containerd[1517]: time="2025-02-13T15:43:50.110268960Z" level=info msg="StartContainer for \"15adbfa216a6e6758037c425eff973df7d60c1c465091326956717cb03fa7498\"" Feb 13 15:43:50.141959 systemd[1]: Started cri-containerd-15adbfa216a6e6758037c425eff973df7d60c1c465091326956717cb03fa7498.scope - libcontainer container 15adbfa216a6e6758037c425eff973df7d60c1c465091326956717cb03fa7498. Feb 13 15:43:50.177865 containerd[1517]: time="2025-02-13T15:43:50.177818674Z" level=info msg="StartContainer for \"15adbfa216a6e6758037c425eff973df7d60c1c465091326956717cb03fa7498\" returns successfully" Feb 13 15:43:50.622779 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 15:43:51.092385 kubelet[2711]: E0213 15:43:51.092352 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:51.104748 kubelet[2711]: I0213 15:43:51.104667 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8ql7c" podStartSLOduration=6.104645779 podStartE2EDuration="6.104645779s" podCreationTimestamp="2025-02-13 15:43:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:43:51.104309922 +0000 UTC m=+104.629816318" watchObservedRunningTime="2025-02-13 15:43:51.104645779 +0000 UTC m=+104.630152165" Feb 13 15:43:52.095447 kubelet[2711]: E0213 15:43:52.095407 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:53.096495 kubelet[2711]: E0213 15:43:53.096442 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:53.899871 systemd-networkd[1448]: lxc_health: Link UP Feb 13 15:43:53.902115 systemd-networkd[1448]: lxc_health: Gained carrier Feb 13 15:43:54.098864 kubelet[2711]: E0213 15:43:54.098828 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:55.100913 kubelet[2711]: E0213 15:43:55.100871 2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:43:55.250801 systemd-networkd[1448]: lxc_health: Gained IPv6LL Feb 13 15:43:56.103043 kubelet[2711]: E0213 15:43:56.102995 
2711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:44:01.039341 sshd[4599]: Connection closed by 10.0.0.1 port 42288 Feb 13 15:44:01.039970 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:01.044144 systemd[1]: sshd@28-10.0.0.38:22-10.0.0.1:42288.service: Deactivated successfully. Feb 13 15:44:01.046735 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:44:01.049473 systemd-logind[1497]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:44:01.051033 systemd-logind[1497]: Removed session 29.
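The pod_startup_latency_tracker entry a few lines back is easy to cross-check: podStartSLOduration is observedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time, apparently because the image was already present locally). A short Go verification using the timestamps from the log:

package main

import (
	"fmt"
	"time"
)

// Layout matching the "2025-02-13 15:43:45 +0000 UTC" form used in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2025-02-13 15:43:45 +0000 UTC")
	running, _ := time.Parse(layout, "2025-02-13 15:43:51.104645779 +0000 UTC")
	fmt.Println(running.Sub(created)) // 6.104645779s, matching podStartSLOduration
}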