Feb 13 15:44:16.897330 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025 Feb 13 15:44:16.897351 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:44:16.897362 kernel: BIOS-provided physical RAM map: Feb 13 15:44:16.897369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:44:16.897375 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 15:44:16.897382 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:44:16.897389 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:44:16.897396 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:44:16.897403 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:44:16.897409 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:44:16.897416 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Feb 13 15:44:16.897424 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:44:16.897431 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:44:16.897438 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:44:16.897446 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:44:16.897453 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:44:16.897462 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:44:16.897469 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:44:16.897476 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:44:16.897483 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:44:16.897490 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:44:16.897497 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:44:16.897504 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:44:16.897511 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:44:16.897518 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:44:16.897525 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:44:16.897532 kernel: NX (Execute Disable) protection: active Feb 13 15:44:16.897542 kernel: APIC: Static calls initialized Feb 13 15:44:16.897549 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:44:16.897556 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:44:16.897563 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:44:16.897570 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:44:16.897577 kernel: extended physical RAM map: Feb 13 15:44:16.897584 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:44:16.897591 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Feb 13 15:44:16.897598 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:44:16.897605 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:44:16.897612 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:44:16.897620 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:44:16.897629 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:44:16.897639 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Feb 13 15:44:16.897647 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Feb 13 15:44:16.897654 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Feb 13 15:44:16.897661 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Feb 13 15:44:16.897668 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Feb 13 15:44:16.897678 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:44:16.897685 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:44:16.897693 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:44:16.897700 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:44:16.897707 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:44:16.897715 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:44:16.897722 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:44:16.897729 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:44:16.897737 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:44:16.897746 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:44:16.897753 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:44:16.897761 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:44:16.897768 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:44:16.897775 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:44:16.897782 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:44:16.897790 kernel: efi: EFI v2.7 by EDK II Feb 13 15:44:16.897797 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Feb 13 15:44:16.897805 kernel: random: crng init done Feb 13 15:44:16.897825 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Feb 13 15:44:16.897833 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Feb 13 15:44:16.897840 kernel: secureboot: Secure boot disabled Feb 13 15:44:16.897850 kernel: SMBIOS 2.8 present. 
Feb 13 15:44:16.897857 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Feb 13 15:44:16.897864 kernel: Hypervisor detected: KVM Feb 13 15:44:16.897872 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:44:16.897879 kernel: kvm-clock: using sched offset of 2698188993 cycles Feb 13 15:44:16.897887 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:44:16.897895 kernel: tsc: Detected 2794.750 MHz processor Feb 13 15:44:16.897902 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:44:16.897910 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:44:16.897918 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Feb 13 15:44:16.897928 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:44:16.897935 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:44:16.897943 kernel: Using GB pages for direct mapping Feb 13 15:44:16.897950 kernel: ACPI: Early table checksum verification disabled Feb 13 15:44:16.897958 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 15:44:16.897965 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:44:16.897973 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:44:16.897983 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:44:16.897991 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 15:44:16.898001 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:44:16.898010 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:44:16.898018 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:44:16.898027 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:44:16.898036 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:44:16.898044 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 15:44:16.898058 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 15:44:16.898066 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 15:44:16.898073 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 15:44:16.898084 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 15:44:16.898091 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 15:44:16.898099 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 15:44:16.898107 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 15:44:16.898114 kernel: No NUMA configuration found Feb 13 15:44:16.898122 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Feb 13 15:44:16.899304 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Feb 13 15:44:16.899313 kernel: Zone ranges: Feb 13 15:44:16.899321 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:44:16.899332 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Feb 13 15:44:16.899340 kernel: Normal empty Feb 13 15:44:16.899347 kernel: Movable zone start for each node Feb 13 15:44:16.899355 kernel: Early memory node ranges Feb 13 15:44:16.899362 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Feb 13 15:44:16.899370 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 15:44:16.899377 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 15:44:16.899384 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Feb 13 15:44:16.899392 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Feb 13 15:44:16.899401 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Feb 13 15:44:16.899409 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Feb 13 15:44:16.899416 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Feb 13 15:44:16.899424 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Feb 13 15:44:16.899431 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:44:16.899439 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:44:16.899453 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 15:44:16.899463 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:44:16.899471 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Feb 13 15:44:16.899478 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Feb 13 15:44:16.899486 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:44:16.899494 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Feb 13 15:44:16.899502 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Feb 13 15:44:16.899511 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:44:16.899519 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:44:16.899527 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:44:16.899535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:44:16.899545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:44:16.899553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:44:16.899561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:44:16.899568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:44:16.899576 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:44:16.899584 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:44:16.899592 kernel: TSC deadline timer available Feb 13 15:44:16.899599 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:44:16.899607 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:44:16.899615 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:44:16.899625 kernel: kvm-guest: setup PV sched yield Feb 13 15:44:16.899632 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Feb 13 15:44:16.899640 kernel: Booting paravirtualized kernel on KVM Feb 13 15:44:16.899648 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:44:16.899656 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:44:16.899664 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:44:16.899672 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:44:16.899679 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:44:16.899687 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:44:16.899697 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:44:16.899706 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:44:16.899714 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:44:16.899722 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:44:16.899730 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:44:16.899738 kernel: Fallback order for Node 0: 0 Feb 13 15:44:16.899746 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Feb 13 15:44:16.899754 kernel: Policy zone: DMA32 Feb 13 15:44:16.899764 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:44:16.899772 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved) Feb 13 15:44:16.899780 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:44:16.899787 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 15:44:16.899795 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:44:16.899803 kernel: Dynamic Preempt: voluntary Feb 13 15:44:16.899822 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:44:16.899830 kernel: rcu: RCU event tracing is enabled. Feb 13 15:44:16.899838 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:44:16.899849 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:44:16.899857 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:44:16.899864 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:44:16.899872 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:44:16.899880 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:44:16.899888 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:44:16.899896 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:44:16.899903 kernel: Console: colour dummy device 80x25 Feb 13 15:44:16.899912 kernel: printk: console [ttyS0] enabled Feb 13 15:44:16.899923 kernel: ACPI: Core revision 20230628 Feb 13 15:44:16.899933 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:44:16.899942 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:44:16.899949 kernel: x2apic enabled Feb 13 15:44:16.899957 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:44:16.899965 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:44:16.899973 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:44:16.899981 kernel: kvm-guest: setup PV IPIs Feb 13 15:44:16.899989 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:44:16.899998 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:44:16.900006 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 13 15:44:16.900014 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:44:16.900022 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:44:16.900029 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:44:16.900037 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:44:16.900045 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:44:16.900059 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:44:16.900067 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:44:16.900130 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:44:16.900176 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:44:16.900186 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:44:16.900194 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:44:16.900203 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:44:16.900214 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:44:16.900222 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:44:16.900231 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:44:16.900239 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:44:16.900262 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:44:16.900270 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:44:16.900278 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 15:44:16.900286 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:44:16.900294 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:44:16.900303 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:44:16.900311 kernel: landlock: Up and running. Feb 13 15:44:16.900319 kernel: SELinux: Initializing. Feb 13 15:44:16.900327 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:44:16.900338 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:44:16.900346 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:44:16.900355 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:44:16.900363 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:44:16.900371 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:44:16.900379 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:44:16.900387 kernel: ... version: 0 Feb 13 15:44:16.900395 kernel: ... bit width: 48 Feb 13 15:44:16.900406 kernel: ... generic registers: 6 Feb 13 15:44:16.900414 kernel: ... value mask: 0000ffffffffffff Feb 13 15:44:16.900422 kernel: ... max period: 00007fffffffffff Feb 13 15:44:16.900430 kernel: ... fixed-purpose events: 0 Feb 13 15:44:16.900438 kernel: ... 
event mask: 000000000000003f Feb 13 15:44:16.900446 kernel: signal: max sigframe size: 1776 Feb 13 15:44:16.900454 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:44:16.900463 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:44:16.900471 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:44:16.900479 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:44:16.900490 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 15:44:16.900498 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:44:16.900506 kernel: smpboot: Max logical packages: 1 Feb 13 15:44:16.900514 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 13 15:44:16.900522 kernel: devtmpfs: initialized Feb 13 15:44:16.900530 kernel: x86/mm: Memory block size: 128MB Feb 13 15:44:16.900538 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 15:44:16.900546 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 15:44:16.900554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Feb 13 15:44:16.900565 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 15:44:16.900574 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Feb 13 15:44:16.900582 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 15:44:16.900590 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:44:16.900598 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:44:16.900606 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:44:16.900614 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:44:16.900622 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:44:16.900633 kernel: audit: type=2000 audit(1739461457.148:1): state=initialized audit_enabled=0 res=1 Feb 13 15:44:16.900642 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:44:16.900650 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:44:16.900658 kernel: cpuidle: using governor menu Feb 13 15:44:16.900666 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:44:16.900674 kernel: dca service started, version 1.12.1 Feb 13 15:44:16.900682 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 15:44:16.900690 kernel: PCI: Using configuration type 1 for base access Feb 13 15:44:16.900698 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:44:16.900709 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:44:16.900717 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:44:16.900725 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:44:16.900733 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:44:16.900742 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:44:16.900749 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:44:16.900758 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:44:16.900766 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:44:16.900774 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:44:16.900784 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:44:16.900792 kernel: ACPI: Interpreter enabled Feb 13 15:44:16.900800 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:44:16.900843 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:44:16.900882 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:44:16.900890 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:44:16.900898 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:44:16.900906 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:44:16.901190 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:44:16.901332 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:44:16.901459 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:44:16.901470 kernel: PCI host bridge to bus 0000:00 Feb 13 15:44:16.901599 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:44:16.901722 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:44:16.901861 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:44:16.901995 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Feb 13 15:44:16.902124 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Feb 13 15:44:16.902238 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:44:16.902382 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:44:16.902536 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:44:16.902674 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:44:16.902841 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 15:44:16.902977 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 15:44:16.903112 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 15:44:16.903236 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 15:44:16.903363 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:44:16.903500 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:44:16.903629 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 15:44:16.903754 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 15:44:16.903917 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Feb 13 15:44:16.904066 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:44:16.904196 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 
13 15:44:16.904324 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 15:44:16.904452 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Feb 13 15:44:16.904586 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:44:16.904720 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 15:44:16.904893 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 15:44:16.905021 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Feb 13 15:44:16.905151 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 15:44:16.905281 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:44:16.905403 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:44:16.905531 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:44:16.905660 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 15:44:16.905781 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 15:44:16.905929 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:44:16.906061 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 15:44:16.906073 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:44:16.906081 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:44:16.906090 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:44:16.906098 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:44:16.906110 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:44:16.906118 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:44:16.906126 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:44:16.906134 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:44:16.906142 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:44:16.906150 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:44:16.906158 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:44:16.906167 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:44:16.906178 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:44:16.906186 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:44:16.906194 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:44:16.906202 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 15:44:16.906210 kernel: iommu: Default domain type: Translated Feb 13 15:44:16.906218 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:44:16.906226 kernel: efivars: Registered efivars operations Feb 13 15:44:16.906234 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:44:16.906242 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:44:16.906251 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 15:44:16.906261 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Feb 13 15:44:16.906269 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Feb 13 15:44:16.906277 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Feb 13 15:44:16.906285 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Feb 13 15:44:16.906293 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Feb 13 15:44:16.906302 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Feb 13 15:44:16.906310 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Feb 13 15:44:16.906436 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:44:16.906565 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:44:16.906689 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:44:16.906699 kernel: vgaarb: loaded Feb 13 15:44:16.906708 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:44:16.906716 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:44:16.906724 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:44:16.906733 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:44:16.906741 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:44:16.906749 kernel: pnp: PnP ACPI init Feb 13 15:44:16.906921 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Feb 13 15:44:16.906936 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:44:16.906945 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:44:16.906955 kernel: NET: Registered PF_INET protocol family Feb 13 15:44:16.906983 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:44:16.906994 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:44:16.907002 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:44:16.907011 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:44:16.907021 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:44:16.907030 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:44:16.907039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:44:16.907057 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:44:16.907065 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:44:16.907074 kernel: NET: Registered PF_XDP protocol family Feb 13 15:44:16.907203 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 15:44:16.907356 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 15:44:16.907478 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:44:16.907593 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:44:16.907706 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:44:16.907860 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Feb 13 15:44:16.907978 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Feb 13 15:44:16.908098 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:44:16.908110 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:44:16.908119 kernel: Initialise system trusted keyrings Feb 13 15:44:16.908131 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:44:16.908139 kernel: Key type asymmetric registered Feb 13 15:44:16.908147 kernel: Asymmetric key parser 'x509' registered Feb 13 15:44:16.908156 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:44:16.908164 kernel: io scheduler mq-deadline registered Feb 13 15:44:16.908173 kernel: io scheduler kyber registered Feb 13 15:44:16.908181 kernel: io scheduler bfq registered Feb 13 
15:44:16.908189 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:44:16.908198 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:44:16.908207 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:44:16.908219 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:44:16.908227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:44:16.908236 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:44:16.908247 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:44:16.908255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:44:16.908266 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:44:16.908392 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:44:16.909667 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:44:16.909831 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 15:44:16.909949 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:44:16 UTC (1739461456) Feb 13 15:44:16.910074 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 13 15:44:16.910086 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:44:16.910094 kernel: efifb: probing for efifb Feb 13 15:44:16.910108 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 13 15:44:16.910117 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 13 15:44:16.910125 kernel: efifb: scrolling: redraw Feb 13 15:44:16.910133 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:44:16.910141 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:44:16.910150 kernel: fb0: EFI VGA frame buffer device Feb 13 15:44:16.910158 kernel: pstore: Using crash dump compression: deflate Feb 13 15:44:16.910167 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:44:16.910175 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:44:16.910186 kernel: Segment Routing with IPv6 Feb 13 15:44:16.910194 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:44:16.910202 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:44:16.910211 kernel: Key type dns_resolver registered Feb 13 15:44:16.910219 kernel: IPI shorthand broadcast: enabled Feb 13 15:44:16.910227 kernel: sched_clock: Marking stable (620003031, 154394039)->(792851367, -18454297) Feb 13 15:44:16.910236 kernel: registered taskstats version 1 Feb 13 15:44:16.910244 kernel: Loading compiled-in X.509 certificates Feb 13 15:44:16.910253 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21' Feb 13 15:44:16.910263 kernel: Key type .fscrypt registered Feb 13 15:44:16.910271 kernel: Key type fscrypt-provisioning registered Feb 13 15:44:16.910280 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:44:16.910288 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:44:16.910296 kernel: ima: No architecture policies found Feb 13 15:44:16.910304 kernel: clk: Disabling unused clocks Feb 13 15:44:16.910313 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 15:44:16.910321 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:44:16.910329 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 15:44:16.910340 kernel: Run /init as init process Feb 13 15:44:16.910349 kernel: with arguments: Feb 13 15:44:16.910357 kernel: /init Feb 13 15:44:16.910365 kernel: with environment: Feb 13 15:44:16.910373 kernel: HOME=/ Feb 13 15:44:16.910384 kernel: TERM=linux Feb 13 15:44:16.910392 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:44:16.910402 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:44:16.910415 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:44:16.910427 systemd[1]: Detected virtualization kvm. Feb 13 15:44:16.910436 systemd[1]: Detected architecture x86-64. Feb 13 15:44:16.910444 systemd[1]: Running in initrd. Feb 13 15:44:16.910453 systemd[1]: No hostname configured, using default hostname. Feb 13 15:44:16.910462 systemd[1]: Hostname set to . Feb 13 15:44:16.910471 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:44:16.910480 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:44:16.910492 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:44:16.910501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:44:16.910511 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:44:16.910520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:44:16.910529 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:44:16.910538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:44:16.910549 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:44:16.910561 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:44:16.910570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:44:16.910579 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:44:16.910587 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:44:16.910596 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:44:16.910605 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:44:16.910614 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:44:16.910622 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:44:16.910634 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:44:16.910643 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 15:44:16.910652 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:44:16.910661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:44:16.910669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:44:16.910678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:44:16.910687 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:44:16.910696 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:44:16.910705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:44:16.910716 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:44:16.910725 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:44:16.910733 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:44:16.910742 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:44:16.910751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:44:16.910759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:44:16.910768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:44:16.910780 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:44:16.910789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:44:16.910798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:44:16.910846 systemd-journald[193]: Collecting audit messages is disabled. Feb 13 15:44:16.910873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:44:16.910882 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:44:16.910891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:44:16.910900 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:44:16.910910 systemd-journald[193]: Journal started Feb 13 15:44:16.910932 systemd-journald[193]: Runtime Journal (/run/log/journal/4458a9d19f4747cf870b0a85728c0fb7) is 6M, max 48.2M, 42.2M free. Feb 13 15:44:16.888902 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:44:16.913331 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:44:16.914556 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:44:16.919837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:44:16.921946 kernel: Bridge firewalling registered Feb 13 15:44:16.921853 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:44:16.921855 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:44:16.932034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:44:16.935977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:44:16.940176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:44:16.944425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 15:44:16.948571 dracut-cmdline[220]: dracut-dracut-053 Feb 13 15:44:16.951732 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:44:16.960002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:44:16.965941 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:44:17.000478 systemd-resolved[248]: Positive Trust Anchors: Feb 13 15:44:17.000494 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:44:17.000524 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:44:17.002931 systemd-resolved[248]: Defaulting to hostname 'linux'. Feb 13 15:44:17.004074 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:44:17.009731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:44:17.057848 kernel: SCSI subsystem initialized Feb 13 15:44:17.068840 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:44:17.082864 kernel: iscsi: registered transport (tcp) Feb 13 15:44:17.106846 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:44:17.106899 kernel: QLogic iSCSI HBA Driver Feb 13 15:44:17.156308 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:44:17.169931 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:44:17.197174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:44:17.197212 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:44:17.198218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:44:17.240852 kernel: raid6: avx2x4 gen() 30489 MB/s Feb 13 15:44:17.257835 kernel: raid6: avx2x2 gen() 31092 MB/s Feb 13 15:44:17.274917 kernel: raid6: avx2x1 gen() 25684 MB/s Feb 13 15:44:17.274951 kernel: raid6: using algorithm avx2x2 gen() 31092 MB/s Feb 13 15:44:17.292954 kernel: raid6: .... xor() 18121 MB/s, rmw enabled Feb 13 15:44:17.292976 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:44:17.316836 kernel: xor: automatically using best checksumming function avx Feb 13 15:44:17.467853 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:44:17.481836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:44:17.490080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:44:17.505881 systemd-udevd[416]: Using default interface naming scheme 'v255'. Feb 13 15:44:17.512237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:44:17.528002 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:44:17.541878 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 13 15:44:17.576494 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:44:17.590020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:44:17.656902 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:44:17.666987 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:44:17.682395 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:44:17.684757 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:44:17.687309 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:44:17.688589 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:44:17.696055 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:44:17.702619 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:44:17.702863 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:44:17.702875 kernel: GPT:9289727 != 19775487 Feb 13 15:44:17.702885 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:44:17.702901 kernel: GPT:9289727 != 19775487 Feb 13 15:44:17.702911 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:44:17.702921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:44:17.706899 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:44:17.705936 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:44:17.716830 kernel: libata version 3.00 loaded. Feb 13 15:44:17.718186 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:44:17.728888 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:44:17.759265 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:44:17.759288 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:44:17.760469 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:44:17.760620 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 15:44:17.760632 kernel: AES CTR mode by8 optimization enabled Feb 13 15:44:17.760643 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (463) Feb 13 15:44:17.760654 kernel: scsi host0: ahci Feb 13 15:44:17.760832 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) Feb 13 15:44:17.760844 kernel: scsi host1: ahci Feb 13 15:44:17.760998 kernel: scsi host2: ahci Feb 13 15:44:17.761165 kernel: scsi host3: ahci Feb 13 15:44:17.761315 kernel: scsi host4: ahci Feb 13 15:44:17.761461 kernel: scsi host5: ahci Feb 13 15:44:17.761612 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:44:17.761624 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:44:17.761635 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:44:17.761645 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:44:17.761655 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:44:17.761666 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:44:17.731019 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:44:17.731186 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:44:17.732874 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:44:17.734220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:44:17.734408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:44:17.738573 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:44:17.746070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:44:17.774849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:44:17.808098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:44:17.818858 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:44:17.828137 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:44:17.831399 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:44:17.843932 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:44:17.846244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:44:17.846299 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:44:17.849963 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:44:17.853013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:44:17.855309 disk-uuid[556]: Primary Header is updated. Feb 13 15:44:17.855309 disk-uuid[556]: Secondary Entries is updated. Feb 13 15:44:17.855309 disk-uuid[556]: Secondary Header is updated. Feb 13 15:44:17.859832 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:44:17.863843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:44:17.869931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:44:17.881078 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:44:17.900648 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:44:18.064843 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:44:18.064909 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:44:18.065840 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:44:18.065856 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:44:18.067075 kernel: ata3.00: applying bridge limits Feb 13 15:44:18.067840 kernel: ata3.00: configured for UDMA/100 Feb 13 15:44:18.072843 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:44:18.073842 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:44:18.073855 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:44:18.075491 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:44:18.124844 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:44:18.139925 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:44:18.139960 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:44:18.864849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:44:18.865519 disk-uuid[558]: The operation has completed successfully. Feb 13 15:44:18.896526 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:44:18.896649 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:44:18.935001 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:44:18.938483 sh[598]: Success Feb 13 15:44:18.950835 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:44:18.986901 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:44:19.002377 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:44:19.006099 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:44:19.020884 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85 Feb 13 15:44:19.020928 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:44:19.020940 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:44:19.021903 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:44:19.023284 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:44:19.027322 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:44:19.028351 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:44:19.037082 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:44:19.038185 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:44:19.055444 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:44:19.055479 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:44:19.055490 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:44:19.059668 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:44:19.068559 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 15:44:19.070157 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773 Feb 13 15:44:19.080071 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:44:19.087023 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:44:19.140924 ignition[701]: Ignition 2.20.0 Feb 13 15:44:19.140936 ignition[701]: Stage: fetch-offline Feb 13 15:44:19.140978 ignition[701]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:44:19.140998 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:44:19.141099 ignition[701]: parsed url from cmdline: "" Feb 13 15:44:19.141103 ignition[701]: no config URL provided Feb 13 15:44:19.141109 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:44:19.141118 ignition[701]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:44:19.141145 ignition[701]: op(1): [started] loading QEMU firmware config module Feb 13 15:44:19.141150 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:44:19.150124 ignition[701]: op(1): [finished] loading QEMU firmware config module Feb 13 15:44:19.159775 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:44:19.170965 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:44:19.195362 ignition[701]: parsing config with SHA512: 20e39e6e8137298937ff3d42899eb2ede62168fc1bcdce9cda8c018dd61bef856391125d8e1a14e47cb0c9adaef87c348c9587fe4304803e2258f07bb5f7f8a7 Feb 13 15:44:19.197836 systemd-networkd[787]: lo: Link UP Feb 13 15:44:19.197847 systemd-networkd[787]: lo: Gained carrier Feb 13 15:44:19.199559 systemd-networkd[787]: Enumeration completed Feb 13 15:44:19.199915 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:44:19.199919 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:44:19.200110 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:44:19.200676 systemd-networkd[787]: eth0: Link UP Feb 13 15:44:19.200680 systemd-networkd[787]: eth0: Gained carrier Feb 13 15:44:19.200686 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:44:19.201550 systemd[1]: Reached target network.target - Network. Feb 13 15:44:19.214482 unknown[701]: fetched base config from "system" Feb 13 15:44:19.214495 unknown[701]: fetched user config from "qemu" Feb 13 15:44:19.216542 ignition[701]: fetch-offline: fetch-offline passed Feb 13 15:44:19.217449 ignition[701]: Ignition finished successfully Feb 13 15:44:19.220705 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:44:19.221529 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:44:19.227042 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:44:19.227970 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:44:19.243730 ignition[792]: Ignition 2.20.0
Feb 13 15:44:19.243742 ignition[792]: Stage: kargs
Feb 13 15:44:19.243919 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:44:19.243931 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:44:19.244740 ignition[792]: kargs: kargs passed
Feb 13 15:44:19.244775 ignition[792]: Ignition finished successfully
Feb 13 15:44:19.251531 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:44:19.264947 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:44:19.275828 ignition[801]: Ignition 2.20.0
Feb 13 15:44:19.275838 ignition[801]: Stage: disks
Feb 13 15:44:19.276007 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:44:19.276019 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:44:19.276867 ignition[801]: disks: disks passed
Feb 13 15:44:19.276905 ignition[801]: Ignition finished successfully
Feb 13 15:44:19.282514 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:44:19.283769 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:44:19.285578 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:44:19.287834 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:44:19.288934 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:44:19.291150 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:44:19.299990 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:44:19.311534 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:44:19.328704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:44:20.027888 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:44:20.111833 kernel: EXT4-fs (vda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none.
Feb 13 15:44:20.112040 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:44:20.112917 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:44:20.122908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:44:20.124283 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:44:20.125859 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:44:20.125900 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:44:20.133577 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820)
Feb 13 15:44:20.133595 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:44:20.125923 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:44:20.140135 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:44:20.140150 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:44:20.140160 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:44:20.133138 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:44:20.136934 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:44:20.141260 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:44:20.174524 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:44:20.180023 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:44:20.185119 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:44:20.190036 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:44:20.272846 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:44:20.286896 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:44:20.288035 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:44:20.298834 kernel: BTRFS info (device vda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:44:20.314553 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:44:20.319524 ignition[934]: INFO : Ignition 2.20.0
Feb 13 15:44:20.319524 ignition[934]: INFO : Stage: mount
Feb 13 15:44:20.321218 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:44:20.321218 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:44:20.324029 ignition[934]: INFO : mount: mount passed
Feb 13 15:44:20.324781 ignition[934]: INFO : Ignition finished successfully
Feb 13 15:44:20.327629 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:44:20.340081 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:44:20.897992 systemd-networkd[787]: eth0: Gained IPv6LL
Feb 13 15:44:21.020048 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:44:21.030100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:44:21.037244 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (947)
Feb 13 15:44:21.037284 kernel: BTRFS info (device vda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:44:21.037301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:44:21.038118 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:44:21.040833 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:44:21.042760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:44:21.076210 ignition[964]: INFO : Ignition 2.20.0
Feb 13 15:44:21.076210 ignition[964]: INFO : Stage: files
Feb 13 15:44:21.077822 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:44:21.077822 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:44:21.080482 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:44:21.081753 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:44:21.081753 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:44:21.084648 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:44:21.086063 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:44:21.086063 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:44:21.085195 unknown[964]: wrote ssh authorized keys file for user: core
Feb 13 15:44:21.089852 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 15:44:21.089852 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Feb 13 15:44:21.123529 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:44:21.221368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 15:44:21.221368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:44:21.225205 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:44:21.720250 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:44:21.821224 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:44:21.821224 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:44:21.825083 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 15:44:22.315237 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:44:22.697979 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 15:44:22.697979 ignition[964]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:44:22.701910 ignition[964]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:44:22.704095 ignition[964]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:44:22.704095 ignition[964]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:44:22.707124 ignition[964]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:44:22.707124 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:44:22.710222 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:44:22.710222 ignition[964]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:44:22.710222 ignition[964]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:44:22.727533 ignition[964]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:44:22.731150 ignition[964]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:44:22.732770 ignition[964]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:44:22.732770 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:44:22.735561 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:44:22.735561 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:44:22.735561 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:44:22.735561 ignition[964]: INFO : files: files passed
Feb 13 15:44:22.735561 ignition[964]: INFO : Ignition finished successfully
Feb 13 15:44:22.743752 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:44:22.750992 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:44:22.753781 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:44:22.756427 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:44:22.757433 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:44:22.762357 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:44:22.766050 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:44:22.767686 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:44:22.770509 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:44:22.768126 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:44:22.770712 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:44:22.779929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:44:22.802666 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:44:22.802782 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:44:22.805048 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:44:22.807092 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:44:22.809059 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:44:22.817936 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:44:22.829421 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:44:22.837979 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:44:22.847968 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:44:22.849229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:44:22.851414 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:44:22.853382 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:44:22.853515 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:44:22.855666 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:44:22.857368 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:44:22.859350 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:44:22.861344 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:44:22.863332 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:44:22.865458 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:44:22.867587 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:44:22.869801 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:44:22.871773 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:44:22.873961 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:44:22.875664 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:44:22.875797 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:44:22.877877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:44:22.879499 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:44:22.881596 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:44:22.881718 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:44:22.883901 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:44:22.884042 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:44:22.886408 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:44:22.886536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:44:22.888605 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:44:22.890548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:44:22.894876 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:44:22.896440 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:44:22.898419 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:44:22.900213 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:44:22.900304 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:44:22.902242 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:44:22.902340 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:44:22.904752 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:44:22.904878 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:44:22.906865 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:44:22.906979 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:44:22.916957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:44:22.917874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:44:22.918007 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:44:22.920974 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:44:22.921912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:44:22.922129 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:44:22.924491 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:44:22.924640 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:44:22.930945 ignition[1019]: INFO : Ignition 2.20.0
Feb 13 15:44:22.930945 ignition[1019]: INFO : Stage: umount
Feb 13 15:44:22.933837 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:44:22.933837 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:44:22.933837 ignition[1019]: INFO : umount: umount passed
Feb 13 15:44:22.933837 ignition[1019]: INFO : Ignition finished successfully
Feb 13 15:44:22.931356 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:44:22.931465 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:44:22.934906 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:44:22.935014 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:44:22.937870 systemd[1]: Stopped target network.target - Network.
Feb 13 15:44:22.939133 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:44:22.939198 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:44:22.941068 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:44:22.941115 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:44:22.943015 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:44:22.943064 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:44:22.943487 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:44:22.943532 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:44:22.943935 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:44:22.944205 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:44:22.951246 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:44:22.951875 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:44:22.952022 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:44:22.956505 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:44:22.956719 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:44:22.956920 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:44:22.959663 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:44:22.960636 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:44:22.960694 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:44:22.973926 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:44:22.975446 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:44:22.975500 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:44:22.977911 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:44:22.977960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:44:22.980252 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:44:22.980299 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:44:22.982307 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:44:22.982357 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:44:22.984632 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:44:22.988238 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:44:22.988303 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:44:22.995614 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:44:22.995726 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:44:23.005520 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:44:23.005723 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:44:23.009233 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:44:23.009282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:44:23.011284 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:44:23.011320 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:44:23.013241 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:44:23.013288 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:44:23.015542 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:44:23.015590 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:44:23.017481 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:44:23.017527 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:44:23.032071 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:44:23.034327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:44:23.034405 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:44:23.038067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:44:23.038125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:44:23.041216 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:44:23.041281 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:44:23.041670 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:44:23.041796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:44:23.211010 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:44:23.211137 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:44:23.213131 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:44:23.214908 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:44:23.214964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:44:23.228091 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:44:23.234894 systemd[1]: Switching root.
Feb 13 15:44:23.273299 systemd-journald[193]: Journal stopped
Feb 13 15:44:24.582854 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:44:24.582913 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:44:24.582932 kernel: SELinux: policy capability open_perms=1
Feb 13 15:44:24.582943 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:44:24.582960 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:44:24.582976 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:44:24.582988 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:44:24.582999 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:44:24.583010 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:44:24.583026 kernel: audit: type=1403 audit(1739461463.814:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:44:24.583038 systemd[1]: Successfully loaded SELinux policy in 39.853ms.
Feb 13 15:44:24.583063 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.911ms.
Feb 13 15:44:24.583077 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:44:24.583090 systemd[1]: Detected virtualization kvm.
Feb 13 15:44:24.583102 systemd[1]: Detected architecture x86-64.
Feb 13 15:44:24.583114 systemd[1]: Detected first boot.
Feb 13 15:44:24.583126 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:44:24.583139 zram_generator::config[1066]: No configuration found.
Feb 13 15:44:24.583152 kernel: Guest personality initialized and is inactive
Feb 13 15:44:24.583166 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 15:44:24.583183 kernel: Initialized host personality
Feb 13 15:44:24.583194 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:44:24.583206 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:44:24.583220 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:44:24.583233 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:44:24.583245 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:44:24.583257 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:44:24.583269 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:44:24.583284 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:44:24.583296 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:44:24.583309 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:44:24.583321 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:44:24.583333 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:44:24.583345 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:44:24.583358 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:44:24.583374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:44:24.583389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:44:24.583401 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:44:24.583413 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:44:24.583425 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:44:24.583438 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:44:24.583450 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:44:24.583462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:44:24.583475 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:44:24.583490 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:44:24.583502 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:44:24.583514 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:44:24.583526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:44:24.583538 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:44:24.583550 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:44:24.583562 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:44:24.583574 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:44:24.583586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:44:24.583601 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:44:24.583614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:44:24.583626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:44:24.583638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:44:24.583650 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:44:24.583662 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:44:24.583674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:44:24.583686 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:44:24.583698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:24.583713 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:44:24.583726 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:44:24.583738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:44:24.583751 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:44:24.583763 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:44:24.583775 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:44:24.583788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:44:24.583800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:44:24.583921 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:44:24.583936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:44:24.583948 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:44:24.583960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:44:24.583972 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:44:24.583984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:44:24.583996 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:44:24.584008 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:44:24.584020 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:44:24.584035 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:44:24.584047 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:44:24.584060 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:44:24.584072 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:44:24.584085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:44:24.584096 kernel: fuse: init (API version 7.39)
Feb 13 15:44:24.584108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:44:24.584120 kernel: loop: module loaded
Feb 13 15:44:24.584134 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:44:24.584146 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:44:24.584158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:44:24.584187 systemd-journald[1131]: Collecting audit messages is disabled.
Feb 13 15:44:24.584211 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:44:24.584224 systemd[1]: Stopped verity-setup.service.
Feb 13 15:44:24.584237 systemd-journald[1131]: Journal started
Feb 13 15:44:24.584259 systemd-journald[1131]: Runtime Journal (/run/log/journal/4458a9d19f4747cf870b0a85728c0fb7) is 6M, max 48.2M, 42.2M free.
Feb 13 15:44:24.375829 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:44:24.390650 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:44:24.391145 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:44:24.589686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:24.600830 kernel: ACPI: bus type drm_connector registered
Feb 13 15:44:24.610923 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:44:24.612081 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:44:24.613214 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:44:24.614381 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:44:24.615438 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:44:24.616595 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:44:24.617783 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:44:24.619051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:44:24.620538 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:44:24.620756 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:44:24.622243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:44:24.622453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:44:24.623872 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:44:24.624083 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:44:24.625400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:44:24.625605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:44:24.627173 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:44:24.627379 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:44:24.628746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:44:24.629135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:44:24.630539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:44:24.631956 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:44:24.633725 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:44:24.635365 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:44:24.649948 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:44:24.664432 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:44:24.667425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:44:24.668772 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:44:24.668807 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:44:24.671715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:44:24.674000 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:44:24.678887 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:44:24.680052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:44:24.696876 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:44:24.701000 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:44:24.702714 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:44:24.706580 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:44:24.708109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:44:24.709348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:44:24.711541 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:44:24.712401 systemd-journald[1131]: Time spent on flushing to /var/log/journal/4458a9d19f4747cf870b0a85728c0fb7 is 13.249ms for 1059 entries.
Feb 13 15:44:24.712401 systemd-journald[1131]: System Journal (/var/log/journal/4458a9d19f4747cf870b0a85728c0fb7) is 8M, max 195.6M, 187.6M free.
Feb 13 15:44:25.013707 systemd-journald[1131]: Received client request to flush runtime journal.
Feb 13 15:44:25.013764 kernel: loop0: detected capacity change from 0 to 218376
Feb 13 15:44:25.013788 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:44:25.013833 kernel: loop1: detected capacity change from 0 to 147912
Feb 13 15:44:25.013861 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 15:44:25.013880 kernel: loop3: detected capacity change from 0 to 218376
Feb 13 15:44:25.013903 kernel: loop4: detected capacity change from 0 to 147912
Feb 13 15:44:25.013922 kernel: loop5: detected capacity change from 0 to 138176
Feb 13 15:44:24.715984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:44:24.718098 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:44:24.719993 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:44:24.726964 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:44:24.728540 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:44:24.741985 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:44:24.746996 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:44:24.749450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:44:24.757477 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:44:24.811859 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:44:24.824045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:44:24.844542 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Feb 13 15:44:24.844556 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Feb 13 15:44:24.850649 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:44:24.960974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:44:24.963517 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:44:24.973089 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:44:24.998407 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:44:24.999342 (sd-merge)[1204]: Merged extensions into '/usr'.
Feb 13 15:44:25.004192 systemd[1]: Reload requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:44:25.004203 systemd[1]: Reloading...
Feb 13 15:44:25.062733 zram_generator::config[1232]: No configuration found.
Feb 13 15:44:25.173393 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:44:25.186507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:44:25.252753 systemd[1]: Reloading finished in 248 ms.
Feb 13 15:44:25.275966 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:44:25.277791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:44:25.290371 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:44:25.292356 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:44:25.304531 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:44:25.304546 systemd[1]: Reloading...
Feb 13 15:44:25.313352 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:44:25.313624 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:44:25.314560 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:44:25.314863 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Feb 13 15:44:25.314939 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Feb 13 15:44:25.319886 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:44:25.320408 systemd-tmpfiles[1274]: Skipping /boot
Feb 13 15:44:25.333271 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:44:25.333346 systemd-tmpfiles[1274]: Skipping /boot
Feb 13 15:44:25.369911 zram_generator::config[1305]: No configuration found.
Feb 13 15:44:25.482678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:44:25.547944 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:44:25.548298 systemd[1]: Reloading finished in 243 ms.
Feb 13 15:44:25.572184 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:44:25.573772 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:44:25.597645 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:44:25.607539 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:44:25.610077 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:44:25.612467 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:44:25.617080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:44:25.622052 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:44:25.626766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:25.626960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:44:25.628300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:44:25.632174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:44:25.647308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:44:25.648519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:44:25.648629 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:44:25.651452 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:44:25.669733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:25.671572 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:44:25.673689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:44:25.673922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:44:25.675918 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:44:25.678789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:44:25.679027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:44:25.682266 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:44:25.682499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:44:25.694717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:25.694957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:44:25.699030 augenrules[1379]: No rules
Feb 13 15:44:25.702186 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:44:25.705581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:44:25.714263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:44:25.716539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:44:25.716690 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:44:25.727730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:44:25.731831 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:44:25.732900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:25.737714 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:44:25.757549 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:44:25.758004 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:44:25.759549 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:44:25.761512 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:44:25.763471 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:44:25.763754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:44:25.765475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:44:25.765800 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:44:25.768002 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:44:25.768264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:44:25.770933 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:44:25.773386 systemd-udevd[1390]: Using default interface naming scheme 'v255'.
Feb 13 15:44:25.780182 systemd-resolved[1348]: Positive Trust Anchors:
Feb 13 15:44:25.780205 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:44:25.780237 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:44:25.784269 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:25.784900 systemd-resolved[1348]: Defaulting to hostname 'linux'.
Feb 13 15:44:25.790221 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:44:25.791450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:44:25.792725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:44:25.796400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:44:25.799217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:44:25.830090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:44:25.831355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:44:25.833566 augenrules[1400]: /sbin/augenrules: No change
Feb 13 15:44:25.831482 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:44:25.831622 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:44:25.831700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:44:25.832893 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:44:25.834328 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:44:25.842604 augenrules[1440]: No rules
Feb 13 15:44:25.863372 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:44:25.863900 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:44:25.865623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:44:25.866936 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:44:25.889231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1419)
Feb 13 15:44:25.880471 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:44:25.880689 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:44:25.882763 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:44:25.883038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:44:25.888412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:44:25.888623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:44:25.896426 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:44:25.918760 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:44:25.919839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:44:25.927895 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:44:25.936260 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 15:44:25.940972 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 15:44:25.941164 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 15:44:25.941349 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 15:44:25.944761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:44:25.947787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:44:25.959013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:44:25.972663 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 15:44:25.974004 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:44:25.977930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:44:25.977991 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:44:25.990943 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:44:25.995896 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:44:25.997442 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:44:26.023211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:44:26.080466 kernel: kvm_amd: TSC scaling supported Feb 13 15:44:26.080515 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:44:26.080528 kernel: kvm_amd: Nested Paging enabled Feb 13 15:44:26.080561 kernel: kvm_amd: LBR virtualization supported Feb 13 15:44:26.081378 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:44:26.081479 kernel: kvm_amd: Virtual GIF supported Feb 13 15:44:26.113838 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:44:26.116983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:44:26.123440 systemd-networkd[1463]: lo: Link UP Feb 13 15:44:26.123451 systemd-networkd[1463]: lo: Gained carrier Feb 13 15:44:26.125275 systemd-networkd[1463]: Enumeration completed Feb 13 15:44:26.125411 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:44:26.125649 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:44:26.125654 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:44:26.126519 systemd-networkd[1463]: eth0: Link UP Feb 13 15:44:26.126525 systemd-networkd[1463]: eth0: Gained carrier Feb 13 15:44:26.126537 systemd-networkd[1463]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:44:26.126790 systemd[1]: Reached target network.target - Network. Feb 13 15:44:26.134909 systemd-networkd[1463]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:44:26.134947 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:44:26.137584 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Feb 13 15:44:27.441383 systemd-resolved[1348]: Clock change detected. Flushing caches. Feb 13 15:44:27.441419 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:44:27.441465 systemd-timesyncd[1465]: Initial clock synchronization to Thu 2025-02-13 15:44:27.441351 UTC. Feb 13 15:44:27.441695 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:44:27.443003 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:44:27.444617 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:44:27.447049 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:44:27.449232 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:44:27.458545 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:44:27.462839 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:44:27.499791 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:44:27.501298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:44:27.502428 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:44:27.503597 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:44:27.504868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
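The jump in journal timestamps above (from 15:44:26.13 to 15:44:27.44) is the clock step applied once systemd-timesyncd reached 10.0.0.1:123, which is why systemd-resolved logs "Clock change detected. Flushing caches." A rough sketch of the apparent step, assuming the two surrounding journal entries were emitted essentially back to back:

    # Rough estimate of the clock step applied by systemd-timesyncd, using the
    # last pre-sync and first post-sync timestamps from the journal above.
    # Assumes negligible real elapsed time between the two entries.
    from datetime import datetime

    before = datetime.fromisoformat("2025-02-13 15:44:26.137584")  # last pre-sync entry
    after  = datetime.fromisoformat("2025-02-13 15:44:27.441383")  # first post-sync entry
    print(f"apparent clock step: ~{(after - before).total_seconds():.3f}s")  # ~1.304s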
Feb 13 15:44:27.506322 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:44:27.507623 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:44:27.508916 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:44:27.510200 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:44:27.510228 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:44:27.511157 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:44:27.512874 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:44:27.515445 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:44:27.518960 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:44:27.537848 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:44:27.539142 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:44:27.542652 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:44:27.544216 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:44:27.555551 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:44:27.557221 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:44:27.558400 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:44:27.559392 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:44:27.559677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:44:27.559707 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:44:27.560675 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:44:27.562696 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:44:27.565967 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:44:27.566265 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:44:27.570014 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:44:27.571222 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:44:27.574810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:44:27.575230 jq[1488]: false Feb 13 15:44:27.577280 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:44:27.581102 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:44:27.584069 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 15:44:27.595160 dbus-daemon[1487]: [system] SELinux support is enabled Feb 13 15:44:27.599342 extend-filesystems[1489]: Found loop3 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found loop4 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found loop5 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found sr0 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda1 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda2 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda3 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found usr Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda4 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda6 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda7 Feb 13 15:44:27.599342 extend-filesystems[1489]: Found vda9 Feb 13 15:44:27.599342 extend-filesystems[1489]: Checking size of /dev/vda9 Feb 13 15:44:27.599193 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:44:27.610086 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:44:27.610621 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:44:27.611302 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:44:27.645675 update_engine[1503]: I20250213 15:44:27.644655 1503 main.cc:92] Flatcar Update Engine starting Feb 13 15:44:27.616014 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:44:27.647752 update_engine[1503]: I20250213 15:44:27.646302 1503 update_check_scheduler.cc:74] Next update check in 7m50s Feb 13 15:44:27.647785 jq[1504]: true Feb 13 15:44:27.632203 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:44:27.635210 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:44:27.649192 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:44:27.649629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:44:27.650126 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:44:27.650438 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:44:27.653026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:44:27.653342 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:44:27.664709 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:44:27.671682 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:44:27.671720 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:44:27.674847 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:44:27.674873 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:44:27.676877 extend-filesystems[1489]: Resized partition /dev/vda9 Feb 13 15:44:27.678980 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:44:27.689009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1417) Feb 13 15:44:27.689044 tar[1509]: linux-amd64/LICENSE Feb 13 15:44:27.689044 tar[1509]: linux-amd64/helm Feb 13 15:44:27.685390 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:44:27.696159 jq[1510]: true Feb 13 15:44:27.698081 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:44:27.756993 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:44:27.762356 systemd-logind[1500]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:44:27.765104 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:44:27.762380 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:44:27.764373 systemd-logind[1500]: New seat seat0. Feb 13 15:44:27.767477 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:44:27.788421 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:44:27.797127 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:44:27.805038 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:44:27.805344 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:44:27.812137 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:44:27.903460 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:44:27.915174 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:44:27.929300 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:44:27.930872 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:44:28.033914 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:44:28.139954 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:44:28.326171 containerd[1511]: time="2025-02-13T15:44:28.325956481Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:44:28.326910 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:44:28.326910 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:44:28.326910 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:44:28.331522 extend-filesystems[1489]: Resized filesystem in /dev/vda9 Feb 13 15:44:28.333352 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:44:28.332684 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:44:28.333026 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:44:28.335558 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:44:28.339374 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:44:28.352611 containerd[1511]: time="2025-02-13T15:44:28.352544654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
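The EXT4 messages above show the root filesystem on /dev/vda9 being resized online from 553472 to 1864699 blocks at a 4 KiB block size. A quick back-of-the-envelope check of what that means in bytes:

    # Sanity check of the online resize reported above for /dev/vda9
    # (4 KiB blocks, 553472 -> 1864699 blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699
    gib = 1024 ** 3
    print(f"before: {old_blocks * BLOCK / gib:.2f} GiB")                 # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / gib:.2f} GiB")                 # ~7.11 GiB
    print(f"growth: {(new_blocks - old_blocks) * BLOCK / gib:.2f} GiB")  # ~5.00 GiB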
type=io.containerd.snapshotter.v1 Feb 13 15:44:28.354458 containerd[1511]: time="2025-02-13T15:44:28.354411274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:44:28.354458 containerd[1511]: time="2025-02-13T15:44:28.354444817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:44:28.354513 containerd[1511]: time="2025-02-13T15:44:28.354463201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:44:28.354695 containerd[1511]: time="2025-02-13T15:44:28.354668045Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:44:28.354718 containerd[1511]: time="2025-02-13T15:44:28.354693573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:44:28.354798 containerd[1511]: time="2025-02-13T15:44:28.354776649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:44:28.354821 containerd[1511]: time="2025-02-13T15:44:28.354797328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355140 containerd[1511]: time="2025-02-13T15:44:28.355108882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355140 containerd[1511]: time="2025-02-13T15:44:28.355132596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355182 containerd[1511]: time="2025-02-13T15:44:28.355149528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355182 containerd[1511]: time="2025-02-13T15:44:28.355162042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355288 containerd[1511]: time="2025-02-13T15:44:28.355268822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355565 containerd[1511]: time="2025-02-13T15:44:28.355538167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355759 containerd[1511]: time="2025-02-13T15:44:28.355733112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:44:28.355759 containerd[1511]: time="2025-02-13T15:44:28.355752929Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 15:44:28.355893 containerd[1511]: time="2025-02-13T15:44:28.355861643Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:44:28.355983 containerd[1511]: time="2025-02-13T15:44:28.355958054Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:44:28.361844 tar[1509]: linux-amd64/README.md Feb 13 15:44:28.382100 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:44:28.416979 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:44:28.419779 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:60944.service - OpenSSH per-connection server daemon (10.0.0.1:60944). Feb 13 15:44:28.506695 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 60944 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:28.508995 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:28.521228 systemd-logind[1500]: New session 1 of user core. Feb 13 15:44:28.522768 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:44:28.540429 containerd[1511]: time="2025-02-13T15:44:28.540221147Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:44:28.540429 containerd[1511]: time="2025-02-13T15:44:28.540278665Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:44:28.540429 containerd[1511]: time="2025-02-13T15:44:28.540300145Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:44:28.540429 containerd[1511]: time="2025-02-13T15:44:28.540318440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:44:28.540429 containerd[1511]: time="2025-02-13T15:44:28.540331885Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:44:28.540371 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540501723Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540751191Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540858663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540872338Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540899579Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540912834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540944223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540959832Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540976303Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.540995429Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.541008143Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.541020767Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.541032699Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:44:28.541994 containerd[1511]: time="2025-02-13T15:44:28.541053268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541066633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541084897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541097581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541109623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541127076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541138437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541150219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541162092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541176168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541187279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541198690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541210372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541224218Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541241711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542311 containerd[1511]: time="2025-02-13T15:44:28.541253794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.541264193Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.541979144Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542001926Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542012997Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542024629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542033686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542045358Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542055667Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:44:28.542628 containerd[1511]: time="2025-02-13T15:44:28.542065065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:44:28.542812 containerd[1511]: time="2025-02-13T15:44:28.542337816Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:44:28.542812 containerd[1511]: time="2025-02-13T15:44:28.542384223Z" level=info msg="Connect containerd service" Feb 13 15:44:28.542812 containerd[1511]: time="2025-02-13T15:44:28.542423627Z" level=info msg="using legacy CRI server" Feb 13 15:44:28.542812 containerd[1511]: time="2025-02-13T15:44:28.542430169Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:44:28.542812 containerd[1511]: time="2025-02-13T15:44:28.542523294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:44:28.543156 containerd[1511]: time="2025-02-13T15:44:28.543122177Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:44:28.543749 
containerd[1511]: time="2025-02-13T15:44:28.543312674Z" level=info msg="Start subscribing containerd event" Feb 13 15:44:28.544540 containerd[1511]: time="2025-02-13T15:44:28.544490663Z" level=info msg="Start recovering state" Feb 13 15:44:28.544600 containerd[1511]: time="2025-02-13T15:44:28.544575842Z" level=info msg="Start event monitor" Feb 13 15:44:28.544600 containerd[1511]: time="2025-02-13T15:44:28.544591392Z" level=info msg="Start snapshots syncer" Feb 13 15:44:28.544600 containerd[1511]: time="2025-02-13T15:44:28.544601390Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:44:28.544675 containerd[1511]: time="2025-02-13T15:44:28.544610337Z" level=info msg="Start streaming server" Feb 13 15:44:28.544776 containerd[1511]: time="2025-02-13T15:44:28.543896789Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:44:28.544837 containerd[1511]: time="2025-02-13T15:44:28.544821433Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:44:28.544975 containerd[1511]: time="2025-02-13T15:44:28.544901313Z" level=info msg="containerd successfully booted in 0.220108s" Feb 13 15:44:28.545006 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:44:28.552415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:44:28.564366 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:44:28.568491 (systemd)[1583]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:44:28.570921 systemd-logind[1500]: New session c1 of user core. Feb 13 15:44:28.722281 systemd[1583]: Queued start job for default target default.target. Feb 13 15:44:28.734475 systemd[1583]: Created slice app.slice - User Application Slice. Feb 13 15:44:28.734511 systemd[1583]: Reached target paths.target - Paths. Feb 13 15:44:28.734562 systemd[1583]: Reached target timers.target - Timers. Feb 13 15:44:28.736325 systemd[1583]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:44:28.749240 systemd[1583]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:44:28.749380 systemd[1583]: Reached target sockets.target - Sockets. Feb 13 15:44:28.749434 systemd[1583]: Reached target basic.target - Basic System. Feb 13 15:44:28.749493 systemd[1583]: Reached target default.target - Main User Target. Feb 13 15:44:28.749534 systemd[1583]: Startup finished in 171ms. Feb 13 15:44:28.750095 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:44:28.752961 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:44:28.830241 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:60950.service - OpenSSH per-connection server daemon (10.0.0.1:60950). Feb 13 15:44:28.864743 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 60950 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:28.866342 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:28.871011 systemd-logind[1500]: New session 2 of user core. Feb 13 15:44:28.881084 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:44:28.937968 sshd[1596]: Connection closed by 10.0.0.1 port 60950 Feb 13 15:44:28.938376 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:28.951792 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:60950.service: Deactivated successfully. 
Feb 13 15:44:28.953791 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:44:28.955279 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:44:28.956571 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:60966.service - OpenSSH per-connection server daemon (10.0.0.1:60966). Feb 13 15:44:28.958704 systemd-logind[1500]: Removed session 2. Feb 13 15:44:28.993983 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 60966 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:28.995262 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:28.999603 systemd-logind[1500]: New session 3 of user core. Feb 13 15:44:29.015101 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:44:29.069136 sshd[1604]: Connection closed by 10.0.0.1 port 60966 Feb 13 15:44:29.069485 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:29.073371 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:60966.service: Deactivated successfully. Feb 13 15:44:29.075332 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:44:29.075974 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:44:29.076747 systemd-logind[1500]: Removed session 3. Feb 13 15:44:29.305119 systemd-networkd[1463]: eth0: Gained IPv6LL Feb 13 15:44:29.308496 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:44:29.310236 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:44:29.319166 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:44:29.321769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:44:29.323973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:44:29.340378 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:44:29.340687 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:44:29.349205 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:44:29.354242 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:44:30.010287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:44:30.012458 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:44:30.014072 systemd[1]: Startup finished in 755ms (kernel) + 7.110s (initrd) + 4.934s (userspace) = 12.800s. Feb 13 15:44:30.015390 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:44:30.421245 kubelet[1631]: E0213 15:44:30.421112 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:44:30.425194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:44:30.425400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:44:30.425774 systemd[1]: kubelet.service: Consumed 961ms CPU time, 255.1M memory peak. 
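The "Startup finished" line above breaks boot time into kernel, initrd, and userspace phases. The components as printed sum to 12.799 s rather than the reported 12.800 s because each phase is rounded to the millisecond before printing; a trivial check:

    # Sanity check of the boot-time breakdown reported by systemd above.
    kernel, initrd, userspace = 0.755, 7.110, 4.934
    print(f"total: {kernel + initrd + userspace:.3f}s")  # 12.799s vs. reported 12.800s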
Feb 13 15:44:39.082965 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). Feb 13 15:44:39.121819 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:39.123756 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:39.129009 systemd-logind[1500]: New session 4 of user core. Feb 13 15:44:39.139193 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:44:39.193542 sshd[1646]: Connection closed by 10.0.0.1 port 45342 Feb 13 15:44:39.194004 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:39.205306 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:45342.service: Deactivated successfully. Feb 13 15:44:39.207309 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:44:39.209165 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:44:39.222509 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:45354.service - OpenSSH per-connection server daemon (10.0.0.1:45354). Feb 13 15:44:39.224107 systemd-logind[1500]: Removed session 4. Feb 13 15:44:39.255849 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 45354 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:39.257498 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:39.262225 systemd-logind[1500]: New session 5 of user core. Feb 13 15:44:39.272100 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:44:39.323687 sshd[1654]: Connection closed by 10.0.0.1 port 45354 Feb 13 15:44:39.324186 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:39.336061 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:45354.service: Deactivated successfully. Feb 13 15:44:39.338234 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:44:39.339851 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:44:39.353291 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:55190.service - OpenSSH per-connection server daemon (10.0.0.1:55190). Feb 13 15:44:39.354608 systemd-logind[1500]: Removed session 5. Feb 13 15:44:39.388437 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 55190 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:39.390599 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:39.396981 systemd-logind[1500]: New session 6 of user core. Feb 13 15:44:39.406141 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:44:39.461609 sshd[1662]: Connection closed by 10.0.0.1 port 55190 Feb 13 15:44:39.462075 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:39.481228 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:55190.service: Deactivated successfully. Feb 13 15:44:39.483171 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:44:39.484922 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:44:39.494180 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:55196.service - OpenSSH per-connection server daemon (10.0.0.1:55196). Feb 13 15:44:39.495162 systemd-logind[1500]: Removed session 6. 
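The repeated "Accepted publickey ... SHA256:qTSy0Ch..." entries identify the client key by its OpenSSH-style fingerprint: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A minimal sketch of recomputing such a fingerprint; the authorized_keys path matches the one updated earlier in this log, but any public-key file would do:

    # Recompute an OpenSSH SHA256 key fingerprint (e.g. "SHA256:qTSy0Ch...")
    # from a public-key line: unpadded base64 of the SHA-256 digest of the
    # base64-decoded key blob.
    import base64, hashlib

    with open("/home/core/.ssh/authorized_keys") as f:
        key_type, blob_b64 = f.readline().split()[:2]   # e.g. "ssh-rsa AAAA... comment"

    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    print(f"{key_type} SHA256:{base64.b64encode(digest).decode().rstrip('=')}")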
Feb 13 15:44:39.529202 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 55196 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:39.530777 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:39.536487 systemd-logind[1500]: New session 7 of user core. Feb 13 15:44:39.546113 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:44:39.606641 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:44:39.607006 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:44:39.624527 sudo[1671]: pam_unix(sudo:session): session closed for user root Feb 13 15:44:39.626201 sshd[1670]: Connection closed by 10.0.0.1 port 55196 Feb 13 15:44:39.626734 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:39.636736 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:55196.service: Deactivated successfully. Feb 13 15:44:39.638741 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:44:39.640331 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:44:39.651252 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:55210.service - OpenSSH per-connection server daemon (10.0.0.1:55210). Feb 13 15:44:39.652671 systemd-logind[1500]: Removed session 7. Feb 13 15:44:39.684821 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 55210 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:39.686322 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:39.691001 systemd-logind[1500]: New session 8 of user core. Feb 13 15:44:39.704066 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:44:39.758166 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:44:39.758503 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:44:39.762601 sudo[1681]: pam_unix(sudo:session): session closed for user root Feb 13 15:44:39.768525 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:44:39.768875 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:44:39.788263 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:44:39.821107 augenrules[1703]: No rules Feb 13 15:44:39.823227 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:44:39.823599 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:44:39.824847 sudo[1680]: pam_unix(sudo:session): session closed for user root Feb 13 15:44:39.826593 sshd[1679]: Connection closed by 10.0.0.1 port 55210 Feb 13 15:44:39.826995 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Feb 13 15:44:39.835421 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:55210.service: Deactivated successfully. Feb 13 15:44:39.837448 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:44:39.839244 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:44:39.852217 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:55218.service - OpenSSH per-connection server daemon (10.0.0.1:55218). Feb 13 15:44:39.853167 systemd-logind[1500]: Removed session 8. 
Feb 13 15:44:39.886749 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 55218 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:44:39.888287 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:44:39.893080 systemd-logind[1500]: New session 9 of user core. Feb 13 15:44:39.903080 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:44:39.960357 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:44:39.960797 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:44:40.261192 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:44:40.261353 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:44:40.538263 dockerd[1736]: time="2025-02-13T15:44:40.537810063Z" level=info msg="Starting up" Feb 13 15:44:40.539636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:44:40.547226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:44:40.839864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:44:40.844383 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:44:40.996028 kubelet[1768]: E0213 15:44:40.995919 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:44:41.003238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:44:41.003474 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:44:41.003898 systemd[1]: kubelet.service: Consumed 216ms CPU time, 104.6M memory peak. Feb 13 15:44:41.084614 dockerd[1736]: time="2025-02-13T15:44:41.084562975Z" level=info msg="Loading containers: start." Feb 13 15:44:41.255968 kernel: Initializing XFRM netlink socket Feb 13 15:44:41.340466 systemd-networkd[1463]: docker0: Link UP Feb 13 15:44:41.388663 dockerd[1736]: time="2025-02-13T15:44:41.388592711Z" level=info msg="Loading containers: done." Feb 13 15:44:41.404185 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck462477522-merged.mount: Deactivated successfully. Feb 13 15:44:41.404717 dockerd[1736]: time="2025-02-13T15:44:41.404652583Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:44:41.404815 dockerd[1736]: time="2025-02-13T15:44:41.404781965Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:44:41.404976 dockerd[1736]: time="2025-02-13T15:44:41.404954499Z" level=info msg="Daemon has completed initialization" Feb 13 15:44:41.445704 dockerd[1736]: time="2025-02-13T15:44:41.445635746Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:44:41.445875 systemd[1]: Started docker.service - Docker Application Container Engine. 
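Once dockerd logs "API listen on /run/docker.sock" above, the daemon can be queried over that unix socket. A minimal standard-library sketch (run on the host, with permission to read the socket; it simply prints the raw HTTP response from the version endpoint):

    # Query the Docker API that the log reports listening on /run/docker.sock,
    # using only the standard library. Prints the raw HTTP response
    # (headers followed by the JSON version payload).
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    print(response.decode(errors="replace"))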
Feb 13 15:44:41.970293 containerd[1511]: time="2025-02-13T15:44:41.970241292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 15:44:42.649710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1105771270.mount: Deactivated successfully. Feb 13 15:44:45.025316 containerd[1511]: time="2025-02-13T15:44:45.025246425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:45.040520 containerd[1511]: time="2025-02-13T15:44:45.040448569Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 15:44:45.056552 containerd[1511]: time="2025-02-13T15:44:45.056490698Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:45.089494 containerd[1511]: time="2025-02-13T15:44:45.089399052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:45.090948 containerd[1511]: time="2025-02-13T15:44:45.090873336Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 3.120586458s" Feb 13 15:44:45.090948 containerd[1511]: time="2025-02-13T15:44:45.090940392Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 15:44:45.091653 containerd[1511]: time="2025-02-13T15:44:45.091544464Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 15:44:46.274729 containerd[1511]: time="2025-02-13T15:44:46.274672128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:46.275464 containerd[1511]: time="2025-02-13T15:44:46.275416584Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 15:44:46.276558 containerd[1511]: time="2025-02-13T15:44:46.276531044Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:46.279250 containerd[1511]: time="2025-02-13T15:44:46.279230686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:46.280104 containerd[1511]: time="2025-02-13T15:44:46.280071031Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.188494407s" Feb 13 
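The pull-completion entry above gives both a size and a duration for the kube-apiserver image, so an effective transfer rate can be read off directly (taking containerd's reported figures at face value; registry round-trips and layer unpacking are not separated out):

    # Effective pull rate for registry.k8s.io/kube-apiserver:v1.32.2,
    # using the size and duration reported by containerd above.
    size_bytes = 28_670_731
    seconds = 3.120586458
    print(f"~{size_bytes / seconds / 1024**2:.1f} MiB/s")  # ~8.8 MiB/s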
15:44:46.280104 containerd[1511]: time="2025-02-13T15:44:46.280099284Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 15:44:46.282764 containerd[1511]: time="2025-02-13T15:44:46.282719377Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 15:44:48.295091 containerd[1511]: time="2025-02-13T15:44:48.295007376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:48.312538 containerd[1511]: time="2025-02-13T15:44:48.312469297Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 15:44:48.326421 containerd[1511]: time="2025-02-13T15:44:48.326354943Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:48.351661 containerd[1511]: time="2025-02-13T15:44:48.351620105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:48.353055 containerd[1511]: time="2025-02-13T15:44:48.353018337Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 2.070257633s" Feb 13 15:44:48.353055 containerd[1511]: time="2025-02-13T15:44:48.353053533Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 15:44:48.353564 containerd[1511]: time="2025-02-13T15:44:48.353520218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 15:44:50.495336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215686451.mount: Deactivated successfully. 
Feb 13 15:44:51.141480 containerd[1511]: time="2025-02-13T15:44:51.141424628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:51.142488 containerd[1511]: time="2025-02-13T15:44:51.142445262Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 15:44:51.143593 containerd[1511]: time="2025-02-13T15:44:51.143559882Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:51.146951 containerd[1511]: time="2025-02-13T15:44:51.146890336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:51.147589 containerd[1511]: time="2025-02-13T15:44:51.147550394Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.793987085s" Feb 13 15:44:51.147589 containerd[1511]: time="2025-02-13T15:44:51.147579238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 15:44:51.148016 containerd[1511]: time="2025-02-13T15:44:51.147995889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 15:44:51.253798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:44:51.263077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:44:51.412147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:44:51.416241 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:44:51.761831 kubelet[2028]: E0213 15:44:51.761646 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:44:51.766123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:44:51.766400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:44:51.766775 systemd[1]: kubelet.service: Consumed 195ms CPU time, 106.1M memory peak. Feb 13 15:44:51.980708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858190532.mount: Deactivated successfully. 
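The kubelet keeps exiting on the missing /var/lib/kubelet/config.yaml and systemd keeps rescheduling it ("restart counter is at 1", then "at 2"). The gaps between the journal timestamps above suggest a restart delay of roughly ten seconds; the exact RestartSec= value is an assumption, since the unit file is not shown in this log:

    # Intervals between the kubelet failure and the two scheduled restarts,
    # using timestamps copied from the journal entries above.
    from datetime import datetime

    events = [
        "2025-02-13 15:44:30.425400",  # kubelet.service: Failed with result 'exit-code'
        "2025-02-13 15:44:40.539636",  # Scheduled restart job, restart counter is at 1
        "2025-02-13 15:44:51.253798",  # Scheduled restart job, restart counter is at 2
    ]
    ts = [datetime.fromisoformat(e) for e in events]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.1f}s between restarts")  # ~10.1s, ~10.7s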
Feb 13 15:44:52.946580 containerd[1511]: time="2025-02-13T15:44:52.946500915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:52.954713 containerd[1511]: time="2025-02-13T15:44:52.954647009Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 15:44:52.955988 containerd[1511]: time="2025-02-13T15:44:52.955957717Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:52.959516 containerd[1511]: time="2025-02-13T15:44:52.959456688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:52.960572 containerd[1511]: time="2025-02-13T15:44:52.960533156Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.812444323s" Feb 13 15:44:52.960616 containerd[1511]: time="2025-02-13T15:44:52.960571428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 15:44:52.961039 containerd[1511]: time="2025-02-13T15:44:52.961017945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:44:53.434426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900826882.mount: Deactivated successfully. 
Feb 13 15:44:53.439825 containerd[1511]: time="2025-02-13T15:44:53.439782258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:53.440648 containerd[1511]: time="2025-02-13T15:44:53.440614669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 15:44:53.441860 containerd[1511]: time="2025-02-13T15:44:53.441821271Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:53.444285 containerd[1511]: time="2025-02-13T15:44:53.444243903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:53.445182 containerd[1511]: time="2025-02-13T15:44:53.445149742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 484.102763ms" Feb 13 15:44:53.445254 containerd[1511]: time="2025-02-13T15:44:53.445184267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 15:44:53.445679 containerd[1511]: time="2025-02-13T15:44:53.445655099Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 15:44:53.899368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498288787.mount: Deactivated successfully. Feb 13 15:44:56.144963 containerd[1511]: time="2025-02-13T15:44:56.144895074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:56.146094 containerd[1511]: time="2025-02-13T15:44:56.146017849Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 15:44:56.147142 containerd[1511]: time="2025-02-13T15:44:56.147071665Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:56.151215 containerd[1511]: time="2025-02-13T15:44:56.151172293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:44:56.152194 containerd[1511]: time="2025-02-13T15:44:56.152167870Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.7064858s" Feb 13 15:44:56.152194 containerd[1511]: time="2025-02-13T15:44:56.152190603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 15:44:58.233436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
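The pull records above report both "bytes read" and a wall-clock duration for each image. A back-of-the-envelope sketch of the implied throughput; the byte counts and durations are copied from the log, the MB/s figures are derived here and not logged anywhere:

```python
"""Throughput implied by the image pulls logged above (derived, not logged)."""
pulls = {
    "registry.k8s.io/kube-proxy:v1.32.2":      (30_908_839, 2.793987085),
    "registry.k8s.io/coredns/coredns:v1.11.3": (18_565_241, 1.812444323),
    "registry.k8s.io/pause:3.10":              (321_138,    0.484102763),
    "registry.k8s.io/etcd:3.5.16-0":           (57_551_320, 2.7064858),
}

for image, (bytes_read, seconds) in pulls.items():
    mb_per_s = bytes_read / seconds / 1_000_000
    print(f"{image}: {bytes_read} bytes in {seconds:.3f}s ~ {mb_per_s:.1f} MB/s")
```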
Feb 13 15:44:58.233591 systemd[1]: kubelet.service: Consumed 195ms CPU time, 106.1M memory peak. Feb 13 15:44:58.243113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:44:58.266518 systemd[1]: Reload requested from client PID 2176 ('systemctl') (unit session-9.scope)... Feb 13 15:44:58.266535 systemd[1]: Reloading... Feb 13 15:44:58.343950 zram_generator::config[2223]: No configuration found. Feb 13 15:44:58.598004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:44:58.699103 systemd[1]: Reloading finished in 432 ms. Feb 13 15:44:58.750721 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:44:58.754862 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:44:58.755140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:44:58.755177 systemd[1]: kubelet.service: Consumed 153ms CPU time, 91.9M memory peak. Feb 13 15:44:58.756641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:44:58.913868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:44:58.917661 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:44:58.950800 kubelet[2270]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:44:58.950800 kubelet[2270]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:44:58.950800 kubelet[2270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
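The restarted kubelet immediately warns about three deprecated command-line flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) that it wants moved into the config file. A small sketch that scans a command line for those flag names; the flag list is taken from the warnings above, while the example command line is hypothetical:

```python
"""Sketch only: report which of the deprecated flags warned about above
appear in a given kubelet command line."""
import shlex

DEPRECATED = {"--container-runtime-endpoint", "--pod-infra-container-image", "--volume-plugin-dir"}

def deprecated_flags(cmdline: str) -> set[str]:
    present = set()
    for token in shlex.split(cmdline):
        name = token.split("=", 1)[0]
        if name in DEPRECATED:
            present.add(name)
    return present

# Hypothetical command line, just to exercise the helper.
example = "/usr/bin/kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock --v=2"
print(deprecated_flags(example))  # {'--container-runtime-endpoint'}
```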
Feb 13 15:44:58.951200 kubelet[2270]: I0213 15:44:58.950858 2270 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:44:59.159653 kubelet[2270]: I0213 15:44:59.159603 2270 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:44:59.159653 kubelet[2270]: I0213 15:44:59.159634 2270 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:44:59.161346 kubelet[2270]: I0213 15:44:59.160508 2270 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:44:59.184015 kubelet[2270]: E0213 15:44:59.183899 2270 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:44:59.185062 kubelet[2270]: I0213 15:44:59.185033 2270 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:44:59.192106 kubelet[2270]: E0213 15:44:59.192077 2270 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:44:59.192106 kubelet[2270]: I0213 15:44:59.192102 2270 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:44:59.197390 kubelet[2270]: I0213 15:44:59.197358 2270 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:44:59.198427 kubelet[2270]: I0213 15:44:59.198393 2270 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:44:59.198580 kubelet[2270]: I0213 15:44:59.198435 2270 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:44:59.198580 kubelet[2270]: I0213 15:44:59.198575 2270 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:44:59.198695 kubelet[2270]: I0213 15:44:59.198584 2270 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:44:59.198720 kubelet[2270]: I0213 15:44:59.198700 2270 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:44:59.201423 kubelet[2270]: I0213 15:44:59.201403 2270 kubelet.go:446] "Attempting to sync node with API server" Feb 13 15:44:59.201423 kubelet[2270]: I0213 15:44:59.201421 2270 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:44:59.201490 kubelet[2270]: I0213 15:44:59.201436 2270 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:44:59.201490 kubelet[2270]: I0213 15:44:59.201445 2270 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:44:59.207103 kubelet[2270]: W0213 15:44:59.206534 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:44:59.207103 kubelet[2270]: E0213 15:44:59.206584 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:44:59.207103 kubelet[2270]: I0213 15:44:59.206663 2270 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:44:59.207103 kubelet[2270]: I0213 15:44:59.207033 2270 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:44:59.207623 kubelet[2270]: W0213 15:44:59.207604 2270 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:44:59.208214 kubelet[2270]: W0213 15:44:59.208144 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:44:59.208263 kubelet[2270]: E0213 15:44:59.208229 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:44:59.209549 kubelet[2270]: I0213 15:44:59.209519 2270 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:44:59.209593 kubelet[2270]: I0213 15:44:59.209555 2270 server.go:1287] "Started kubelet" Feb 13 15:44:59.212920 kubelet[2270]: I0213 15:44:59.212882 2270 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:44:59.213477 kubelet[2270]: I0213 15:44:59.213421 2270 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:44:59.214113 kubelet[2270]: I0213 15:44:59.213738 2270 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:44:59.214113 kubelet[2270]: I0213 15:44:59.213810 2270 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:44:59.215358 kubelet[2270]: E0213 15:44:59.214422 2270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cf0478da5c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:44:59.20953869 +0000 UTC m=+0.288376677,LastTimestamp:2025-02-13 15:44:59.20953869 +0000 UTC m=+0.288376677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:44:59.215529 kubelet[2270]: I0213 15:44:59.215509 2270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:44:59.215676 kubelet[2270]: E0213 15:44:59.215657 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:44:59.215713 kubelet[2270]: I0213 15:44:59.215685 2270 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:44:59.215737 kubelet[2270]: I0213 15:44:59.215708 2270 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:44:59.215879 kubelet[2270]: 
I0213 15:44:59.215840 2270 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:44:59.215945 kubelet[2270]: I0213 15:44:59.215897 2270 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:44:59.216501 kubelet[2270]: E0213 15:44:59.216109 2270 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:44:59.216501 kubelet[2270]: W0213 15:44:59.216141 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:44:59.216501 kubelet[2270]: E0213 15:44:59.216171 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:44:59.216501 kubelet[2270]: E0213 15:44:59.216304 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Feb 13 15:44:59.216895 kubelet[2270]: I0213 15:44:59.216872 2270 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:44:59.217243 kubelet[2270]: I0213 15:44:59.217220 2270 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:44:59.218144 kubelet[2270]: I0213 15:44:59.218124 2270 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:44:59.230872 kubelet[2270]: I0213 15:44:59.230818 2270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:44:59.232106 kubelet[2270]: I0213 15:44:59.232076 2270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:44:59.232106 kubelet[2270]: I0213 15:44:59.232102 2270 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:44:59.232174 kubelet[2270]: I0213 15:44:59.232124 2270 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
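The nodeConfig blob printed by the container manager above is plain JSON, so its eviction settings can be read back mechanically. A minimal sketch that extracts the hard eviction thresholds; the snippet below is an abridged copy of that blob containing only the fields used here:

```python
"""Pull the hard eviction thresholds out of the nodeConfig logged above."""
import json

node_config = json.loads("""
{
  "NodeName": "localhost",
  "CgroupDriver": "systemd",
  "KubeletRootDir": "/var/lib/kubelet",
  "HardEvictionThresholds": [
    {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}},
    {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}}
  ]
}
""")

for t in node_config["HardEvictionThresholds"]:
    value = t["Value"]
    limit = value["Quantity"] if value["Quantity"] is not None else f"{value['Percentage']:.0%}"
    print(t["Signal"], t["Operator"], limit)
```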
Feb 13 15:44:59.232174 kubelet[2270]: I0213 15:44:59.232132 2270 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:44:59.232233 kubelet[2270]: E0213 15:44:59.232172 2270 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:44:59.236211 kubelet[2270]: W0213 15:44:59.236163 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:44:59.236263 kubelet[2270]: E0213 15:44:59.236209 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:44:59.236897 kubelet[2270]: I0213 15:44:59.236879 2270 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:44:59.236897 kubelet[2270]: I0213 15:44:59.236895 2270 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:44:59.236965 kubelet[2270]: I0213 15:44:59.236912 2270 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:44:59.316244 kubelet[2270]: E0213 15:44:59.316192 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:44:59.332561 kubelet[2270]: E0213 15:44:59.332540 2270 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:44:59.416984 kubelet[2270]: E0213 15:44:59.416910 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:44:59.417299 kubelet[2270]: E0213 15:44:59.417258 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Feb 13 15:44:59.517544 kubelet[2270]: E0213 15:44:59.517517 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:44:59.532660 kubelet[2270]: E0213 15:44:59.532612 2270 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:44:59.614975 kubelet[2270]: I0213 15:44:59.614913 2270 policy_none.go:49] "None policy: Start" Feb 13 15:44:59.614975 kubelet[2270]: I0213 15:44:59.614975 2270 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:44:59.615072 kubelet[2270]: I0213 15:44:59.614992 2270 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:44:59.617669 kubelet[2270]: E0213 15:44:59.617627 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:44:59.621782 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:44:59.637040 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:44:59.640057 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
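Every client-go reflector above fails with "dial tcp 10.0.0.58:6443: connect: connection refused" because the kube-apiserver static pod has not started yet. A tiny sketch of the same TCP probe, using only the address shown in the log:

```python
"""Sketch only: reproduce the connection-refused probe against the apiserver
endpoint named in the records above."""
import socket

def apiserver_reachable(host: str = "10.0.0.58", port: int = 6443, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:  # ConnectionRefusedError, timeouts, unreachable networks, ...
        print(f"dial tcp {host}:{port}: {exc}")
        return False

print(apiserver_reachable())
```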
Feb 13 15:44:59.650951 kubelet[2270]: I0213 15:44:59.650911 2270 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:44:59.651294 kubelet[2270]: I0213 15:44:59.651179 2270 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:44:59.651294 kubelet[2270]: I0213 15:44:59.651195 2270 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:44:59.651601 kubelet[2270]: I0213 15:44:59.651392 2270 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:44:59.652005 kubelet[2270]: E0213 15:44:59.651985 2270 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 15:44:59.652060 kubelet[2270]: E0213 15:44:59.652042 2270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:44:59.752664 kubelet[2270]: I0213 15:44:59.752629 2270 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:44:59.753234 kubelet[2270]: E0213 15:44:59.753211 2270 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Feb 13 15:44:59.817986 kubelet[2270]: E0213 15:44:59.817855 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Feb 13 15:44:59.941462 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 15:44:59.953832 kubelet[2270]: E0213 15:44:59.953800 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:44:59.954463 kubelet[2270]: I0213 15:44:59.954437 2270 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:44:59.954791 kubelet[2270]: E0213 15:44:59.954742 2270 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Feb 13 15:44:59.956199 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. Feb 13 15:44:59.964036 kubelet[2270]: E0213 15:44:59.963997 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:44:59.966698 systemd[1]: Created slice kubepods-burstable-podd85890ddc32140095099c8534ffff634.slice - libcontainer container kubepods-burstable-podd85890ddc32140095099c8534ffff634.slice. 
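The "Failed to ensure lease exists, will retry" records show the retry interval doubling: 200ms, then 400ms, then 800ms above, and 1.6s further down the log. A sketch of that doubling; the starting value and factor are read off the log, while the cap is a placeholder assumption:

```python
"""Doubling retry interval as seen in the lease-controller records above."""
def lease_retry_intervals(start: float = 0.2, factor: float = 2.0, cap: float = 7.0):
    # cap is an assumption for illustration; the log only shows 0.2s .. 1.6s.
    interval = start
    while True:
        yield min(interval, cap)
        interval *= factor

gen = lease_retry_intervals()
print([next(gen) for _ in range(5)])  # [0.2, 0.4, 0.8, 1.6, 3.2]
```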
Feb 13 15:44:59.968164 kubelet[2270]: E0213 15:44:59.968135 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:00.020563 kubelet[2270]: I0213 15:45:00.020519 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d85890ddc32140095099c8534ffff634-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d85890ddc32140095099c8534ffff634\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:00.020563 kubelet[2270]: I0213 15:45:00.020552 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d85890ddc32140095099c8534ffff634-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d85890ddc32140095099c8534ffff634\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:00.020661 kubelet[2270]: I0213 15:45:00.020574 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d85890ddc32140095099c8534ffff634-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d85890ddc32140095099c8534ffff634\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:00.020661 kubelet[2270]: I0213 15:45:00.020604 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:00.020661 kubelet[2270]: I0213 15:45:00.020620 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:00.020661 kubelet[2270]: I0213 15:45:00.020635 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:00.020661 kubelet[2270]: I0213 15:45:00.020650 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:45:00.020776 kubelet[2270]: I0213 15:45:00.020664 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:00.020776 kubelet[2270]: I0213 15:45:00.020677 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:00.208113 kubelet[2270]: W0213 15:45:00.207981 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:45:00.208113 kubelet[2270]: E0213 15:45:00.208044 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:45:00.241300 kubelet[2270]: W0213 15:45:00.241256 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:45:00.241300 kubelet[2270]: E0213 15:45:00.241298 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:45:00.254744 kubelet[2270]: E0213 15:45:00.254720 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:00.255265 containerd[1511]: time="2025-02-13T15:45:00.255214670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:00.264355 kubelet[2270]: E0213 15:45:00.264326 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:00.264597 containerd[1511]: time="2025-02-13T15:45:00.264572985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:00.268870 kubelet[2270]: E0213 15:45:00.268840 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:00.269118 containerd[1511]: time="2025-02-13T15:45:00.269095040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d85890ddc32140095099c8534ffff634,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:00.356530 kubelet[2270]: I0213 15:45:00.356507 2270 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:45:00.356790 kubelet[2270]: E0213 15:45:00.356763 2270 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Feb 13 15:45:00.416242 kubelet[2270]: W0213 
15:45:00.416183 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:45:00.416309 kubelet[2270]: E0213 15:45:00.416238 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:45:00.618919 kubelet[2270]: E0213 15:45:00.618853 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s" Feb 13 15:45:00.711818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149028913.mount: Deactivated successfully. Feb 13 15:45:00.716653 containerd[1511]: time="2025-02-13T15:45:00.716582933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:45:00.722539 containerd[1511]: time="2025-02-13T15:45:00.722486319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:45:00.723472 containerd[1511]: time="2025-02-13T15:45:00.723429287Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:45:00.724348 containerd[1511]: time="2025-02-13T15:45:00.724315875Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:45:00.725235 containerd[1511]: time="2025-02-13T15:45:00.725203145Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:45:00.726127 containerd[1511]: time="2025-02-13T15:45:00.726015811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:45:00.727047 containerd[1511]: time="2025-02-13T15:45:00.727006580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:45:00.728530 containerd[1511]: time="2025-02-13T15:45:00.728505980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:45:00.730491 containerd[1511]: time="2025-02-13T15:45:00.730466308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 465.836423ms" Feb 13 15:45:00.731122 containerd[1511]: time="2025-02-13T15:45:00.731079199Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.769245ms" Feb 13 15:45:00.733478 containerd[1511]: time="2025-02-13T15:45:00.733436882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.290724ms" Feb 13 15:45:00.805409 kubelet[2270]: W0213 15:45:00.805161 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Feb 13 15:45:00.805409 kubelet[2270]: E0213 15:45:00.805228 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:45:00.873899 containerd[1511]: time="2025-02-13T15:45:00.873698129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:00.874361 containerd[1511]: time="2025-02-13T15:45:00.874320719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:00.874427 containerd[1511]: time="2025-02-13T15:45:00.874374041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:00.874522 containerd[1511]: time="2025-02-13T15:45:00.874490355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:00.875881 containerd[1511]: time="2025-02-13T15:45:00.875806472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:00.875881 containerd[1511]: time="2025-02-13T15:45:00.873596464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:00.875881 containerd[1511]: time="2025-02-13T15:45:00.875867820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:00.876010 containerd[1511]: time="2025-02-13T15:45:00.875883240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:00.877724 containerd[1511]: time="2025-02-13T15:45:00.877059417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:00.877910 containerd[1511]: time="2025-02-13T15:45:00.877846263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:00.878075 containerd[1511]: time="2025-02-13T15:45:00.878009277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:00.878977 containerd[1511]: time="2025-02-13T15:45:00.878288085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:00.901080 systemd[1]: Started cri-containerd-0f86334e1cb5c6cdbfb07dc2bd1d7c642b691221835e42afc6eb9e276c41ebbe.scope - libcontainer container 0f86334e1cb5c6cdbfb07dc2bd1d7c642b691221835e42afc6eb9e276c41ebbe. Feb 13 15:45:00.902661 systemd[1]: Started cri-containerd-c913bb87163d0fda3ad1ec19976557bfd9e580b1b9333f8eb17a8240d50ffe34.scope - libcontainer container c913bb87163d0fda3ad1ec19976557bfd9e580b1b9333f8eb17a8240d50ffe34. Feb 13 15:45:00.904296 systemd[1]: Started cri-containerd-da3c70a79b488082bc3417a171878ba19d8bbb4279f99d05f4809af51fed0d8c.scope - libcontainer container da3c70a79b488082bc3417a171878ba19d8bbb4279f99d05f4809af51fed0d8c. Feb 13 15:45:00.938961 containerd[1511]: time="2025-02-13T15:45:00.938853951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f86334e1cb5c6cdbfb07dc2bd1d7c642b691221835e42afc6eb9e276c41ebbe\"" Feb 13 15:45:00.940046 kubelet[2270]: E0213 15:45:00.940014 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:00.943816 containerd[1511]: time="2025-02-13T15:45:00.943752772Z" level=info msg="CreateContainer within sandbox \"0f86334e1cb5c6cdbfb07dc2bd1d7c642b691221835e42afc6eb9e276c41ebbe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:45:00.946731 containerd[1511]: time="2025-02-13T15:45:00.946646328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d85890ddc32140095099c8534ffff634,Namespace:kube-system,Attempt:0,} returns sandbox id \"da3c70a79b488082bc3417a171878ba19d8bbb4279f99d05f4809af51fed0d8c\"" Feb 13 15:45:00.947150 kubelet[2270]: E0213 15:45:00.947132 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:00.948518 containerd[1511]: time="2025-02-13T15:45:00.948484350Z" level=info msg="CreateContainer within sandbox \"da3c70a79b488082bc3417a171878ba19d8bbb4279f99d05f4809af51fed0d8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:45:00.951236 containerd[1511]: time="2025-02-13T15:45:00.951201335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"c913bb87163d0fda3ad1ec19976557bfd9e580b1b9333f8eb17a8240d50ffe34\"" Feb 13 15:45:00.951681 kubelet[2270]: E0213 15:45:00.951656 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:00.952800 containerd[1511]: time="2025-02-13T15:45:00.952739380Z" level=info msg="CreateContainer within sandbox \"c913bb87163d0fda3ad1ec19976557bfd9e580b1b9333f8eb17a8240d50ffe34\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:45:00.968704 containerd[1511]: time="2025-02-13T15:45:00.968662014Z" level=info msg="CreateContainer within sandbox \"0f86334e1cb5c6cdbfb07dc2bd1d7c642b691221835e42afc6eb9e276c41ebbe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce0c13dab5d538100247bba297197b962e3b4f329a81329ea927c8571cbfa407\"" Feb 13 15:45:00.969099 containerd[1511]: time="2025-02-13T15:45:00.969074729Z" level=info msg="StartContainer for \"ce0c13dab5d538100247bba297197b962e3b4f329a81329ea927c8571cbfa407\"" Feb 13 15:45:00.972604 containerd[1511]: time="2025-02-13T15:45:00.972572490Z" level=info msg="CreateContainer within sandbox \"da3c70a79b488082bc3417a171878ba19d8bbb4279f99d05f4809af51fed0d8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5133903b5bc7202e2f8578ba93753c3c51858be939b7efaa8026132fbc0e97ca\"" Feb 13 15:45:00.973051 containerd[1511]: time="2025-02-13T15:45:00.973015914Z" level=info msg="StartContainer for \"5133903b5bc7202e2f8578ba93753c3c51858be939b7efaa8026132fbc0e97ca\"" Feb 13 15:45:00.978082 containerd[1511]: time="2025-02-13T15:45:00.978037381Z" level=info msg="CreateContainer within sandbox \"c913bb87163d0fda3ad1ec19976557bfd9e580b1b9333f8eb17a8240d50ffe34\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a42edc576958dd209642c464f2865024c8869273d97eb9d7784c6aeda127b0d\"" Feb 13 15:45:00.979176 containerd[1511]: time="2025-02-13T15:45:00.978808647Z" level=info msg="StartContainer for \"2a42edc576958dd209642c464f2865024c8869273d97eb9d7784c6aeda127b0d\"" Feb 13 15:45:00.997130 systemd[1]: Started cri-containerd-ce0c13dab5d538100247bba297197b962e3b4f329a81329ea927c8571cbfa407.scope - libcontainer container ce0c13dab5d538100247bba297197b962e3b4f329a81329ea927c8571cbfa407. Feb 13 15:45:01.002709 systemd[1]: Started cri-containerd-5133903b5bc7202e2f8578ba93753c3c51858be939b7efaa8026132fbc0e97ca.scope - libcontainer container 5133903b5bc7202e2f8578ba93753c3c51858be939b7efaa8026132fbc0e97ca. Feb 13 15:45:01.006826 systemd[1]: Started cri-containerd-2a42edc576958dd209642c464f2865024c8869273d97eb9d7784c6aeda127b0d.scope - libcontainer container 2a42edc576958dd209642c464f2865024c8869273d97eb9d7784c6aeda127b0d. 
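Putting the sandbox and container records above side by side makes the control-plane bootstrap easier to follow: each static pod UID maps to one RunPodSandbox result and one CreateContainer result. All UIDs and ID prefixes below are copied from the log; only the tabulation is added here:

```python
"""Static pod -> sandbox -> container mapping, transcribed from the records above."""
static_pods = [
    # (pod, pod UID, sandbox id prefix, container id prefix)
    ("kube-controller-manager-localhost", "c72911152bbceda2f57fd8d59261e015", "0f86334e1cb5", "ce0c13dab5d5"),
    ("kube-apiserver-localhost",          "d85890ddc32140095099c8534ffff634", "da3c70a79b48", "5133903b5bc7"),
    ("kube-scheduler-localhost",          "95ef9ac46cd4dbaadc63cb713310ae59", "c913bb87163d", "2a42edc57695"),
]

for pod, uid, sandbox, container in static_pods:
    print(f"{pod}: uid={uid} sandbox={sandbox}... container={container}...")
```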
Feb 13 15:45:01.041091 containerd[1511]: time="2025-02-13T15:45:01.041058459Z" level=info msg="StartContainer for \"ce0c13dab5d538100247bba297197b962e3b4f329a81329ea927c8571cbfa407\" returns successfully" Feb 13 15:45:01.055790 containerd[1511]: time="2025-02-13T15:45:01.055744121Z" level=info msg="StartContainer for \"5133903b5bc7202e2f8578ba93753c3c51858be939b7efaa8026132fbc0e97ca\" returns successfully" Feb 13 15:45:01.062030 containerd[1511]: time="2025-02-13T15:45:01.061973305Z" level=info msg="StartContainer for \"2a42edc576958dd209642c464f2865024c8869273d97eb9d7784c6aeda127b0d\" returns successfully" Feb 13 15:45:01.158863 kubelet[2270]: I0213 15:45:01.158733 2270 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:45:01.244495 kubelet[2270]: E0213 15:45:01.244456 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:01.244654 kubelet[2270]: E0213 15:45:01.244566 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:01.246544 kubelet[2270]: E0213 15:45:01.246520 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:01.246623 kubelet[2270]: E0213 15:45:01.246603 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:01.248852 kubelet[2270]: E0213 15:45:01.248829 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:01.248969 kubelet[2270]: E0213 15:45:01.248948 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:02.227142 kubelet[2270]: E0213 15:45:02.226422 2270 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:45:02.238411 kubelet[2270]: I0213 15:45:02.238191 2270 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 15:45:02.238411 kubelet[2270]: E0213 15:45:02.238243 2270 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 15:45:02.243563 kubelet[2270]: E0213 15:45:02.243519 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.248592 kubelet[2270]: E0213 15:45:02.248566 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:02.249015 kubelet[2270]: E0213 15:45:02.248719 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:02.249200 kubelet[2270]: E0213 15:45:02.249175 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:02.249289 kubelet[2270]: E0213 
15:45:02.249280 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:02.249725 kubelet[2270]: E0213 15:45:02.249711 2270 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 15:45:02.249860 kubelet[2270]: E0213 15:45:02.249799 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:02.343915 kubelet[2270]: E0213 15:45:02.343865 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.444343 kubelet[2270]: E0213 15:45:02.444284 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.545183 kubelet[2270]: E0213 15:45:02.545135 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.645828 kubelet[2270]: E0213 15:45:02.645780 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.746231 kubelet[2270]: E0213 15:45:02.746184 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.846890 kubelet[2270]: E0213 15:45:02.846760 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:02.947355 kubelet[2270]: E0213 15:45:02.947312 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:03.048001 kubelet[2270]: E0213 15:45:03.047960 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:03.148673 kubelet[2270]: E0213 15:45:03.148552 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:03.248653 kubelet[2270]: E0213 15:45:03.248619 2270 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:03.417351 kubelet[2270]: I0213 15:45:03.417254 2270 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:03.448742 kubelet[2270]: I0213 15:45:03.448715 2270 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:03.530989 kubelet[2270]: I0213 15:45:03.530952 2270 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 15:45:04.205512 kubelet[2270]: I0213 15:45:04.205479 2270 apiserver.go:52] "Watching apiserver" Feb 13 15:45:04.207274 kubelet[2270]: E0213 15:45:04.207251 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:04.207412 kubelet[2270]: E0213 15:45:04.207395 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:04.207664 kubelet[2270]: E0213 15:45:04.207639 2270 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:04.216859 kubelet[2270]: I0213 15:45:04.216834 2270 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:45:05.887315 systemd[1]: Reload requested from client PID 2550 ('systemctl') (unit session-9.scope)... Feb 13 15:45:05.887334 systemd[1]: Reloading... Feb 13 15:45:05.965043 zram_generator::config[2597]: No configuration found. Feb 13 15:45:06.075354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:45:06.195565 systemd[1]: Reloading finished in 307 ms. Feb 13 15:45:06.223676 kubelet[2270]: I0213 15:45:06.223620 2270 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:45:06.223779 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:45:06.249203 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:45:06.249479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:45:06.249528 systemd[1]: kubelet.service: Consumed 774ms CPU time, 127.3M memory peak. Feb 13 15:45:06.258116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:45:06.426118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:45:06.431198 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:45:06.472101 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:45:06.472101 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 15:45:06.472101 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:45:06.472451 kubelet[2639]: I0213 15:45:06.472119 2639 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:45:06.477713 kubelet[2639]: I0213 15:45:06.477689 2639 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 15:45:06.477713 kubelet[2639]: I0213 15:45:06.477705 2639 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:45:06.477895 kubelet[2639]: I0213 15:45:06.477879 2639 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 15:45:06.478866 kubelet[2639]: I0213 15:45:06.478847 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 15:45:06.480638 kubelet[2639]: I0213 15:45:06.480617 2639 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:45:06.484173 kubelet[2639]: E0213 15:45:06.484141 2639 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:45:06.484173 kubelet[2639]: I0213 15:45:06.484170 2639 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:45:06.488651 kubelet[2639]: I0213 15:45:06.488630 2639 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:45:06.488891 kubelet[2639]: I0213 15:45:06.488862 2639 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:45:06.489062 kubelet[2639]: I0213 15:45:06.488887 2639 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:45:06.489156 kubelet[2639]: I0213 15:45:06.489065 2639 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:45:06.489156 kubelet[2639]: I0213 15:45:06.489073 2639 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 15:45:06.489156 kubelet[2639]: I0213 15:45:06.489108 2639 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:45:06.489268 kubelet[2639]: I0213 15:45:06.489251 2639 kubelet.go:446] "Attempting to sync node with API server" Feb 13 15:45:06.489293 kubelet[2639]: I0213 15:45:06.489266 2639 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:45:06.489293 kubelet[2639]: I0213 15:45:06.489284 2639 kubelet.go:352] "Adding apiserver pod source" Feb 13 15:45:06.489344 kubelet[2639]: I0213 15:45:06.489295 2639 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Feb 13 15:45:06.492945 kubelet[2639]: I0213 15:45:06.489727 2639 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:45:06.492945 kubelet[2639]: I0213 15:45:06.490185 2639 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:45:06.492945 kubelet[2639]: I0213 15:45:06.490629 2639 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 15:45:06.492945 kubelet[2639]: I0213 15:45:06.490657 2639 server.go:1287] "Started kubelet" Feb 13 15:45:06.495372 kubelet[2639]: I0213 15:45:06.493368 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:45:06.495372 kubelet[2639]: E0213 15:45:06.494017 2639 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:45:06.495372 kubelet[2639]: I0213 15:45:06.494201 2639 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:45:06.495372 kubelet[2639]: I0213 15:45:06.494276 2639 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:45:06.495372 kubelet[2639]: I0213 15:45:06.494669 2639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:45:06.496454 kubelet[2639]: I0213 15:45:06.496429 2639 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:45:06.500416 kubelet[2639]: I0213 15:45:06.499192 2639 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 15:45:06.500416 kubelet[2639]: E0213 15:45:06.499325 2639 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:45:06.500416 kubelet[2639]: I0213 15:45:06.499754 2639 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:45:06.500416 kubelet[2639]: I0213 15:45:06.499887 2639 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:45:06.500588 kubelet[2639]: I0213 15:45:06.500460 2639 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:45:06.500588 kubelet[2639]: I0213 15:45:06.500569 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:45:06.502234 kubelet[2639]: I0213 15:45:06.502072 2639 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:45:06.509246 kubelet[2639]: I0213 15:45:06.508976 2639 server.go:490] "Adding debug handlers to kubelet server" Feb 13 15:45:06.509825 kubelet[2639]: I0213 15:45:06.509775 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:45:06.512021 kubelet[2639]: I0213 15:45:06.511954 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:45:06.512021 kubelet[2639]: I0213 15:45:06.511984 2639 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 15:45:06.512021 kubelet[2639]: I0213 15:45:06.512009 2639 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
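The NodeConfig dump above shows the settings this kubelet is actually running with: the systemd cgroup driver on cgroups v2, the static pod path /etc/kubernetes/manifests, and the default hard-eviction thresholds. The deprecation warnings earlier in this restart say such settings belong in the file passed to --config; a hedged sketch of the matching KubeletConfiguration fragment, with the values copied from the log and the file path assumed, would look like:

    cat <<'EOF' > /etc/kubernetes/kubelet-config.yaml   # path is an assumption, not from the log
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF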
Feb 13 15:45:06.512021 kubelet[2639]: I0213 15:45:06.512019 2639 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 15:45:06.512155 kubelet[2639]: E0213 15:45:06.512070 2639 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:45:06.535759 kubelet[2639]: I0213 15:45:06.535720 2639 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 15:45:06.535759 kubelet[2639]: I0213 15:45:06.535738 2639 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 15:45:06.535759 kubelet[2639]: I0213 15:45:06.535755 2639 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:45:06.535919 kubelet[2639]: I0213 15:45:06.535897 2639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:45:06.536045 kubelet[2639]: I0213 15:45:06.535909 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:45:06.536045 kubelet[2639]: I0213 15:45:06.536037 2639 policy_none.go:49] "None policy: Start" Feb 13 15:45:06.536103 kubelet[2639]: I0213 15:45:06.536047 2639 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 15:45:06.536103 kubelet[2639]: I0213 15:45:06.536059 2639 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:45:06.536181 kubelet[2639]: I0213 15:45:06.536166 2639 state_mem.go:75] "Updated machine memory state" Feb 13 15:45:06.539886 kubelet[2639]: I0213 15:45:06.539863 2639 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:45:06.540156 kubelet[2639]: I0213 15:45:06.540138 2639 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:45:06.540213 kubelet[2639]: I0213 15:45:06.540152 2639 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:45:06.540362 kubelet[2639]: I0213 15:45:06.540308 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:45:06.540968 kubelet[2639]: E0213 15:45:06.540915 2639 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 15:45:06.613653 kubelet[2639]: I0213 15:45:06.613602 2639 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:06.613771 kubelet[2639]: I0213 15:45:06.613690 2639 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.613771 kubelet[2639]: I0213 15:45:06.613738 2639 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 15:45:06.618649 kubelet[2639]: E0213 15:45:06.618603 2639 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:45:06.618974 kubelet[2639]: E0213 15:45:06.618953 2639 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:06.619011 kubelet[2639]: E0213 15:45:06.618955 2639 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.644862 kubelet[2639]: I0213 15:45:06.644841 2639 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 15:45:06.651165 kubelet[2639]: I0213 15:45:06.651144 2639 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 15:45:06.651266 kubelet[2639]: I0213 15:45:06.651207 2639 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 15:45:06.801524 kubelet[2639]: I0213 15:45:06.801466 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.801524 kubelet[2639]: I0213 15:45:06.801496 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.801524 kubelet[2639]: I0213 15:45:06.801517 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.801524 kubelet[2639]: I0213 15:45:06.801535 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.801957 kubelet[2639]: I0213 15:45:06.801550 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:45:06.801957 kubelet[2639]: I0213 15:45:06.801565 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d85890ddc32140095099c8534ffff634-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d85890ddc32140095099c8534ffff634\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:06.801957 kubelet[2639]: I0213 15:45:06.801579 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d85890ddc32140095099c8534ffff634-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d85890ddc32140095099c8534ffff634\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:06.801957 kubelet[2639]: I0213 15:45:06.801594 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:45:06.801957 kubelet[2639]: I0213 15:45:06.801608 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d85890ddc32140095099c8534ffff634-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d85890ddc32140095099c8534ffff634\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:06.862548 sudo[2676]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:45:06.863048 sudo[2676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:45:06.919762 kubelet[2639]: E0213 15:45:06.919655 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:06.919762 kubelet[2639]: E0213 15:45:06.919720 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:06.919918 kubelet[2639]: E0213 15:45:06.919884 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:07.325524 sudo[2676]: pam_unix(sudo:session): session closed for user root Feb 13 15:45:07.490091 kubelet[2639]: I0213 15:45:07.490061 2639 apiserver.go:52] "Watching apiserver" Feb 13 15:45:07.500032 kubelet[2639]: I0213 15:45:07.500008 2639 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:45:07.522238 kubelet[2639]: I0213 15:45:07.521573 2639 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:07.522238 kubelet[2639]: E0213 15:45:07.521679 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:07.522696 kubelet[2639]: I0213 15:45:07.522663 2639 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" 
Feb 13 15:45:07.527216 kubelet[2639]: E0213 15:45:07.527184 2639 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:45:07.527377 kubelet[2639]: E0213 15:45:07.527331 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:07.532946 kubelet[2639]: E0213 15:45:07.529696 2639 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:45:07.532946 kubelet[2639]: E0213 15:45:07.529809 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:07.558090 kubelet[2639]: I0213 15:45:07.558028 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.557989993 podStartE2EDuration="4.557989993s" podCreationTimestamp="2025-02-13 15:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:45:07.557813115 +0000 UTC m=+1.122544887" watchObservedRunningTime="2025-02-13 15:45:07.557989993 +0000 UTC m=+1.122721765" Feb 13 15:45:07.580429 kubelet[2639]: I0213 15:45:07.580125 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.580107667 podStartE2EDuration="4.580107667s" podCreationTimestamp="2025-02-13 15:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:45:07.572779645 +0000 UTC m=+1.137511417" watchObservedRunningTime="2025-02-13 15:45:07.580107667 +0000 UTC m=+1.144839439" Feb 13 15:45:07.580429 kubelet[2639]: I0213 15:45:07.580221 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.580216505 podStartE2EDuration="4.580216505s" podCreationTimestamp="2025-02-13 15:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:45:07.57938601 +0000 UTC m=+1.144117782" watchObservedRunningTime="2025-02-13 15:45:07.580216505 +0000 UTC m=+1.144948277" Feb 13 15:45:08.523198 kubelet[2639]: E0213 15:45:08.523161 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:08.523882 kubelet[2639]: E0213 15:45:08.523866 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:08.704593 sudo[1715]: pam_unix(sudo:session): session closed for user root Feb 13 15:45:08.706403 sshd[1714]: Connection closed by 10.0.0.1 port 55218 Feb 13 15:45:08.706883 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:08.711802 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:55218.service: Deactivated successfully. Feb 13 15:45:08.713859 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 15:45:08.714094 systemd[1]: session-9.scope: Consumed 4.305s CPU time, 252.9M memory peak. Feb 13 15:45:08.715365 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:45:08.716260 systemd-logind[1500]: Removed session 9. Feb 13 15:45:09.524829 kubelet[2639]: E0213 15:45:09.524784 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:10.802476 kubelet[2639]: E0213 15:45:10.802384 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:10.945889 kubelet[2639]: I0213 15:45:10.945850 2639 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:45:10.946208 containerd[1511]: time="2025-02-13T15:45:10.946172585Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:45:10.946600 kubelet[2639]: I0213 15:45:10.946337 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:45:12.204320 kubelet[2639]: E0213 15:45:12.204250 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:12.528910 kubelet[2639]: E0213 15:45:12.528883 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.206536 systemd[1]: Created slice kubepods-besteffort-pod7c96a5e3_9305_43be_b4a9_e7d242c20457.slice - libcontainer container kubepods-besteffort-pod7c96a5e3_9305_43be_b4a9_e7d242c20457.slice. Feb 13 15:45:13.217786 update_engine[1503]: I20250213 15:45:13.217726 1503 update_attempter.cc:509] Updating boot flags... Feb 13 15:45:13.219838 systemd[1]: Created slice kubepods-burstable-pode8791479_15a6_4ce3_a79d_d262fda3b77b.slice - libcontainer container kubepods-burstable-pode8791479_15a6_4ce3_a79d_d262fda3b77b.slice. Feb 13 15:45:13.225474 systemd[1]: Created slice kubepods-besteffort-pod435b95a0_1761_444d_b40a_c01d0d80bca5.slice - libcontainer container kubepods-besteffort-pod435b95a0_1761_444d_b40a_c01d0d80bca5.slice. 
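The Created slice lines above show the cgroup layout the kubelet uses with the systemd driver: each pod gets a transient slice named after its QoS class and UID, with the dashes of the UID replaced by underscores (pod 7c96a5e3-9305-43be-b4a9-e7d242c20457 becomes kubepods-besteffort-pod7c96a5e3_9305_43be_b4a9_e7d242c20457.slice). The resulting tree can be inspected with ordinary systemd tooling, for example:

    systemctl status 'kubepods-besteffort-pod7c96a5e3_9305_43be_b4a9_e7d242c20457.slice'
    systemd-cgls --unit 'kubepods-besteffort-pod7c96a5e3_9305_43be_b4a9_e7d242c20457.slice'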
Feb 13 15:45:13.246050 kubelet[2639]: I0213 15:45:13.244439 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-xtables-lock\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246050 kubelet[2639]: I0213 15:45:13.244473 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c96a5e3-9305-43be-b4a9-e7d242c20457-xtables-lock\") pod \"kube-proxy-sg2cj\" (UID: \"7c96a5e3-9305-43be-b4a9-e7d242c20457\") " pod="kube-system/kube-proxy-sg2cj" Feb 13 15:45:13.246050 kubelet[2639]: I0213 15:45:13.244489 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-hostproc\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246050 kubelet[2639]: I0213 15:45:13.244504 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cni-path\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246050 kubelet[2639]: I0213 15:45:13.244518 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c96a5e3-9305-43be-b4a9-e7d242c20457-kube-proxy\") pod \"kube-proxy-sg2cj\" (UID: \"7c96a5e3-9305-43be-b4a9-e7d242c20457\") " pod="kube-system/kube-proxy-sg2cj" Feb 13 15:45:13.246050 kubelet[2639]: I0213 15:45:13.244531 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c96a5e3-9305-43be-b4a9-e7d242c20457-lib-modules\") pod \"kube-proxy-sg2cj\" (UID: \"7c96a5e3-9305-43be-b4a9-e7d242c20457\") " pod="kube-system/kube-proxy-sg2cj" Feb 13 15:45:13.246507 kubelet[2639]: I0213 15:45:13.244543 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-net\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246507 kubelet[2639]: I0213 15:45:13.244558 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgk2\" (UniqueName: \"kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-kube-api-access-svgk2\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246507 kubelet[2639]: I0213 15:45:13.244573 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-hubble-tls\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246507 kubelet[2639]: I0213 15:45:13.244586 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29ktn\" 
(UniqueName: \"kubernetes.io/projected/435b95a0-1761-444d-b40a-c01d0d80bca5-kube-api-access-29ktn\") pod \"cilium-operator-6c4d7847fc-vqgf2\" (UID: \"435b95a0-1761-444d-b40a-c01d0d80bca5\") " pod="kube-system/cilium-operator-6c4d7847fc-vqgf2" Feb 13 15:45:13.246507 kubelet[2639]: I0213 15:45:13.244600 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-lib-modules\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246625 kubelet[2639]: I0213 15:45:13.244613 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-config-path\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246625 kubelet[2639]: I0213 15:45:13.244639 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/435b95a0-1761-444d-b40a-c01d0d80bca5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vqgf2\" (UID: \"435b95a0-1761-444d-b40a-c01d0d80bca5\") " pod="kube-system/cilium-operator-6c4d7847fc-vqgf2" Feb 13 15:45:13.246625 kubelet[2639]: I0213 15:45:13.244652 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-etc-cni-netd\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246625 kubelet[2639]: I0213 15:45:13.244673 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-kernel\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246625 kubelet[2639]: I0213 15:45:13.244689 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrslt\" (UniqueName: \"kubernetes.io/projected/7c96a5e3-9305-43be-b4a9-e7d242c20457-kube-api-access-wrslt\") pod \"kube-proxy-sg2cj\" (UID: \"7c96a5e3-9305-43be-b4a9-e7d242c20457\") " pod="kube-system/kube-proxy-sg2cj" Feb 13 15:45:13.246739 kubelet[2639]: I0213 15:45:13.244702 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-cgroup\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246739 kubelet[2639]: I0213 15:45:13.244717 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8791479-15a6-4ce3-a79d-d262fda3b77b-clustermesh-secrets\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246739 kubelet[2639]: I0213 15:45:13.244729 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-run\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.246739 kubelet[2639]: I0213 15:45:13.244743 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-bpf-maps\") pod \"cilium-b7454\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " pod="kube-system/cilium-b7454" Feb 13 15:45:13.257954 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2727) Feb 13 15:45:13.293998 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2727) Feb 13 15:45:13.336966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2727) Feb 13 15:45:13.519085 kubelet[2639]: E0213 15:45:13.519058 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.519678 containerd[1511]: time="2025-02-13T15:45:13.519642523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sg2cj,Uid:7c96a5e3-9305-43be-b4a9-e7d242c20457,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:13.524078 kubelet[2639]: E0213 15:45:13.524059 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.524679 containerd[1511]: time="2025-02-13T15:45:13.524653130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7454,Uid:e8791479-15a6-4ce3-a79d-d262fda3b77b,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:13.527499 kubelet[2639]: E0213 15:45:13.527482 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.527830 containerd[1511]: time="2025-02-13T15:45:13.527686988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vqgf2,Uid:435b95a0-1761-444d-b40a-c01d0d80bca5,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:13.545412 containerd[1511]: time="2025-02-13T15:45:13.545325898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:13.545412 containerd[1511]: time="2025-02-13T15:45:13.545372857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:13.545412 containerd[1511]: time="2025-02-13T15:45:13.545383216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:13.545571 containerd[1511]: time="2025-02-13T15:45:13.545454602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:13.567126 systemd[1]: Started cri-containerd-be5dc0c100d43e06dc697b2202c586c5cd0ae71847db8a80b55c30be6472e65f.scope - libcontainer container be5dc0c100d43e06dc697b2202c586c5cd0ae71847db8a80b55c30be6472e65f. Feb 13 15:45:13.580488 containerd[1511]: time="2025-02-13T15:45:13.579233467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:13.580488 containerd[1511]: time="2025-02-13T15:45:13.579305093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:13.580488 containerd[1511]: time="2025-02-13T15:45:13.579314591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:13.580488 containerd[1511]: time="2025-02-13T15:45:13.579380407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:13.582113 containerd[1511]: time="2025-02-13T15:45:13.581821318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:13.582113 containerd[1511]: time="2025-02-13T15:45:13.581876292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:13.582113 containerd[1511]: time="2025-02-13T15:45:13.581890118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:13.582113 containerd[1511]: time="2025-02-13T15:45:13.581989097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:13.600183 systemd[1]: Started cri-containerd-b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20.scope - libcontainer container b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20. Feb 13 15:45:13.601804 containerd[1511]: time="2025-02-13T15:45:13.601751115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sg2cj,Uid:7c96a5e3-9305-43be-b4a9-e7d242c20457,Namespace:kube-system,Attempt:0,} returns sandbox id \"be5dc0c100d43e06dc697b2202c586c5cd0ae71847db8a80b55c30be6472e65f\"" Feb 13 15:45:13.602459 kubelet[2639]: E0213 15:45:13.602431 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.605484 systemd[1]: Started cri-containerd-d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd.scope - libcontainer container d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd. 
Feb 13 15:45:13.607475 containerd[1511]: time="2025-02-13T15:45:13.607437235Z" level=info msg="CreateContainer within sandbox \"be5dc0c100d43e06dc697b2202c586c5cd0ae71847db8a80b55c30be6472e65f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:45:13.632514 containerd[1511]: time="2025-02-13T15:45:13.632465797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7454,Uid:e8791479-15a6-4ce3-a79d-d262fda3b77b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\"" Feb 13 15:45:13.633317 kubelet[2639]: E0213 15:45:13.633279 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.634648 containerd[1511]: time="2025-02-13T15:45:13.634627148Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:45:13.645071 containerd[1511]: time="2025-02-13T15:45:13.645018896Z" level=info msg="CreateContainer within sandbox \"be5dc0c100d43e06dc697b2202c586c5cd0ae71847db8a80b55c30be6472e65f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac3b34e2fbb3da68f8838fe17df57dfc3b7802a4db77b0c5ef047d6aa6f055dc\"" Feb 13 15:45:13.645976 containerd[1511]: time="2025-02-13T15:45:13.645941397Z" level=info msg="StartContainer for \"ac3b34e2fbb3da68f8838fe17df57dfc3b7802a4db77b0c5ef047d6aa6f055dc\"" Feb 13 15:45:13.651631 containerd[1511]: time="2025-02-13T15:45:13.651599233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vqgf2,Uid:435b95a0-1761-444d-b40a-c01d0d80bca5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd\"" Feb 13 15:45:13.652649 kubelet[2639]: E0213 15:45:13.652589 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:13.678048 systemd[1]: Started cri-containerd-ac3b34e2fbb3da68f8838fe17df57dfc3b7802a4db77b0c5ef047d6aa6f055dc.scope - libcontainer container ac3b34e2fbb3da68f8838fe17df57dfc3b7802a4db77b0c5ef047d6aa6f055dc. 
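The RunPodSandbox / CreateContainer / StartContainer sequence above is the CRI exchange between the kubelet and containerd: one sandbox per pod, then containers created inside it by the returned sandbox id. With crictl installed and pointed at containerd's socket (the path below is the usual default, not taken from this log), the same objects can be listed directly:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a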
Feb 13 15:45:13.713492 containerd[1511]: time="2025-02-13T15:45:13.713441640Z" level=info msg="StartContainer for \"ac3b34e2fbb3da68f8838fe17df57dfc3b7802a4db77b0c5ef047d6aa6f055dc\" returns successfully" Feb 13 15:45:14.534417 kubelet[2639]: E0213 15:45:14.534390 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:14.544944 kubelet[2639]: I0213 15:45:14.544872 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sg2cj" podStartSLOduration=2.54485832 podStartE2EDuration="2.54485832s" podCreationTimestamp="2025-02-13 15:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:45:14.544739795 +0000 UTC m=+8.109471597" watchObservedRunningTime="2025-02-13 15:45:14.54485832 +0000 UTC m=+8.109590092" Feb 13 15:45:18.428849 kubelet[2639]: E0213 15:45:18.428815 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:18.540767 kubelet[2639]: E0213 15:45:18.540722 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:20.132784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171913228.mount: Deactivated successfully. Feb 13 15:45:20.865966 kubelet[2639]: E0213 15:45:20.865898 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:23.251476 containerd[1511]: time="2025-02-13T15:45:23.251409283Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:45:23.253784 containerd[1511]: time="2025-02-13T15:45:23.253736273Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:45:23.255379 containerd[1511]: time="2025-02-13T15:45:23.255337535Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:45:23.256987 containerd[1511]: time="2025-02-13T15:45:23.256950128Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.62229114s" Feb 13 15:45:23.257037 containerd[1511]: time="2025-02-13T15:45:23.256988160Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:45:23.264756 containerd[1511]: time="2025-02-13T15:45:23.264697185Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:45:23.266556 containerd[1511]: time="2025-02-13T15:45:23.266494366Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:45:23.288812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135459208.mount: Deactivated successfully. Feb 13 15:45:23.293386 containerd[1511]: time="2025-02-13T15:45:23.293334819Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\"" Feb 13 15:45:23.296686 containerd[1511]: time="2025-02-13T15:45:23.296644904Z" level=info msg="StartContainer for \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\"" Feb 13 15:45:23.325206 systemd[1]: Started cri-containerd-adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a.scope - libcontainer container adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a. Feb 13 15:45:23.367602 systemd[1]: cri-containerd-adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a.scope: Deactivated successfully. Feb 13 15:45:23.390469 containerd[1511]: time="2025-02-13T15:45:23.390418056Z" level=info msg="StartContainer for \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\" returns successfully" Feb 13 15:45:23.563855 kubelet[2639]: E0213 15:45:23.563398 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:23.911250 containerd[1511]: time="2025-02-13T15:45:23.911101316Z" level=info msg="shim disconnected" id=adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a namespace=k8s.io Feb 13 15:45:23.911250 containerd[1511]: time="2025-02-13T15:45:23.911173933Z" level=warning msg="cleaning up after shim disconnected" id=adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a namespace=k8s.io Feb 13 15:45:23.911250 containerd[1511]: time="2025-02-13T15:45:23.911185445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:45:24.286037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a-rootfs.mount: Deactivated successfully. 
Feb 13 15:45:24.566683 kubelet[2639]: E0213 15:45:24.566541 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:24.569277 containerd[1511]: time="2025-02-13T15:45:24.569230411Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:45:24.650286 containerd[1511]: time="2025-02-13T15:45:24.650239169Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\"" Feb 13 15:45:24.651021 containerd[1511]: time="2025-02-13T15:45:24.650977662Z" level=info msg="StartContainer for \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\"" Feb 13 15:45:24.684084 systemd[1]: Started cri-containerd-f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f.scope - libcontainer container f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f. Feb 13 15:45:24.709687 containerd[1511]: time="2025-02-13T15:45:24.709646385Z" level=info msg="StartContainer for \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\" returns successfully" Feb 13 15:45:24.721591 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:45:24.721896 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:45:24.722474 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:45:24.728482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:45:24.728766 systemd[1]: cri-containerd-f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f.scope: Deactivated successfully. Feb 13 15:45:24.751550 containerd[1511]: time="2025-02-13T15:45:24.751489234Z" level=info msg="shim disconnected" id=f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f namespace=k8s.io Feb 13 15:45:24.751550 containerd[1511]: time="2025-02-13T15:45:24.751542975Z" level=warning msg="cleaning up after shim disconnected" id=f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f namespace=k8s.io Feb 13 15:45:24.751550 containerd[1511]: time="2025-02-13T15:45:24.751554186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:45:24.756302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:45:25.286077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f-rootfs.mount: Deactivated successfully. 
Feb 13 15:45:25.452612 containerd[1511]: time="2025-02-13T15:45:25.452557435Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:45:25.453427 containerd[1511]: time="2025-02-13T15:45:25.453383512Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:45:25.454488 containerd[1511]: time="2025-02-13T15:45:25.454465342Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:45:25.456005 containerd[1511]: time="2025-02-13T15:45:25.455952686Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.191217048s" Feb 13 15:45:25.456058 containerd[1511]: time="2025-02-13T15:45:25.456005555Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:45:25.458075 containerd[1511]: time="2025-02-13T15:45:25.458053067Z" level=info msg="CreateContainer within sandbox \"d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:45:25.471963 containerd[1511]: time="2025-02-13T15:45:25.471914020Z" level=info msg="CreateContainer within sandbox \"d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\"" Feb 13 15:45:25.472397 containerd[1511]: time="2025-02-13T15:45:25.472357435Z" level=info msg="StartContainer for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\"" Feb 13 15:45:25.501074 systemd[1]: Started cri-containerd-96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53.scope - libcontainer container 96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53. 
Feb 13 15:45:25.525654 containerd[1511]: time="2025-02-13T15:45:25.525585941Z" level=info msg="StartContainer for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" returns successfully" Feb 13 15:45:25.570834 kubelet[2639]: E0213 15:45:25.570724 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:25.572431 kubelet[2639]: E0213 15:45:25.572411 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:25.574412 containerd[1511]: time="2025-02-13T15:45:25.574358496Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:45:25.596666 containerd[1511]: time="2025-02-13T15:45:25.596621160Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\"" Feb 13 15:45:25.597385 containerd[1511]: time="2025-02-13T15:45:25.597344062Z" level=info msg="StartContainer for \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\"" Feb 13 15:45:25.639415 systemd[1]: Started cri-containerd-7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91.scope - libcontainer container 7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91. Feb 13 15:45:25.705400 systemd[1]: cri-containerd-7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91.scope: Deactivated successfully. 
Feb 13 15:45:25.911898 containerd[1511]: time="2025-02-13T15:45:25.911015943Z" level=info msg="StartContainer for \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\" returns successfully" Feb 13 15:45:25.937267 containerd[1511]: time="2025-02-13T15:45:25.937196072Z" level=info msg="shim disconnected" id=7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91 namespace=k8s.io Feb 13 15:45:25.937267 containerd[1511]: time="2025-02-13T15:45:25.937251417Z" level=warning msg="cleaning up after shim disconnected" id=7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91 namespace=k8s.io Feb 13 15:45:25.937267 containerd[1511]: time="2025-02-13T15:45:25.937260764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:45:26.578124 kubelet[2639]: E0213 15:45:26.577477 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:26.578124 kubelet[2639]: E0213 15:45:26.577572 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:26.580692 containerd[1511]: time="2025-02-13T15:45:26.580648336Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:45:26.633043 kubelet[2639]: I0213 15:45:26.632963 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vqgf2" podStartSLOduration=2.8293049420000003 podStartE2EDuration="14.632903073s" podCreationTimestamp="2025-02-13 15:45:12 +0000 UTC" firstStartedPulling="2025-02-13 15:45:13.653109639 +0000 UTC m=+7.217841411" lastFinishedPulling="2025-02-13 15:45:25.45670777 +0000 UTC m=+19.021439542" observedRunningTime="2025-02-13 15:45:25.592916565 +0000 UTC m=+19.157648337" watchObservedRunningTime="2025-02-13 15:45:26.632903073 +0000 UTC m=+20.197634865" Feb 13 15:45:26.639009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746960797.mount: Deactivated successfully. Feb 13 15:45:26.640277 containerd[1511]: time="2025-02-13T15:45:26.640231635Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\"" Feb 13 15:45:26.642223 containerd[1511]: time="2025-02-13T15:45:26.641443048Z" level=info msg="StartContainer for \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\"" Feb 13 15:45:26.681094 systemd[1]: Started cri-containerd-80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5.scope - libcontainer container 80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5. Feb 13 15:45:26.706592 systemd[1]: cri-containerd-80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5.scope: Deactivated successfully. 
Feb 13 15:45:26.711870 containerd[1511]: time="2025-02-13T15:45:26.711822945Z" level=info msg="StartContainer for \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\" returns successfully" Feb 13 15:45:26.735597 containerd[1511]: time="2025-02-13T15:45:26.735531810Z" level=info msg="shim disconnected" id=80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5 namespace=k8s.io Feb 13 15:45:26.735597 containerd[1511]: time="2025-02-13T15:45:26.735593667Z" level=warning msg="cleaning up after shim disconnected" id=80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5 namespace=k8s.io Feb 13 15:45:26.735597 containerd[1511]: time="2025-02-13T15:45:26.735604687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:45:27.285722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5-rootfs.mount: Deactivated successfully. Feb 13 15:45:27.582350 kubelet[2639]: E0213 15:45:27.582027 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:27.584087 containerd[1511]: time="2025-02-13T15:45:27.584053429Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:45:27.727314 containerd[1511]: time="2025-02-13T15:45:27.727274695Z" level=info msg="CreateContainer within sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\"" Feb 13 15:45:27.727615 containerd[1511]: time="2025-02-13T15:45:27.727574480Z" level=info msg="StartContainer for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\"" Feb 13 15:45:27.775083 systemd[1]: Started cri-containerd-a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215.scope - libcontainer container a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215. Feb 13 15:45:27.848670 containerd[1511]: time="2025-02-13T15:45:27.848547528Z" level=info msg="StartContainer for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" returns successfully" Feb 13 15:45:27.933373 kubelet[2639]: I0213 15:45:27.933331 2639 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 15:45:28.132239 systemd[1]: Created slice kubepods-burstable-podd571c51c_52d9_41be_ad07_69afc93857ba.slice - libcontainer container kubepods-burstable-podd571c51c_52d9_41be_ad07_69afc93857ba.slice. Feb 13 15:45:28.162275 systemd[1]: Created slice kubepods-burstable-pod61b9afbf_de34_4b70_ae46_53cabd24e26a.slice - libcontainer container kubepods-burstable-pod61b9afbf_de34_4b70_ae46_53cabd24e26a.slice. 
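The containers created in sequence above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally cilium-agent) are cilium-b7454's init containers followed by its main container, which is why each short-lived scope is deactivated and its shim disconnects right after "StartContainer ... returns successfully". With a kubeconfig for this cluster, the same ordering can be read back from the pod spec:

    kubectl -n kube-system get pod cilium-b7454 \
      -o jsonpath='{.spec.initContainers[*].name}{"\n"}{.spec.containers[*].name}{"\n"}'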
Feb 13 15:45:28.252100 kubelet[2639]: I0213 15:45:28.252053 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61b9afbf-de34-4b70-ae46-53cabd24e26a-config-volume\") pod \"coredns-668d6bf9bc-rq8ml\" (UID: \"61b9afbf-de34-4b70-ae46-53cabd24e26a\") " pod="kube-system/coredns-668d6bf9bc-rq8ml" Feb 13 15:45:28.252100 kubelet[2639]: I0213 15:45:28.252101 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmz4\" (UniqueName: \"kubernetes.io/projected/d571c51c-52d9-41be-ad07-69afc93857ba-kube-api-access-zdmz4\") pod \"coredns-668d6bf9bc-bbvsv\" (UID: \"d571c51c-52d9-41be-ad07-69afc93857ba\") " pod="kube-system/coredns-668d6bf9bc-bbvsv" Feb 13 15:45:28.252379 kubelet[2639]: I0213 15:45:28.252126 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d571c51c-52d9-41be-ad07-69afc93857ba-config-volume\") pod \"coredns-668d6bf9bc-bbvsv\" (UID: \"d571c51c-52d9-41be-ad07-69afc93857ba\") " pod="kube-system/coredns-668d6bf9bc-bbvsv" Feb 13 15:45:28.352920 kubelet[2639]: I0213 15:45:28.352747 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqcwr\" (UniqueName: \"kubernetes.io/projected/61b9afbf-de34-4b70-ae46-53cabd24e26a-kube-api-access-zqcwr\") pod \"coredns-668d6bf9bc-rq8ml\" (UID: \"61b9afbf-de34-4b70-ae46-53cabd24e26a\") " pod="kube-system/coredns-668d6bf9bc-rq8ml" Feb 13 15:45:28.437493 kubelet[2639]: E0213 15:45:28.435552 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:28.437612 containerd[1511]: time="2025-02-13T15:45:28.436490325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbvsv,Uid:d571c51c-52d9-41be-ad07-69afc93857ba,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:28.598614 kubelet[2639]: E0213 15:45:28.598562 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:28.765536 kubelet[2639]: E0213 15:45:28.765494 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:28.766157 containerd[1511]: time="2025-02-13T15:45:28.766116623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rq8ml,Uid:61b9afbf-de34-4b70-ae46-53cabd24e26a,Namespace:kube-system,Attempt:0,}" Feb 13 15:45:28.783564 kubelet[2639]: I0213 15:45:28.783452 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b7454" podStartSLOduration=7.153027634 podStartE2EDuration="16.783435815s" podCreationTimestamp="2025-02-13 15:45:12 +0000 UTC" firstStartedPulling="2025-02-13 15:45:13.63410133 +0000 UTC m=+7.198833102" lastFinishedPulling="2025-02-13 15:45:23.264509491 +0000 UTC m=+16.829241283" observedRunningTime="2025-02-13 15:45:28.783299587 +0000 UTC m=+22.348031359" watchObservedRunningTime="2025-02-13 15:45:28.783435815 +0000 UTC m=+22.348167577" Feb 13 15:45:29.614300 kubelet[2639]: E0213 15:45:29.614271 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:29.987443 systemd-networkd[1463]: cilium_host: Link UP Feb 13 15:45:29.987651 systemd-networkd[1463]: cilium_net: Link UP Feb 13 15:45:29.987871 systemd-networkd[1463]: cilium_net: Gained carrier Feb 13 15:45:29.988125 systemd-networkd[1463]: cilium_host: Gained carrier Feb 13 15:45:30.073119 systemd-networkd[1463]: cilium_host: Gained IPv6LL Feb 13 15:45:30.106407 systemd-networkd[1463]: cilium_vxlan: Link UP Feb 13 15:45:30.106422 systemd-networkd[1463]: cilium_vxlan: Gained carrier Feb 13 15:45:30.314966 kernel: NET: Registered PF_ALG protocol family Feb 13 15:45:30.608713 kubelet[2639]: E0213 15:45:30.608665 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:30.873112 systemd-networkd[1463]: cilium_net: Gained IPv6LL Feb 13 15:45:30.992074 systemd-networkd[1463]: lxc_health: Link UP Feb 13 15:45:31.001406 systemd-networkd[1463]: lxc_health: Gained carrier Feb 13 15:45:31.230357 systemd-networkd[1463]: lxcc3783fa4546c: Link UP Feb 13 15:45:31.232004 kernel: eth0: renamed from tmpe4edd Feb 13 15:45:31.240954 kernel: eth0: renamed from tmp9c3b7 Feb 13 15:45:31.251137 systemd-networkd[1463]: lxcff3fe981c8fa: Link UP Feb 13 15:45:31.251464 systemd-networkd[1463]: lxcc3783fa4546c: Gained carrier Feb 13 15:45:31.251840 systemd-networkd[1463]: lxcff3fe981c8fa: Gained carrier Feb 13 15:45:31.611503 kubelet[2639]: E0213 15:45:31.611450 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:31.833125 systemd-networkd[1463]: cilium_vxlan: Gained IPv6LL Feb 13 15:45:32.281130 systemd-networkd[1463]: lxc_health: Gained IPv6LL Feb 13 15:45:32.473224 systemd-networkd[1463]: lxcff3fe981c8fa: Gained IPv6LL Feb 13 15:45:32.614017 kubelet[2639]: E0213 15:45:32.613873 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:32.921168 systemd-networkd[1463]: lxcc3783fa4546c: Gained IPv6LL Feb 13 15:45:33.615512 kubelet[2639]: E0213 15:45:33.615478 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:34.752770 containerd[1511]: time="2025-02-13T15:45:34.752661442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:34.752770 containerd[1511]: time="2025-02-13T15:45:34.752721635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:34.752770 containerd[1511]: time="2025-02-13T15:45:34.752731443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:34.753248 containerd[1511]: time="2025-02-13T15:45:34.752815362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:34.779200 systemd[1]: Started cri-containerd-9c3b7b402b73ae5d60742a4e81c319db62ed7632a7f1d05fb12f91e7892ea410.scope - libcontainer container 9c3b7b402b73ae5d60742a4e81c319db62ed7632a7f1d05fb12f91e7892ea410. Feb 13 15:45:34.791610 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:45:34.812413 containerd[1511]: time="2025-02-13T15:45:34.811778169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:45:34.812510 containerd[1511]: time="2025-02-13T15:45:34.812451897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:45:34.812510 containerd[1511]: time="2025-02-13T15:45:34.812483496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:34.812997 containerd[1511]: time="2025-02-13T15:45:34.812585538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:45:34.817888 containerd[1511]: time="2025-02-13T15:45:34.817858866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbvsv,Uid:d571c51c-52d9-41be-ad07-69afc93857ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c3b7b402b73ae5d60742a4e81c319db62ed7632a7f1d05fb12f91e7892ea410\"" Feb 13 15:45:34.818700 kubelet[2639]: E0213 15:45:34.818673 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:34.821030 containerd[1511]: time="2025-02-13T15:45:34.820985868Z" level=info msg="CreateContainer within sandbox \"9c3b7b402b73ae5d60742a4e81c319db62ed7632a7f1d05fb12f91e7892ea410\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:45:34.843187 systemd[1]: Started cri-containerd-e4eddd086b809cdb082c7df87b48ae023da1a0631ea7f9aa90df10aa035b715a.scope - libcontainer container e4eddd086b809cdb082c7df87b48ae023da1a0631ea7f9aa90df10aa035b715a. 
Feb 13 15:45:34.856473 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:45:34.881817 containerd[1511]: time="2025-02-13T15:45:34.881761815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rq8ml,Uid:61b9afbf-de34-4b70-ae46-53cabd24e26a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4eddd086b809cdb082c7df87b48ae023da1a0631ea7f9aa90df10aa035b715a\"" Feb 13 15:45:34.882875 kubelet[2639]: E0213 15:45:34.882504 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:34.884767 containerd[1511]: time="2025-02-13T15:45:34.884726360Z" level=info msg="CreateContainer within sandbox \"e4eddd086b809cdb082c7df87b48ae023da1a0631ea7f9aa90df10aa035b715a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:45:34.916173 containerd[1511]: time="2025-02-13T15:45:34.916114275Z" level=info msg="CreateContainer within sandbox \"9c3b7b402b73ae5d60742a4e81c319db62ed7632a7f1d05fb12f91e7892ea410\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d72d8bf6ed145e2dfed4b86346f9e5569b53941ae1b8530290927e144f2a066e\"" Feb 13 15:45:34.916703 containerd[1511]: time="2025-02-13T15:45:34.916609888Z" level=info msg="StartContainer for \"d72d8bf6ed145e2dfed4b86346f9e5569b53941ae1b8530290927e144f2a066e\"" Feb 13 15:45:34.924839 containerd[1511]: time="2025-02-13T15:45:34.924712797Z" level=info msg="CreateContainer within sandbox \"e4eddd086b809cdb082c7df87b48ae023da1a0631ea7f9aa90df10aa035b715a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0baf0e4d2ecbb8b28c691d367894be4561b7047906cccaf21f808cd06aa3a9bf\"" Feb 13 15:45:34.925211 containerd[1511]: time="2025-02-13T15:45:34.925142605Z" level=info msg="StartContainer for \"0baf0e4d2ecbb8b28c691d367894be4561b7047906cccaf21f808cd06aa3a9bf\"" Feb 13 15:45:34.946177 systemd[1]: Started cri-containerd-d72d8bf6ed145e2dfed4b86346f9e5569b53941ae1b8530290927e144f2a066e.scope - libcontainer container d72d8bf6ed145e2dfed4b86346f9e5569b53941ae1b8530290927e144f2a066e. Feb 13 15:45:34.949822 systemd[1]: Started cri-containerd-0baf0e4d2ecbb8b28c691d367894be4561b7047906cccaf21f808cd06aa3a9bf.scope - libcontainer container 0baf0e4d2ecbb8b28c691d367894be4561b7047906cccaf21f808cd06aa3a9bf. 
Feb 13 15:45:34.980326 containerd[1511]: time="2025-02-13T15:45:34.980282713Z" level=info msg="StartContainer for \"d72d8bf6ed145e2dfed4b86346f9e5569b53941ae1b8530290927e144f2a066e\" returns successfully" Feb 13 15:45:34.980557 containerd[1511]: time="2025-02-13T15:45:34.980345361Z" level=info msg="StartContainer for \"0baf0e4d2ecbb8b28c691d367894be4561b7047906cccaf21f808cd06aa3a9bf\" returns successfully" Feb 13 15:45:35.622129 kubelet[2639]: E0213 15:45:35.621706 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:35.623834 kubelet[2639]: E0213 15:45:35.623778 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:35.632079 kubelet[2639]: I0213 15:45:35.631950 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rq8ml" podStartSLOduration=23.631911201 podStartE2EDuration="23.631911201s" podCreationTimestamp="2025-02-13 15:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:45:35.631422942 +0000 UTC m=+29.196154725" watchObservedRunningTime="2025-02-13 15:45:35.631911201 +0000 UTC m=+29.196642973" Feb 13 15:45:35.654152 kubelet[2639]: I0213 15:45:35.654077 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bbvsv" podStartSLOduration=23.654054951 podStartE2EDuration="23.654054951s" podCreationTimestamp="2025-02-13 15:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:45:35.642576096 +0000 UTC m=+29.207307868" watchObservedRunningTime="2025-02-13 15:45:35.654054951 +0000 UTC m=+29.218786723" Feb 13 15:45:36.625106 kubelet[2639]: E0213 15:45:36.625075 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:36.625500 kubelet[2639]: E0213 15:45:36.625116 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:36.832656 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:57672.service - OpenSSH per-connection server daemon (10.0.0.1:57672). Feb 13 15:45:36.879495 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 57672 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:36.881646 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:36.886746 systemd-logind[1500]: New session 10 of user core. Feb 13 15:45:36.896080 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:45:37.031007 sshd[4047]: Connection closed by 10.0.0.1 port 57672 Feb 13 15:45:37.031380 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:37.035236 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:57672.service: Deactivated successfully. Feb 13 15:45:37.037454 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:45:37.038374 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. 
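The pod_startup_latency_tracker entries above report two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the two coredns pods show identical values because their pull window is zero. The short Python check below reproduces the figures logged earlier for kube-system/cilium-b7454; the subtraction rule is inferred from these numbers rather than quoted from kubelet internals.

    # Reproduce the durations logged for pod kube-system/cilium-b7454.
    pod_created      = 12.000000000   # 15:45:12, podCreationTimestamp (seconds within 15:45)
    observed_running = 28.783435815   # 15:45:28.783435815, observedRunningTime
    # Monotonic offsets (the "m=+..." values) bounding the image pull
    first_started_pulling = 7.198833102
    last_finished_pulling = 16.829241283

    e2e  = observed_running - pod_created                    # 16.783435815 s
    pull = last_finished_pulling - first_started_pulling      # 9.630408181 s
    slo  = e2e - pull                                         # 7.153027634 s
    print(f"podStartE2EDuration={e2e:.9f}s podStartSLOduration={slo:.9f}s")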
Feb 13 15:45:37.039789 systemd-logind[1500]: Removed session 10. Feb 13 15:45:37.627328 kubelet[2639]: E0213 15:45:37.627301 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:37.627760 kubelet[2639]: E0213 15:45:37.627537 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:45:42.043041 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:57800.service - OpenSSH per-connection server daemon (10.0.0.1:57800). Feb 13 15:45:42.079382 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 57800 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:42.080653 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:42.084538 systemd-logind[1500]: New session 11 of user core. Feb 13 15:45:42.094043 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:45:42.205705 sshd[4063]: Connection closed by 10.0.0.1 port 57800 Feb 13 15:45:42.206070 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:42.209468 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:57800.service: Deactivated successfully. Feb 13 15:45:42.211435 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:45:42.212155 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:45:42.212994 systemd-logind[1500]: Removed session 11. Feb 13 15:45:47.221949 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:57808.service - OpenSSH per-connection server daemon (10.0.0.1:57808). Feb 13 15:45:47.259801 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 57808 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:47.261444 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:47.266059 systemd-logind[1500]: New session 12 of user core. Feb 13 15:45:47.276049 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:45:47.381522 sshd[4083]: Connection closed by 10.0.0.1 port 57808 Feb 13 15:45:47.381850 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:47.386259 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:57808.service: Deactivated successfully. Feb 13 15:45:47.388242 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:45:47.389153 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:45:47.390045 systemd-logind[1500]: Removed session 12. Feb 13 15:45:52.393441 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Feb 13 15:45:52.432890 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:52.434486 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:52.438711 systemd-logind[1500]: New session 13 of user core. Feb 13 15:45:52.453165 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 15:45:52.567777 sshd[4100]: Connection closed by 10.0.0.1 port 43556 Feb 13 15:45:52.568137 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:52.580036 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:43556.service: Deactivated successfully. Feb 13 15:45:52.581982 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:45:52.583307 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:45:52.591164 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:43564.service - OpenSSH per-connection server daemon (10.0.0.1:43564). Feb 13 15:45:52.592338 systemd-logind[1500]: Removed session 13. Feb 13 15:45:52.624130 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 43564 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:52.625623 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:52.630242 systemd-logind[1500]: New session 14 of user core. Feb 13 15:45:52.646164 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:45:52.825472 sshd[4116]: Connection closed by 10.0.0.1 port 43564 Feb 13 15:45:52.825830 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:52.836500 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:43564.service: Deactivated successfully. Feb 13 15:45:52.838344 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:45:52.839727 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:45:52.852146 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:43576.service - OpenSSH per-connection server daemon (10.0.0.1:43576). Feb 13 15:45:52.853090 systemd-logind[1500]: Removed session 14. Feb 13 15:45:52.885677 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 43576 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:52.887101 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:52.891063 systemd-logind[1500]: New session 15 of user core. Feb 13 15:45:52.901051 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:45:53.109741 sshd[4129]: Connection closed by 10.0.0.1 port 43576 Feb 13 15:45:53.111163 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:53.117996 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:43576.service: Deactivated successfully. Feb 13 15:45:53.122231 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:45:53.123881 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:45:53.127122 systemd-logind[1500]: Removed session 15. Feb 13 15:45:58.122780 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:43586.service - OpenSSH per-connection server daemon (10.0.0.1:43586). Feb 13 15:45:58.159082 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 43586 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:45:58.160368 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:45:58.164736 systemd-logind[1500]: New session 16 of user core. Feb 13 15:45:58.173065 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:45:58.281885 sshd[4144]: Connection closed by 10.0.0.1 port 43586 Feb 13 15:45:58.282357 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Feb 13 15:45:58.286945 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:43586.service: Deactivated successfully. Feb 13 15:45:58.289614 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:45:58.290411 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:45:58.291621 systemd-logind[1500]: Removed session 16. Feb 13 15:46:03.295746 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:41702.service - OpenSSH per-connection server daemon (10.0.0.1:41702). Feb 13 15:46:03.334545 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 41702 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:03.336346 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:03.341108 systemd-logind[1500]: New session 17 of user core. Feb 13 15:46:03.352280 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:46:03.465130 sshd[4159]: Connection closed by 10.0.0.1 port 41702 Feb 13 15:46:03.465504 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:03.470281 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:41702.service: Deactivated successfully. Feb 13 15:46:03.472726 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:46:03.473533 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:46:03.474362 systemd-logind[1500]: Removed session 17. Feb 13 15:46:08.487455 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:41714.service - OpenSSH per-connection server daemon (10.0.0.1:41714). Feb 13 15:46:08.530940 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 41714 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:08.532796 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:08.537675 systemd-logind[1500]: New session 18 of user core. Feb 13 15:46:08.547101 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:46:08.657703 sshd[4177]: Connection closed by 10.0.0.1 port 41714 Feb 13 15:46:08.658255 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:08.668316 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:41714.service: Deactivated successfully. Feb 13 15:46:08.670397 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:46:08.673095 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:46:08.681320 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:41726.service - OpenSSH per-connection server daemon (10.0.0.1:41726). Feb 13 15:46:08.682268 systemd-logind[1500]: Removed session 18. Feb 13 15:46:08.716406 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 41726 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:08.717846 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:08.722270 systemd-logind[1500]: New session 19 of user core. Feb 13 15:46:08.732086 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 15:46:08.980411 sshd[4192]: Connection closed by 10.0.0.1 port 41726 Feb 13 15:46:08.980865 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:08.995910 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:41726.service: Deactivated successfully. Feb 13 15:46:08.997825 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:46:08.999364 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:46:09.000632 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:41734.service - OpenSSH per-connection server daemon (10.0.0.1:41734). Feb 13 15:46:09.001628 systemd-logind[1500]: Removed session 19. Feb 13 15:46:09.043279 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 41734 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:09.044915 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:09.049798 systemd-logind[1500]: New session 20 of user core. Feb 13 15:46:09.058074 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:46:09.971632 sshd[4206]: Connection closed by 10.0.0.1 port 41734 Feb 13 15:46:09.972196 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:09.985259 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:41734.service: Deactivated successfully. Feb 13 15:46:09.987623 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:46:09.991131 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:46:09.996307 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:57618.service - OpenSSH per-connection server daemon (10.0.0.1:57618). Feb 13 15:46:09.998406 systemd-logind[1500]: Removed session 20. Feb 13 15:46:10.035387 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 57618 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:10.037598 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:10.042922 systemd-logind[1500]: New session 21 of user core. Feb 13 15:46:10.053130 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:46:10.307291 sshd[4227]: Connection closed by 10.0.0.1 port 57618 Feb 13 15:46:10.307714 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:10.319245 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:57618.service: Deactivated successfully. Feb 13 15:46:10.322138 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:46:10.324916 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:46:10.333407 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:57620.service - OpenSSH per-connection server daemon (10.0.0.1:57620). Feb 13 15:46:10.334677 systemd-logind[1500]: Removed session 21. Feb 13 15:46:10.367833 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 57620 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:10.369611 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:10.374215 systemd-logind[1500]: New session 22 of user core. Feb 13 15:46:10.384087 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 15:46:10.495035 sshd[4240]: Connection closed by 10.0.0.1 port 57620 Feb 13 15:46:10.495447 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:10.499218 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:57620.service: Deactivated successfully. Feb 13 15:46:10.501310 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:46:10.502108 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:46:10.503050 systemd-logind[1500]: Removed session 22. Feb 13 15:46:15.514320 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624). Feb 13 15:46:15.556128 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:15.557877 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:15.563161 systemd-logind[1500]: New session 23 of user core. Feb 13 15:46:15.570093 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:46:15.683277 sshd[4259]: Connection closed by 10.0.0.1 port 57624 Feb 13 15:46:15.683755 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:15.688656 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:57624.service: Deactivated successfully. Feb 13 15:46:15.690839 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:46:15.691878 systemd-logind[1500]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:46:15.692855 systemd-logind[1500]: Removed session 23. Feb 13 15:46:20.513680 kubelet[2639]: E0213 15:46:20.513643 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:20.700814 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:35200.service - OpenSSH per-connection server daemon (10.0.0.1:35200). Feb 13 15:46:20.736881 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 35200 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:20.738435 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:20.742741 systemd-logind[1500]: New session 24 of user core. Feb 13 15:46:20.761086 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:46:20.895601 sshd[4276]: Connection closed by 10.0.0.1 port 35200 Feb 13 15:46:20.895953 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:20.900457 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:35200.service: Deactivated successfully. Feb 13 15:46:20.903244 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:46:20.904085 systemd-logind[1500]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:46:20.904947 systemd-logind[1500]: Removed session 24. Feb 13 15:46:23.513108 kubelet[2639]: E0213 15:46:23.513052 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:25.908786 systemd[1]: Started sshd@24-10.0.0.58:22-10.0.0.1:35212.service - OpenSSH per-connection server daemon (10.0.0.1:35212). 
Feb 13 15:46:25.944646 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 35212 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:25.946003 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:25.950045 systemd-logind[1500]: New session 25 of user core. Feb 13 15:46:25.963129 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:46:26.090861 sshd[4291]: Connection closed by 10.0.0.1 port 35212 Feb 13 15:46:26.091319 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:26.094667 systemd[1]: sshd@24-10.0.0.58:22-10.0.0.1:35212.service: Deactivated successfully. Feb 13 15:46:26.096450 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:46:26.097080 systemd-logind[1500]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:46:26.097993 systemd-logind[1500]: Removed session 25. Feb 13 15:46:29.513344 kubelet[2639]: E0213 15:46:29.513299 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:31.111668 systemd[1]: Started sshd@25-10.0.0.58:22-10.0.0.1:51620.service - OpenSSH per-connection server daemon (10.0.0.1:51620). Feb 13 15:46:31.148072 sshd[4304]: Accepted publickey for core from 10.0.0.1 port 51620 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:31.149506 sshd-session[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:31.153502 systemd-logind[1500]: New session 26 of user core. Feb 13 15:46:31.170050 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:46:31.274031 sshd[4306]: Connection closed by 10.0.0.1 port 51620 Feb 13 15:46:31.274377 sshd-session[4304]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:31.292481 systemd[1]: sshd@25-10.0.0.58:22-10.0.0.1:51620.service: Deactivated successfully. Feb 13 15:46:31.294199 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:46:31.295543 systemd-logind[1500]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:46:31.301182 systemd[1]: Started sshd@26-10.0.0.58:22-10.0.0.1:51622.service - OpenSSH per-connection server daemon (10.0.0.1:51622). Feb 13 15:46:31.302352 systemd-logind[1500]: Removed session 26. Feb 13 15:46:31.333619 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 51622 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:31.335260 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:31.340607 systemd-logind[1500]: New session 27 of user core. Feb 13 15:46:31.348120 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:46:32.825999 containerd[1511]: time="2025-02-13T15:46:32.825948834Z" level=info msg="StopContainer for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" with timeout 30 (s)" Feb 13 15:46:32.826492 containerd[1511]: time="2025-02-13T15:46:32.826454717Z" level=info msg="Stop container \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" with signal terminated" Feb 13 15:46:32.837719 systemd[1]: cri-containerd-96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53.scope: Deactivated successfully. 
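The StopContainer entries that follow show the usual CRI shutdown shape: send the stop signal (logged as "terminated", i.e. SIGTERM), wait up to the stated timeout (30 s for container 96ed4533… here, 2 s for the cilium-agent container just below), and only force-kill after that grace period. A generic Python sketch of the same pattern for a local child process is given below; the demo command is hypothetical and this illustrates the general technique, not containerd's code.

    import signal
    import subprocess

    def stop_gracefully(proc: subprocess.Popen, timeout_s: float) -> int:
        """Send SIGTERM, wait up to timeout_s, then SIGKILL as a last resort."""
        proc.send_signal(signal.SIGTERM)
        try:
            return proc.wait(timeout=timeout_s)
        except subprocess.TimeoutExpired:
            proc.kill()              # grace period expired, force-stop
            return proc.wait()

    # Hypothetical demo: SIGTERM ends a plain sleep well before the 2 s grace period.
    p = subprocess.Popen(["sleep", "300"])
    print("exit status:", stop_gracefully(p, timeout_s=2.0))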
Feb 13 15:46:32.848919 systemd[1]: run-containerd-runc-k8s.io-a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215-runc.X80cWK.mount: Deactivated successfully. Feb 13 15:46:32.859555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53-rootfs.mount: Deactivated successfully. Feb 13 15:46:32.866534 containerd[1511]: time="2025-02-13T15:46:32.866483864Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:46:32.867098 containerd[1511]: time="2025-02-13T15:46:32.867064850Z" level=info msg="StopContainer for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" with timeout 2 (s)" Feb 13 15:46:32.867283 containerd[1511]: time="2025-02-13T15:46:32.867261134Z" level=info msg="Stop container \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" with signal terminated" Feb 13 15:46:32.874660 systemd-networkd[1463]: lxc_health: Link DOWN Feb 13 15:46:32.874668 systemd-networkd[1463]: lxc_health: Lost carrier Feb 13 15:46:32.880708 containerd[1511]: time="2025-02-13T15:46:32.879003243Z" level=info msg="shim disconnected" id=96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53 namespace=k8s.io Feb 13 15:46:32.880708 containerd[1511]: time="2025-02-13T15:46:32.879102482Z" level=warning msg="cleaning up after shim disconnected" id=96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53 namespace=k8s.io Feb 13 15:46:32.880708 containerd[1511]: time="2025-02-13T15:46:32.879111088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:32.895083 systemd[1]: cri-containerd-a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215.scope: Deactivated successfully. Feb 13 15:46:32.895435 systemd[1]: cri-containerd-a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215.scope: Consumed 7.071s CPU time, 126.3M memory peak, 208K read from disk, 13.3M written to disk. Feb 13 15:46:32.900590 containerd[1511]: time="2025-02-13T15:46:32.900539550Z" level=info msg="StopContainer for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" returns successfully" Feb 13 15:46:32.904624 containerd[1511]: time="2025-02-13T15:46:32.904588172Z" level=info msg="StopPodSandbox for \"d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd\"" Feb 13 15:46:32.917282 containerd[1511]: time="2025-02-13T15:46:32.904626967Z" level=info msg="Container to stop \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:46:32.919541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215-rootfs.mount: Deactivated successfully. Feb 13 15:46:32.922667 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd-shm.mount: Deactivated successfully. 
Feb 13 15:46:32.924413 containerd[1511]: time="2025-02-13T15:46:32.924354668Z" level=info msg="shim disconnected" id=a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215 namespace=k8s.io Feb 13 15:46:32.924413 containerd[1511]: time="2025-02-13T15:46:32.924404853Z" level=warning msg="cleaning up after shim disconnected" id=a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215 namespace=k8s.io Feb 13 15:46:32.924532 containerd[1511]: time="2025-02-13T15:46:32.924437486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:32.926993 systemd[1]: cri-containerd-d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd.scope: Deactivated successfully. Feb 13 15:46:32.944084 containerd[1511]: time="2025-02-13T15:46:32.944033005Z" level=info msg="StopContainer for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" returns successfully" Feb 13 15:46:32.944609 containerd[1511]: time="2025-02-13T15:46:32.944537686Z" level=info msg="StopPodSandbox for \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\"" Feb 13 15:46:32.944661 containerd[1511]: time="2025-02-13T15:46:32.944605866Z" level=info msg="Container to stop \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:46:32.944661 containerd[1511]: time="2025-02-13T15:46:32.944647436Z" level=info msg="Container to stop \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:46:32.944761 containerd[1511]: time="2025-02-13T15:46:32.944661613Z" level=info msg="Container to stop \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:46:32.944761 containerd[1511]: time="2025-02-13T15:46:32.944673746Z" level=info msg="Container to stop \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:46:32.944761 containerd[1511]: time="2025-02-13T15:46:32.944685729Z" level=info msg="Container to stop \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:46:32.951909 systemd[1]: cri-containerd-b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20.scope: Deactivated successfully. 
Feb 13 15:46:32.955209 containerd[1511]: time="2025-02-13T15:46:32.955104266Z" level=info msg="shim disconnected" id=d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd namespace=k8s.io Feb 13 15:46:32.955332 containerd[1511]: time="2025-02-13T15:46:32.955219516Z" level=warning msg="cleaning up after shim disconnected" id=d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd namespace=k8s.io Feb 13 15:46:32.955332 containerd[1511]: time="2025-02-13T15:46:32.955230256Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:32.971806 containerd[1511]: time="2025-02-13T15:46:32.971760406Z" level=info msg="TearDown network for sandbox \"d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd\" successfully" Feb 13 15:46:32.971806 containerd[1511]: time="2025-02-13T15:46:32.971798719Z" level=info msg="StopPodSandbox for \"d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd\" returns successfully" Feb 13 15:46:32.979362 containerd[1511]: time="2025-02-13T15:46:32.979299488Z" level=info msg="shim disconnected" id=b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20 namespace=k8s.io Feb 13 15:46:32.979362 containerd[1511]: time="2025-02-13T15:46:32.979360624Z" level=warning msg="cleaning up after shim disconnected" id=b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20 namespace=k8s.io Feb 13 15:46:32.979719 containerd[1511]: time="2025-02-13T15:46:32.979372097Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:33.002894 containerd[1511]: time="2025-02-13T15:46:33.002846855Z" level=info msg="TearDown network for sandbox \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" successfully" Feb 13 15:46:33.002894 containerd[1511]: time="2025-02-13T15:46:33.002881681Z" level=info msg="StopPodSandbox for \"b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20\" returns successfully" Feb 13 15:46:33.042475 kubelet[2639]: I0213 15:46:33.042429 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29ktn\" (UniqueName: \"kubernetes.io/projected/435b95a0-1761-444d-b40a-c01d0d80bca5-kube-api-access-29ktn\") pod \"435b95a0-1761-444d-b40a-c01d0d80bca5\" (UID: \"435b95a0-1761-444d-b40a-c01d0d80bca5\") " Feb 13 15:46:33.042475 kubelet[2639]: I0213 15:46:33.042479 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/435b95a0-1761-444d-b40a-c01d0d80bca5-cilium-config-path\") pod \"435b95a0-1761-444d-b40a-c01d0d80bca5\" (UID: \"435b95a0-1761-444d-b40a-c01d0d80bca5\") " Feb 13 15:46:33.045887 kubelet[2639]: I0213 15:46:33.045849 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435b95a0-1761-444d-b40a-c01d0d80bca5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "435b95a0-1761-444d-b40a-c01d0d80bca5" (UID: "435b95a0-1761-444d-b40a-c01d0d80bca5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 15:46:33.046274 kubelet[2639]: I0213 15:46:33.046215 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435b95a0-1761-444d-b40a-c01d0d80bca5-kube-api-access-29ktn" (OuterVolumeSpecName: "kube-api-access-29ktn") pod "435b95a0-1761-444d-b40a-c01d0d80bca5" (UID: "435b95a0-1761-444d-b40a-c01d0d80bca5"). InnerVolumeSpecName "kube-api-access-29ktn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:46:33.143677 kubelet[2639]: I0213 15:46:33.143515 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-bpf-maps\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143677 kubelet[2639]: I0213 15:46:33.143569 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-hostproc\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143677 kubelet[2639]: I0213 15:46:33.143593 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-config-path\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143677 kubelet[2639]: I0213 15:46:33.143611 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-cgroup\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143677 kubelet[2639]: I0213 15:46:33.143628 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8791479-15a6-4ce3-a79d-d262fda3b77b-clustermesh-secrets\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143677 kubelet[2639]: I0213 15:46:33.143644 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-lib-modules\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143916 kubelet[2639]: I0213 15:46:33.143659 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-etc-cni-netd\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143916 kubelet[2639]: I0213 15:46:33.143671 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-xtables-lock\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143916 kubelet[2639]: I0213 15:46:33.143685 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cni-path\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143916 kubelet[2639]: I0213 15:46:33.143698 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-net\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 
15:46:33.143916 kubelet[2639]: I0213 15:46:33.143717 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-run\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.143916 kubelet[2639]: I0213 15:46:33.143735 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svgk2\" (UniqueName: \"kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-kube-api-access-svgk2\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.144088 kubelet[2639]: I0213 15:46:33.143752 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-hubble-tls\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.144088 kubelet[2639]: I0213 15:46:33.143765 2639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-kernel\") pod \"e8791479-15a6-4ce3-a79d-d262fda3b77b\" (UID: \"e8791479-15a6-4ce3-a79d-d262fda3b77b\") " Feb 13 15:46:33.144088 kubelet[2639]: I0213 15:46:33.143801 2639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29ktn\" (UniqueName: \"kubernetes.io/projected/435b95a0-1761-444d-b40a-c01d0d80bca5-kube-api-access-29ktn\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.144088 kubelet[2639]: I0213 15:46:33.143699 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144088 kubelet[2639]: I0213 15:46:33.143844 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144215 kubelet[2639]: I0213 15:46:33.143714 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144215 kubelet[2639]: I0213 15:46:33.143864 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144215 kubelet[2639]: I0213 15:46:33.143735 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144215 kubelet[2639]: I0213 15:46:33.143755 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cni-path" (OuterVolumeSpecName: "cni-path") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144215 kubelet[2639]: I0213 15:46:33.143768 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144334 kubelet[2639]: I0213 15:46:33.143784 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144334 kubelet[2639]: I0213 15:46:33.143801 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-hostproc" (OuterVolumeSpecName: "hostproc") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144334 kubelet[2639]: I0213 15:46:33.143923 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 15:46:33.144334 kubelet[2639]: I0213 15:46:33.143812 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/435b95a0-1761-444d-b40a-c01d0d80bca5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.147586 kubelet[2639]: I0213 15:46:33.147541 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-kube-api-access-svgk2" (OuterVolumeSpecName: "kube-api-access-svgk2") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "kube-api-access-svgk2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:46:33.147878 kubelet[2639]: I0213 15:46:33.147844 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 15:46:33.148314 kubelet[2639]: I0213 15:46:33.148285 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 15:46:33.148937 kubelet[2639]: I0213 15:46:33.148900 2639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8791479-15a6-4ce3-a79d-d262fda3b77b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e8791479-15a6-4ce3-a79d-d262fda3b77b" (UID: "e8791479-15a6-4ce3-a79d-d262fda3b77b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 15:46:33.244401 kubelet[2639]: I0213 15:46:33.244343 2639 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244401 kubelet[2639]: I0213 15:46:33.244390 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244401 kubelet[2639]: I0213 15:46:33.244404 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244401 kubelet[2639]: I0213 15:46:33.244416 2639 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8791479-15a6-4ce3-a79d-d262fda3b77b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244428 2639 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244439 2639 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244450 2639 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244461 2639 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: 
I0213 15:46:33.244471 2639 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244481 2639 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244492 2639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-svgk2\" (UniqueName: \"kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-kube-api-access-svgk2\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244652 kubelet[2639]: I0213 15:46:33.244503 2639 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244846 kubelet[2639]: I0213 15:46:33.244513 2639 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8791479-15a6-4ce3-a79d-d262fda3b77b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.244846 kubelet[2639]: I0213 15:46:33.244523 2639 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8791479-15a6-4ce3-a79d-d262fda3b77b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:46:33.723553 kubelet[2639]: I0213 15:46:33.723505 2639 scope.go:117] "RemoveContainer" containerID="96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53" Feb 13 15:46:33.729753 systemd[1]: Removed slice kubepods-besteffort-pod435b95a0_1761_444d_b40a_c01d0d80bca5.slice - libcontainer container kubepods-besteffort-pod435b95a0_1761_444d_b40a_c01d0d80bca5.slice. Feb 13 15:46:33.731773 containerd[1511]: time="2025-02-13T15:46:33.731731220Z" level=info msg="RemoveContainer for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\"" Feb 13 15:46:33.734863 systemd[1]: Removed slice kubepods-burstable-pode8791479_15a6_4ce3_a79d_d262fda3b77b.slice - libcontainer container kubepods-burstable-pode8791479_15a6_4ce3_a79d_d262fda3b77b.slice. Feb 13 15:46:33.735154 systemd[1]: kubepods-burstable-pode8791479_15a6_4ce3_a79d_d262fda3b77b.slice: Consumed 7.175s CPU time, 126.7M memory peak, 224K read from disk, 13.3M written to disk. 
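The reconciler_common and operation_generator entries above trace the kubelet volume manager's teardown loop for the deleted cilium pods: each volume still mounted but no longer desired gets an UnmountVolume operation, the plugin's TearDown runs, and only then is the volume reported detached from the node. The compressed Python sketch below shows that reconcile-to-empty structure only; the volume names are a made-up subset and the real manager tracks desired and actual state in separate caches.

    # Illustrative reconciler: unmount anything mounted that is no longer desired,
    # mirroring the "UnmountVolume started" / "TearDown succeeded" / "Volume detached" lines.
    desired_volumes: set[str] = set()   # pod was deleted, so nothing is desired
    actual_volumes = {"bpf-maps", "hostproc", "cilium-config-path", "hubble-tls"}  # made-up subset

    def tear_down(volume: str) -> None:
        print(f'UnmountVolume.TearDown succeeded for volume "{volume}"')

    for volume in sorted(actual_volumes - desired_volumes):
        print(f'operationExecutor.UnmountVolume started for volume "{volume}"')
        tear_down(volume)
        print(f'Volume detached for volume "{volume}" on node "localhost"')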
Feb 13 15:46:33.736302 containerd[1511]: time="2025-02-13T15:46:33.736273069Z" level=info msg="RemoveContainer for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" returns successfully" Feb 13 15:46:33.736901 kubelet[2639]: I0213 15:46:33.736872 2639 scope.go:117] "RemoveContainer" containerID="96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53" Feb 13 15:46:33.737161 containerd[1511]: time="2025-02-13T15:46:33.737106095Z" level=error msg="ContainerStatus for \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\": not found" Feb 13 15:46:33.743727 kubelet[2639]: E0213 15:46:33.743688 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\": not found" containerID="96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53" Feb 13 15:46:33.743878 kubelet[2639]: I0213 15:46:33.743733 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53"} err="failed to get container status \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\": rpc error: code = NotFound desc = an error occurred when try to find container \"96ed453370c861fbd0999bfde58599d4e50a13e396781598fc757f122ec60c53\": not found" Feb 13 15:46:33.743878 kubelet[2639]: I0213 15:46:33.743810 2639 scope.go:117] "RemoveContainer" containerID="a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215" Feb 13 15:46:33.745246 containerd[1511]: time="2025-02-13T15:46:33.744966795Z" level=info msg="RemoveContainer for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\"" Feb 13 15:46:33.748981 containerd[1511]: time="2025-02-13T15:46:33.748920163Z" level=info msg="RemoveContainer for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" returns successfully" Feb 13 15:46:33.749173 kubelet[2639]: I0213 15:46:33.749130 2639 scope.go:117] "RemoveContainer" containerID="80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5" Feb 13 15:46:33.750079 containerd[1511]: time="2025-02-13T15:46:33.750046658Z" level=info msg="RemoveContainer for \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\"" Feb 13 15:46:33.793155 containerd[1511]: time="2025-02-13T15:46:33.793100295Z" level=info msg="RemoveContainer for \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\" returns successfully" Feb 13 15:46:33.793387 kubelet[2639]: I0213 15:46:33.793352 2639 scope.go:117] "RemoveContainer" containerID="7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91" Feb 13 15:46:33.794648 containerd[1511]: time="2025-02-13T15:46:33.794369241Z" level=info msg="RemoveContainer for \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\"" Feb 13 15:46:33.843266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8473c95cfdab2fd0da790f24adef5faad3fa194f96a4feb0c55a6e29ca0f8cd-rootfs.mount: Deactivated successfully. Feb 13 15:46:33.843389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20-rootfs.mount: Deactivated successfully. 
Feb 13 15:46:33.843497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b348f1ed41cb0895172d761371420054d9af80e467baf4a2c0af61851606ae20-shm.mount: Deactivated successfully. Feb 13 15:46:33.843621 systemd[1]: var-lib-kubelet-pods-e8791479\x2d15a6\x2d4ce3\x2da79d\x2dd262fda3b77b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsvgk2.mount: Deactivated successfully. Feb 13 15:46:33.843734 systemd[1]: var-lib-kubelet-pods-435b95a0\x2d1761\x2d444d\x2db40a\x2dc01d0d80bca5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29ktn.mount: Deactivated successfully. Feb 13 15:46:33.843843 systemd[1]: var-lib-kubelet-pods-e8791479\x2d15a6\x2d4ce3\x2da79d\x2dd262fda3b77b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:46:33.843966 systemd[1]: var-lib-kubelet-pods-e8791479\x2d15a6\x2d4ce3\x2da79d\x2dd262fda3b77b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:46:33.902868 containerd[1511]: time="2025-02-13T15:46:33.902802309Z" level=info msg="RemoveContainer for \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\" returns successfully" Feb 13 15:46:33.903346 kubelet[2639]: I0213 15:46:33.903069 2639 scope.go:117] "RemoveContainer" containerID="f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f" Feb 13 15:46:33.904167 containerd[1511]: time="2025-02-13T15:46:33.904121070Z" level=info msg="RemoveContainer for \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\"" Feb 13 15:46:33.922923 containerd[1511]: time="2025-02-13T15:46:33.922879731Z" level=info msg="RemoveContainer for \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\" returns successfully" Feb 13 15:46:33.923187 kubelet[2639]: I0213 15:46:33.923139 2639 scope.go:117] "RemoveContainer" containerID="adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a" Feb 13 15:46:33.924320 containerd[1511]: time="2025-02-13T15:46:33.924279146Z" level=info msg="RemoveContainer for \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\"" Feb 13 15:46:33.928445 containerd[1511]: time="2025-02-13T15:46:33.928395054Z" level=info msg="RemoveContainer for \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\" returns successfully" Feb 13 15:46:33.928585 kubelet[2639]: I0213 15:46:33.928554 2639 scope.go:117] "RemoveContainer" containerID="a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215" Feb 13 15:46:33.928823 containerd[1511]: time="2025-02-13T15:46:33.928789947Z" level=error msg="ContainerStatus for \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\": not found" Feb 13 15:46:33.928969 kubelet[2639]: E0213 15:46:33.928943 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\": not found" containerID="a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215" Feb 13 15:46:33.929004 kubelet[2639]: I0213 15:46:33.928970 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215"} err="failed to get container status 
\"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\": rpc error: code = NotFound desc = an error occurred when try to find container \"a87c835f14fe68a362a02f3e84d8135f6f39a041f96762dc453e0f68b967e215\": not found" Feb 13 15:46:33.929004 kubelet[2639]: I0213 15:46:33.928989 2639 scope.go:117] "RemoveContainer" containerID="80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5" Feb 13 15:46:33.929161 containerd[1511]: time="2025-02-13T15:46:33.929129003Z" level=error msg="ContainerStatus for \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\": not found" Feb 13 15:46:33.929294 kubelet[2639]: E0213 15:46:33.929266 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\": not found" containerID="80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5" Feb 13 15:46:33.929402 kubelet[2639]: I0213 15:46:33.929306 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5"} err="failed to get container status \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"80b9e0506527291ab5e695f28c0a0698af80793dfc9f4a2f5810e49b2db820b5\": not found" Feb 13 15:46:33.929402 kubelet[2639]: I0213 15:46:33.929336 2639 scope.go:117] "RemoveContainer" containerID="7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91" Feb 13 15:46:33.929551 containerd[1511]: time="2025-02-13T15:46:33.929511491Z" level=error msg="ContainerStatus for \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\": not found" Feb 13 15:46:33.929665 kubelet[2639]: E0213 15:46:33.929641 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\": not found" containerID="7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91" Feb 13 15:46:33.929703 kubelet[2639]: I0213 15:46:33.929665 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91"} err="failed to get container status \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\": rpc error: code = NotFound desc = an error occurred when try to find container \"7616c9bed65901019517de6f08b69fbebd205a292e3873af584f7cbd729b0d91\": not found" Feb 13 15:46:33.929703 kubelet[2639]: I0213 15:46:33.929679 2639 scope.go:117] "RemoveContainer" containerID="f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f" Feb 13 15:46:33.929841 containerd[1511]: time="2025-02-13T15:46:33.929812224Z" level=error msg="ContainerStatus for \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\": not found" Feb 13 15:46:33.929949 kubelet[2639]: E0213 15:46:33.929919 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\": not found" containerID="f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f" Feb 13 15:46:33.929978 kubelet[2639]: I0213 15:46:33.929948 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f"} err="failed to get container status \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f599a8c950f7d0c11ff40950ce59c9d00b74c65d91eef1f79c8860c14ba2057f\": not found" Feb 13 15:46:33.929978 kubelet[2639]: I0213 15:46:33.929961 2639 scope.go:117] "RemoveContainer" containerID="adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a" Feb 13 15:46:33.930108 containerd[1511]: time="2025-02-13T15:46:33.930081847Z" level=error msg="ContainerStatus for \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\": not found" Feb 13 15:46:33.930211 kubelet[2639]: E0213 15:46:33.930188 2639 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\": not found" containerID="adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a" Feb 13 15:46:33.930237 kubelet[2639]: I0213 15:46:33.930215 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a"} err="failed to get container status \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\": rpc error: code = NotFound desc = an error occurred when try to find container \"adea544b88e3fa35b3f300554f229543138492cf183c98ca103b287254a1d91a\": not found" Feb 13 15:46:34.514974 kubelet[2639]: I0213 15:46:34.514918 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435b95a0-1761-444d-b40a-c01d0d80bca5" path="/var/lib/kubelet/pods/435b95a0-1761-444d-b40a-c01d0d80bca5/volumes" Feb 13 15:46:34.515574 kubelet[2639]: I0213 15:46:34.515548 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8791479-15a6-4ce3-a79d-d262fda3b77b" path="/var/lib/kubelet/pods/e8791479-15a6-4ce3-a79d-d262fda3b77b/volumes" Feb 13 15:46:34.790977 sshd[4321]: Connection closed by 10.0.0.1 port 51622 Feb 13 15:46:34.792319 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:34.806056 systemd[1]: sshd@26-10.0.0.58:22-10.0.0.1:51622.service: Deactivated successfully. Feb 13 15:46:34.808077 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:46:34.808835 systemd-logind[1500]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:46:34.817213 systemd[1]: Started sshd@27-10.0.0.58:22-10.0.0.1:51626.service - OpenSSH per-connection server daemon (10.0.0.1:51626). Feb 13 15:46:34.817872 systemd-logind[1500]: Removed session 27. 
Feb 13 15:46:34.849365 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 51626 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:34.850693 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:34.855228 systemd-logind[1500]: New session 28 of user core. Feb 13 15:46:34.866049 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:46:35.327577 sshd[4483]: Connection closed by 10.0.0.1 port 51626 Feb 13 15:46:35.328070 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:35.341267 systemd[1]: sshd@27-10.0.0.58:22-10.0.0.1:51626.service: Deactivated successfully. Feb 13 15:46:35.343285 kubelet[2639]: I0213 15:46:35.343167 2639 memory_manager.go:355] "RemoveStaleState removing state" podUID="435b95a0-1761-444d-b40a-c01d0d80bca5" containerName="cilium-operator" Feb 13 15:46:35.343285 kubelet[2639]: I0213 15:46:35.343201 2639 memory_manager.go:355] "RemoveStaleState removing state" podUID="e8791479-15a6-4ce3-a79d-d262fda3b77b" containerName="cilium-agent" Feb 13 15:46:35.344383 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:46:35.347202 systemd-logind[1500]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:46:35.357333 systemd[1]: Started sshd@28-10.0.0.58:22-10.0.0.1:51630.service - OpenSSH per-connection server daemon (10.0.0.1:51630). Feb 13 15:46:35.361390 systemd-logind[1500]: Removed session 28. Feb 13 15:46:35.369552 systemd[1]: Created slice kubepods-burstable-pod25a34045_6b2a_45c4_b0e7_fcb82328a0d9.slice - libcontainer container kubepods-burstable-pod25a34045_6b2a_45c4_b0e7_fcb82328a0d9.slice. Feb 13 15:46:35.400072 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 51630 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:35.401455 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:35.406181 systemd-logind[1500]: New session 29 of user core. Feb 13 15:46:35.411074 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 15:46:35.456711 kubelet[2639]: I0213 15:46:35.456581 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-cni-path\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456711 kubelet[2639]: I0213 15:46:35.456624 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-cilium-config-path\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456711 kubelet[2639]: I0213 15:46:35.456650 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkgr\" (UniqueName: \"kubernetes.io/projected/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-kube-api-access-6tkgr\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456711 kubelet[2639]: I0213 15:46:35.456670 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-hostproc\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456711 kubelet[2639]: I0213 15:46:35.456696 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-host-proc-sys-kernel\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456711 kubelet[2639]: I0213 15:46:35.456714 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-cilium-run\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456975 kubelet[2639]: I0213 15:46:35.456732 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-cilium-ipsec-secrets\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456975 kubelet[2639]: I0213 15:46:35.456784 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-lib-modules\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456975 kubelet[2639]: I0213 15:46:35.456837 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-hubble-tls\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456975 kubelet[2639]: I0213 15:46:35.456855 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-bpf-maps\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456975 kubelet[2639]: I0213 15:46:35.456875 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-cilium-cgroup\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.456975 kubelet[2639]: I0213 15:46:35.456892 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-etc-cni-netd\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.457101 kubelet[2639]: I0213 15:46:35.456909 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-clustermesh-secrets\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.457101 kubelet[2639]: I0213 15:46:35.456952 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-host-proc-sys-net\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.457101 kubelet[2639]: I0213 15:46:35.456970 2639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25a34045-6b2a-45c4-b0e7-fcb82328a0d9-xtables-lock\") pod \"cilium-78k2l\" (UID: \"25a34045-6b2a-45c4-b0e7-fcb82328a0d9\") " pod="kube-system/cilium-78k2l" Feb 13 15:46:35.462798 sshd[4500]: Connection closed by 10.0.0.1 port 51630 Feb 13 15:46:35.463236 sshd-session[4496]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:35.480599 systemd[1]: sshd@28-10.0.0.58:22-10.0.0.1:51630.service: Deactivated successfully. Feb 13 15:46:35.482585 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:46:35.484387 systemd-logind[1500]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:46:35.491285 systemd[1]: Started sshd@29-10.0.0.58:22-10.0.0.1:51638.service - OpenSSH per-connection server daemon (10.0.0.1:51638). Feb 13 15:46:35.492177 systemd-logind[1500]: Removed session 29. Feb 13 15:46:35.523379 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 51638 ssh2: RSA SHA256:qTSy0ChfiFH5vISztGlcAnSrqMn/Km98y5jL35ufI20 Feb 13 15:46:35.524618 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:46:35.529111 systemd-logind[1500]: New session 30 of user core. Feb 13 15:46:35.541095 systemd[1]: Started session-30.scope - Session 30 of User core. 
Feb 13 15:46:35.674637 kubelet[2639]: E0213 15:46:35.674475 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:35.675101 containerd[1511]: time="2025-02-13T15:46:35.675075254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-78k2l,Uid:25a34045-6b2a-45c4-b0e7-fcb82328a0d9,Namespace:kube-system,Attempt:0,}" Feb 13 15:46:35.872371 containerd[1511]: time="2025-02-13T15:46:35.871778564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:46:35.872371 containerd[1511]: time="2025-02-13T15:46:35.872338900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:46:35.872371 containerd[1511]: time="2025-02-13T15:46:35.872353237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:46:35.872554 containerd[1511]: time="2025-02-13T15:46:35.872432047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:46:35.892108 systemd[1]: Started cri-containerd-f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4.scope - libcontainer container f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4. Feb 13 15:46:35.914143 containerd[1511]: time="2025-02-13T15:46:35.914095499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-78k2l,Uid:25a34045-6b2a-45c4-b0e7-fcb82328a0d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\"" Feb 13 15:46:35.914833 kubelet[2639]: E0213 15:46:35.914811 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:35.916905 containerd[1511]: time="2025-02-13T15:46:35.916852165Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:46:35.931776 containerd[1511]: time="2025-02-13T15:46:35.931654062Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244\"" Feb 13 15:46:35.932286 containerd[1511]: time="2025-02-13T15:46:35.932115880Z" level=info msg="StartContainer for \"9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244\"" Feb 13 15:46:35.958077 systemd[1]: Started cri-containerd-9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244.scope - libcontainer container 9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244. Feb 13 15:46:35.983591 containerd[1511]: time="2025-02-13T15:46:35.983549304Z" level=info msg="StartContainer for \"9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244\" returns successfully" Feb 13 15:46:35.991485 systemd[1]: cri-containerd-9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244.scope: Deactivated successfully. 
Feb 13 15:46:36.032105 containerd[1511]: time="2025-02-13T15:46:36.031866467Z" level=info msg="shim disconnected" id=9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244 namespace=k8s.io Feb 13 15:46:36.032105 containerd[1511]: time="2025-02-13T15:46:36.031949615Z" level=warning msg="cleaning up after shim disconnected" id=9d087136d64d97142ced40305c5da5a89dc6792a87f16ef88863befa938cb244 namespace=k8s.io Feb 13 15:46:36.032105 containerd[1511]: time="2025-02-13T15:46:36.031960406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:36.570161 kubelet[2639]: E0213 15:46:36.570114 2639 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:46:36.737007 kubelet[2639]: E0213 15:46:36.736975 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:36.738687 containerd[1511]: time="2025-02-13T15:46:36.738650115Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:46:36.753183 containerd[1511]: time="2025-02-13T15:46:36.753128486Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81\"" Feb 13 15:46:36.753893 containerd[1511]: time="2025-02-13T15:46:36.753859767Z" level=info msg="StartContainer for \"999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81\"" Feb 13 15:46:36.787087 systemd[1]: Started cri-containerd-999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81.scope - libcontainer container 999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81. Feb 13 15:46:36.814097 containerd[1511]: time="2025-02-13T15:46:36.814033474Z" level=info msg="StartContainer for \"999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81\" returns successfully" Feb 13 15:46:36.821062 systemd[1]: cri-containerd-999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81.scope: Deactivated successfully. Feb 13 15:46:36.958460 containerd[1511]: time="2025-02-13T15:46:36.958400127Z" level=info msg="shim disconnected" id=999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81 namespace=k8s.io Feb 13 15:46:36.958460 containerd[1511]: time="2025-02-13T15:46:36.958454040Z" level=warning msg="cleaning up after shim disconnected" id=999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81 namespace=k8s.io Feb 13 15:46:36.958460 containerd[1511]: time="2025-02-13T15:46:36.958462556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:37.563297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999be0426ea975f858e6d1a736be899b14522ae41de03e1c54d8256f6241df81-rootfs.mount: Deactivated successfully. 
Feb 13 15:46:37.740072 kubelet[2639]: E0213 15:46:37.740040 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:37.741621 containerd[1511]: time="2025-02-13T15:46:37.741584340Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:46:37.811241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141663255.mount: Deactivated successfully. Feb 13 15:46:37.815079 containerd[1511]: time="2025-02-13T15:46:37.814989340Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe\"" Feb 13 15:46:37.815750 containerd[1511]: time="2025-02-13T15:46:37.815580583Z" level=info msg="StartContainer for \"dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe\"" Feb 13 15:46:37.846193 systemd[1]: Started cri-containerd-dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe.scope - libcontainer container dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe. Feb 13 15:46:37.879451 systemd[1]: cri-containerd-dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe.scope: Deactivated successfully. Feb 13 15:46:37.880723 containerd[1511]: time="2025-02-13T15:46:37.880533960Z" level=info msg="StartContainer for \"dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe\" returns successfully" Feb 13 15:46:37.904574 containerd[1511]: time="2025-02-13T15:46:37.904492851Z" level=info msg="shim disconnected" id=dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe namespace=k8s.io Feb 13 15:46:37.904574 containerd[1511]: time="2025-02-13T15:46:37.904556062Z" level=warning msg="cleaning up after shim disconnected" id=dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe namespace=k8s.io Feb 13 15:46:37.904574 containerd[1511]: time="2025-02-13T15:46:37.904572994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:38.454580 kubelet[2639]: I0213 15:46:38.454511 2639 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:46:38Z","lastTransitionTime":"2025-02-13T15:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:46:38.562828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfb77bcd8fa0718abb0a2b2031c846b18708a19e4e4553f8a9125224ce33eabe-rootfs.mount: Deactivated successfully. 
Feb 13 15:46:38.746068 kubelet[2639]: E0213 15:46:38.746032 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:38.748138 containerd[1511]: time="2025-02-13T15:46:38.748099001Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:46:38.763229 containerd[1511]: time="2025-02-13T15:46:38.763187454Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42\"" Feb 13 15:46:38.763699 containerd[1511]: time="2025-02-13T15:46:38.763577105Z" level=info msg="StartContainer for \"f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42\"" Feb 13 15:46:38.792059 systemd[1]: Started cri-containerd-f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42.scope - libcontainer container f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42. Feb 13 15:46:38.814681 systemd[1]: cri-containerd-f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42.scope: Deactivated successfully. Feb 13 15:46:38.817387 containerd[1511]: time="2025-02-13T15:46:38.816901736Z" level=info msg="StartContainer for \"f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42\" returns successfully" Feb 13 15:46:38.838739 containerd[1511]: time="2025-02-13T15:46:38.838679145Z" level=info msg="shim disconnected" id=f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42 namespace=k8s.io Feb 13 15:46:38.838739 containerd[1511]: time="2025-02-13T15:46:38.838727307Z" level=warning msg="cleaning up after shim disconnected" id=f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42 namespace=k8s.io Feb 13 15:46:38.838739 containerd[1511]: time="2025-02-13T15:46:38.838737847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:46:39.562900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8e1c567f040e0cff5b8983e330a0700e500d3d8551f31d74e5bb2daae9f1d42-rootfs.mount: Deactivated successfully. Feb 13 15:46:39.767308 kubelet[2639]: E0213 15:46:39.765978 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:39.767900 containerd[1511]: time="2025-02-13T15:46:39.767837174Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:46:40.003018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820556590.mount: Deactivated successfully. 
Feb 13 15:46:40.078096 containerd[1511]: time="2025-02-13T15:46:40.078020859Z" level=info msg="CreateContainer within sandbox \"f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"864807dc33f6db57d84adb342c9caa0925e584010d845b8ecf90d8cd86cdde14\"" Feb 13 15:46:40.078624 containerd[1511]: time="2025-02-13T15:46:40.078589820Z" level=info msg="StartContainer for \"864807dc33f6db57d84adb342c9caa0925e584010d845b8ecf90d8cd86cdde14\"" Feb 13 15:46:40.112131 systemd[1]: Started cri-containerd-864807dc33f6db57d84adb342c9caa0925e584010d845b8ecf90d8cd86cdde14.scope - libcontainer container 864807dc33f6db57d84adb342c9caa0925e584010d845b8ecf90d8cd86cdde14. Feb 13 15:46:40.148344 containerd[1511]: time="2025-02-13T15:46:40.148291593Z" level=info msg="StartContainer for \"864807dc33f6db57d84adb342c9caa0925e584010d845b8ecf90d8cd86cdde14\" returns successfully" Feb 13 15:46:40.607968 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 15:46:40.769397 kubelet[2639]: E0213 15:46:40.769367 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:41.512312 kubelet[2639]: E0213 15:46:41.512248 2639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-bbvsv" podUID="d571c51c-52d9-41be-ad07-69afc93857ba" Feb 13 15:46:41.770828 kubelet[2639]: E0213 15:46:41.770702 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:43.513397 kubelet[2639]: E0213 15:46:43.513344 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:43.599182 systemd-networkd[1463]: lxc_health: Link UP Feb 13 15:46:43.600379 systemd-networkd[1463]: lxc_health: Gained carrier Feb 13 15:46:43.677434 kubelet[2639]: E0213 15:46:43.677387 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:43.703786 kubelet[2639]: I0213 15:46:43.703322 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-78k2l" podStartSLOduration=8.703303673 podStartE2EDuration="8.703303673s" podCreationTimestamp="2025-02-13 15:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:46:40.958805523 +0000 UTC m=+94.523537306" watchObservedRunningTime="2025-02-13 15:46:43.703303673 +0000 UTC m=+97.268035445" Feb 13 15:46:43.773637 kubelet[2639]: E0213 15:46:43.773496 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:44.226539 systemd[1]: run-containerd-runc-k8s.io-864807dc33f6db57d84adb342c9caa0925e584010d845b8ecf90d8cd86cdde14-runc.yg5MuD.mount: Deactivated successfully. 
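The containerd entries from 15:46:35 onward show the replacement cilium-78k2l pod coming up through its init steps (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent container starts and the lxc_health link comes up. As a companion to the sketch above (again an editorial illustration, with the regexes as assumptions about how these containerd messages are quoted in the dump), the start order can be reconstructed by pairing each "CreateContainer ... returns container id" message with its matching "StartContainer ... returns successfully" message:

```python
#!/usr/bin/env python3
"""Sketch: recover the container start order for a sandbox from a journal dump.

The regexes target the containerd CreateContainer/StartContainer info messages as
they appear (escaped) in the log above; they are assumptions, not a containerd API.
"""
import re
import sys

# Matches the "returns container id" form, which maps a container name to its id.
CREATED = re.compile(
    r'CreateContainer within sandbox \\"(?P<sandbox>[0-9a-f]+)\\" for '
    r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} '
    r'returns container id \\"(?P<cid>[0-9a-f]+)\\"'
)
# Matches only the completion message, not the initial StartContainer request.
STARTED = re.compile(r'StartContainer for \\"(?P<cid>[0-9a-f]+)\\" returns successfully')

def start_order(text: str, sandbox_prefix: str = ""):
    """Return container names in the order their StartContainer calls completed.

    Only containers created in a sandbox whose id starts with sandbox_prefix are
    considered; an empty prefix keeps every sandbox in the dump.
    """
    names = {
        m.group("cid"): m.group("name")
        for m in CREATED.finditer(text)
        if m.group("sandbox").startswith(sandbox_prefix)
    }
    return [names[m.group("cid")] for m in STARTED.finditer(text) if m.group("cid") in names]

if __name__ == "__main__":
    prefix = sys.argv[1] if len(sys.argv) > 1 else ""
    print(" -> ".join(start_order(sys.stdin.read(), prefix)))
```

Filtered to sandbox f6e24fcfed44e2c43029c4ec1faa9974e6b7d86cee6f58e9f9c640a4d1819af4, the entries above should yield mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent.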
Feb 13 15:46:44.775102 kubelet[2639]: E0213 15:46:44.775061 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:45.177158 systemd-networkd[1463]: lxc_health: Gained IPv6LL Feb 13 15:46:50.515067 kubelet[2639]: E0213 15:46:50.515028 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:46:50.582995 sshd[4509]: Connection closed by 10.0.0.1 port 51638 Feb 13 15:46:50.583563 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Feb 13 15:46:50.587582 systemd[1]: sshd@29-10.0.0.58:22-10.0.0.1:51638.service: Deactivated successfully. Feb 13 15:46:50.589630 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 15:46:50.590294 systemd-logind[1500]: Session 30 logged out. Waiting for processes to exit. Feb 13 15:46:50.591186 systemd-logind[1500]: Removed session 30.