Mar 17 17:44:20.919487 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:44:20.919517 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:44:20.919532 kernel: BIOS-provided physical RAM map:
Mar 17 17:44:20.919541 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:44:20.919549 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 17:44:20.919558 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 17:44:20.919568 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 17:44:20.919577 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 17:44:20.919586 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 17 17:44:20.919594 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 17 17:44:20.919607 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 17 17:44:20.919620 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 17 17:44:20.919629 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 17 17:44:20.919638 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 17 17:44:20.919652 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 17 17:44:20.919662 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 17:44:20.919676 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 17 17:44:20.919685 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 17 17:44:20.919694 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 17 17:44:20.919703 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 17 17:44:20.919712 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 17 17:44:20.919743 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 17:44:20.919754 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 17 17:44:20.919763 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:44:20.919772 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 17 17:44:20.919781 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:44:20.919791 kernel: NX (Execute Disable) protection: active
Mar 17 17:44:20.919805 kernel: APIC: Static calls initialized
Mar 17 17:44:20.919815 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 17 17:44:20.919825 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 17 17:44:20.919834 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 17 17:44:20.919843 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 17 17:44:20.919852 kernel: extended physical RAM map:
Mar 17 17:44:20.919861 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:44:20.919871 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 17:44:20.919880 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 17:44:20.919889 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 17:44:20.919899 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 17:44:20.919912 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 17 17:44:20.919922 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 17 17:44:20.919937 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Mar 17 17:44:20.919947 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Mar 17 17:44:20.919961 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Mar 17 17:44:20.919971 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Mar 17 17:44:20.919981 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Mar 17 17:44:20.919995 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 17 17:44:20.920005 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 17 17:44:20.920014 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 17 17:44:20.920024 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 17 17:44:20.920034 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 17:44:20.920044 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 17 17:44:20.920054 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 17 17:44:20.920064 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 17 17:44:20.920073 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 17 17:44:20.920088 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 17 17:44:20.920098 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 17:44:20.920108 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 17 17:44:20.920118 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:44:20.920131 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 17 17:44:20.920141 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:44:20.920151 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:44:20.920161 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Mar 17 17:44:20.920171 kernel: random: crng init done
Mar 17 17:44:20.920181 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 17 17:44:20.920191 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 17 17:44:20.920208 kernel: secureboot: Secure boot disabled
Mar 17 17:44:20.920218 kernel: SMBIOS 2.8 present.
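The firmware map above is the ground truth for what the kernel may use. As a quick offline sanity check, the usable ranges can be totalled from a saved copy of this journal; the following Python sketch is illustrative only, and the boot.log filename is an assumption.

```python
import re

# Matches e820 lines such as:
#   kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def sum_usable(log_path: str) -> int:
    """Return the number of bytes the firmware marked 'usable'."""
    total = 0
    with open(log_path) as fh:
        for line in fh:
            m = E820_RE.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
    return total

if __name__ == "__main__":
    usable = sum_usable("boot.log")  # hypothetical saved copy of this journal
    print(f"usable RAM per e820: {usable / 2**20:.1f} MiB")
```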
Mar 17 17:44:20.920228 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 17 17:44:20.920238 kernel: Hypervisor detected: KVM
Mar 17 17:44:20.920248 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:44:20.920257 kernel: kvm-clock: using sched offset of 3604828500 cycles
Mar 17 17:44:20.920268 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:44:20.920278 kernel: tsc: Detected 2794.746 MHz processor
Mar 17 17:44:20.920289 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:44:20.920299 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:44:20.920309 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 17 17:44:20.920323 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 17 17:44:20.920334 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:44:20.920344 kernel: Using GB pages for direct mapping
Mar 17 17:44:20.920354 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:44:20.920364 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 17 17:44:20.920374 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:44:20.920385 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:44:20.920395 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:44:20.920405 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 17 17:44:20.920419 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:44:20.920429 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:44:20.920439 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:44:20.920450 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:44:20.920460 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:44:20.920481 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 17 17:44:20.920492 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 17 17:44:20.920502 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 17 17:44:20.920517 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 17 17:44:20.920547 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 17 17:44:20.920568 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 17 17:44:20.920604 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 17 17:44:20.920625 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 17 17:44:20.920636 kernel: No NUMA configuration found
Mar 17 17:44:20.920646 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 17 17:44:20.920656 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Mar 17 17:44:20.920666 kernel: Zone ranges:
Mar 17 17:44:20.920676 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:44:20.920692 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 17 17:44:20.920706 kernel: Normal empty
Mar 17 17:44:20.920740 kernel: Movable zone start for each node
Mar 17 17:44:20.920750 kernel: Early memory node ranges
Mar 17 17:44:20.920760 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 17:44:20.920770 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 17 17:44:20.920780 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 17 17:44:20.920790 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 17 17:44:20.920800 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 17 17:44:20.920816 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 17 17:44:20.920826 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Mar 17 17:44:20.920836 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Mar 17 17:44:20.920846 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 17 17:44:20.920856 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:44:20.920867 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 17:44:20.920888 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 17 17:44:20.920902 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:44:20.920912 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 17 17:44:20.920922 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 17 17:44:20.920932 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 17 17:44:20.920948 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 17 17:44:20.920962 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 17 17:44:20.920972 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:44:20.920983 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:44:20.920994 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:44:20.921005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:44:20.921019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:44:20.921030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:44:20.921041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:44:20.921051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:44:20.921062 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:44:20.921073 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:44:20.921083 kernel: TSC deadline timer available
Mar 17 17:44:20.921093 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:44:20.921104 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:44:20.921118 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:44:20.921128 kernel: kvm-guest: setup PV sched yield
Mar 17 17:44:20.921138 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 17 17:44:20.921149 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:44:20.921160 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:44:20.921170 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:44:20.921180 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:44:20.921191 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:44:20.921201 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:44:20.921215 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:44:20.921226 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:44:20.921238 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:44:20.921249 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:44:20.921263 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:44:20.921274 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:44:20.921284 kernel: Fallback order for Node 0: 0
Mar 17 17:44:20.921295 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Mar 17 17:44:20.921308 kernel: Policy zone: DMA32
Mar 17 17:44:20.921319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:44:20.921330 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 175776K reserved, 0K cma-reserved)
Mar 17 17:44:20.921341 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:44:20.921351 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:44:20.921362 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:44:20.921372 kernel: Dynamic Preempt: voluntary
Mar 17 17:44:20.921382 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:44:20.921394 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:44:20.921408 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:44:20.921419 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:44:20.921429 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:44:20.921440 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:44:20.921451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:44:20.921462 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:44:20.921483 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:44:20.921494 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:44:20.921504 kernel: Console: colour dummy device 80x25
Mar 17 17:44:20.921517 kernel: printk: console [ttyS0] enabled
Mar 17 17:44:20.921527 kernel: ACPI: Core revision 20230628
Mar 17 17:44:20.921538 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:44:20.921548 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:44:20.921558 kernel: x2apic enabled
Mar 17 17:44:20.921568 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:44:20.921582 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:44:20.921592 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:44:20.921634 kernel: kvm-guest: setup PV IPIs
Mar 17 17:44:20.921651 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:44:20.921661 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:44:20.921672 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Mar 17 17:44:20.921682 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:44:20.921692 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:44:20.921708 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:44:20.921718 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:44:20.921745 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:44:20.921756 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:44:20.921772 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:44:20.921783 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:44:20.921793 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:44:20.921804 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:44:20.921815 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:44:20.921825 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:44:20.921840 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:44:20.921851 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:44:20.921861 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:44:20.921875 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:44:20.921885 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:44:20.921895 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:44:20.921906 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:44:20.921916 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:44:20.921927 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:44:20.921937 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:44:20.921947 kernel: landlock: Up and running.
Mar 17 17:44:20.921961 kernel: SELinux: Initializing.
Mar 17 17:44:20.921972 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:44:20.921982 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:44:20.921992 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:44:20.922002 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:44:20.922012 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:44:20.922022 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:44:20.922032 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:44:20.922042 kernel: ... version: 0
Mar 17 17:44:20.922056 kernel: ... bit width: 48
Mar 17 17:44:20.922066 kernel: ... generic registers: 6
Mar 17 17:44:20.922076 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:44:20.922087 kernel: ... max period: 00007fffffffffff
Mar 17 17:44:20.922098 kernel: ... fixed-purpose events: 0
Mar 17 17:44:20.922108 kernel: ... event mask: 000000000000003f
Mar 17 17:44:20.922119 kernel: signal: max sigframe size: 1776
Mar 17 17:44:20.922130 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:44:20.922141 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:44:20.922156 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:44:20.922167 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:44:20.922177 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:44:20.922188 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:44:20.922199 kernel: smpboot: Max logical packages: 1
Mar 17 17:44:20.922209 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Mar 17 17:44:20.922220 kernel: devtmpfs: initialized
Mar 17 17:44:20.922231 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:44:20.922242 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 17 17:44:20.922253 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 17 17:44:20.922267 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 17 17:44:20.922278 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 17 17:44:20.922289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Mar 17 17:44:20.922300 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 17 17:44:20.922311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:44:20.922322 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:44:20.922333 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:44:20.922344 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:44:20.922357 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:44:20.922369 kernel: audit: type=2000 audit(1742233460.837:1): state=initialized audit_enabled=0 res=1
Mar 17 17:44:20.922379 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:44:20.922390 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:44:20.922402 kernel: cpuidle: using governor menu
Mar 17 17:44:20.922413 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:44:20.922423 kernel: dca service started, version 1.12.1
Mar 17 17:44:20.922434 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 17 17:44:20.922445 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:44:20.922459 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
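The "Kernel command line" entry above carries every knob this boot depends on: root=LABEL=ROOT, the dm-verity PARTUUID, and the verity.usrhash root hash. For log analysis it can be split into a dictionary; a minimal sketch, assuming the string is copied verbatim from the journal (duplicated keys such as rootflags simply keep their last value):

```python
# Hypothetical helper, not part of the boot: split a "Command line:" entry
# from this journal into bare flags and key=value parameters.
CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
    "flatcar.first_boot=detected "
    "verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0"
)

def parse_cmdline(raw: str) -> dict:
    """Map each parameter to its value; bare flags map to None."""
    params = {}
    for token in raw.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

params = parse_cmdline(CMDLINE)
print(params["root"])            # LABEL=ROOT
print(params["verity.usrhash"])  # dm-verity root hash for the /usr partition
```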
Mar 17 17:44:20.922479 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:44:20.922491 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:44:20.922502 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:44:20.922512 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:44:20.922523 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:44:20.922534 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:44:20.922545 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:44:20.922556 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:44:20.922571 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:44:20.922582 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:44:20.922592 kernel: ACPI: Interpreter enabled
Mar 17 17:44:20.922604 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:44:20.922615 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:44:20.922626 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:44:20.922636 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:44:20.922647 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:44:20.922658 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:44:20.923056 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:44:20.923246 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:44:20.923423 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:44:20.923439 kernel: PCI host bridge to bus 0000:00
Mar 17 17:44:20.923647 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:44:20.923825 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:44:20.923994 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:44:20.924156 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 17 17:44:20.924322 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 17 17:44:20.924486 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 17 17:44:20.924633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:44:20.924878 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:44:20.925055 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 17:44:20.925217 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 17 17:44:20.925363 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 17 17:44:20.925512 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 17 17:44:20.925640 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 17 17:44:20.925798 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:44:20.925993 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:44:20.926123 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 17 17:44:20.926256 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 17 17:44:20.926387 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 17 17:44:20.926561 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:44:20.926699 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 17 17:44:20.926857 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 17 17:44:20.927001 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 17 17:44:20.927147 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:44:20.927281 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 17 17:44:20.927408 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 17 17:44:20.927545 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 17 17:44:20.927672 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 17 17:44:20.927842 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:44:20.927974 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:44:20.928121 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:44:20.928262 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 17 17:44:20.928399 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 17 17:44:20.928554 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:44:20.928682 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 17 17:44:20.928692 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:44:20.928700 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:44:20.928713 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:44:20.928739 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:44:20.928751 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:44:20.928762 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:44:20.928772 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:44:20.928783 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:44:20.928794 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:44:20.928804 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:44:20.928815 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:44:20.928828 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:44:20.928836 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:44:20.928844 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:44:20.928852 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:44:20.928860 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:44:20.928867 kernel: iommu: Default domain type: Translated
Mar 17 17:44:20.928875 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:44:20.928883 kernel: efivars: Registered efivars operations
Mar 17 17:44:20.928891 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:44:20.928899 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:44:20.928909 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 17 17:44:20.928917 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 17 17:44:20.928925 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Mar 17 17:44:20.928932 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Mar 17 17:44:20.928940 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 17 17:44:20.928948 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 17 17:44:20.928956 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Mar 17 17:44:20.928963 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 17 17:44:20.929103 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:44:20.929227 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:44:20.929349 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:44:20.929360 kernel: vgaarb: loaded
Mar 17 17:44:20.929368 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:44:20.929376 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:44:20.929384 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:44:20.929391 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:44:20.929399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:44:20.929411 kernel: pnp: PnP ACPI init
Mar 17 17:44:20.929568 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 17 17:44:20.929581 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 17:44:20.929589 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:44:20.929597 kernel: NET: Registered PF_INET protocol family
Mar 17 17:44:20.929625 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:44:20.929636 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:44:20.929644 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:44:20.929655 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:44:20.929666 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:44:20.929674 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:44:20.929682 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:44:20.929690 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:44:20.929698 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:44:20.929707 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:44:20.929958 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 17 17:44:20.930097 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 17 17:44:20.930213 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:44:20.930348 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:44:20.930514 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:44:20.930670 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 17 17:44:20.930882 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 17 17:44:20.931039 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 17 17:44:20.931056 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:44:20.931074 kernel: Initialise system trusted keyrings
Mar 17 17:44:20.931085 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:44:20.931096 kernel: Key type asymmetric registered
Mar 17 17:44:20.931107 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:44:20.931119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:44:20.931130 kernel: io scheduler mq-deadline registered
Mar 17 17:44:20.931141 kernel: io scheduler kyber registered
Mar 17 17:44:20.931152 kernel: io scheduler bfq registered
Mar 17 17:44:20.931163 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:44:20.931179 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:44:20.931191 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:44:20.931206 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 17:44:20.931217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:44:20.931228 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:44:20.931240 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:44:20.931255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:44:20.931267 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:44:20.931496 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 17:44:20.931516 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:44:20.931673 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 17:44:20.931857 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:44:20 UTC (1742233460)
Mar 17 17:44:20.932020 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:44:20.932037 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 17:44:20.932054 kernel: efifb: probing for efifb
Mar 17 17:44:20.932065 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 17 17:44:20.932077 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 17 17:44:20.932088 kernel: efifb: scrolling: redraw
Mar 17 17:44:20.932099 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:44:20.932110 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:44:20.932121 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:44:20.932132 kernel: pstore: Using crash dump compression: deflate
Mar 17 17:44:20.932143 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 17 17:44:20.932159 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:44:20.932170 kernel: Segment Routing with IPv6
Mar 17 17:44:20.932180 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:44:20.932191 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:44:20.932202 kernel: Key type dns_resolver registered
Mar 17 17:44:20.932213 kernel: IPI shorthand broadcast: enabled
Mar 17 17:44:20.932224 kernel: sched_clock: Marking stable (1089003322, 153484124)->(1296970199, -54482753)
Mar 17 17:44:20.932235 kernel: registered taskstats version 1
Mar 17 17:44:20.932246 kernel: Loading compiled-in X.509 certificates
Mar 17 17:44:20.932262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0'
Mar 17 17:44:20.932273 kernel: Key type .fscrypt registered
Mar 17 17:44:20.932284 kernel: Key type fscrypt-provisioning registered
Mar 17 17:44:20.932295 kernel: ima: No TPM chip found, activating TPM-bypass!
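Each entry above carries a microsecond timestamp, so slow boot steps show up as large gaps between consecutive entries. A small, hypothetical analysis sketch; the saved filename and the 2025 year (taken from the kernel banner) are assumptions:

```python
from datetime import datetime

def parse_ts(line: str) -> datetime:
    """Parse the 'Mar 17 17:44:20.919487' prefix of a journal line.
    The prefix has no year; 2025 is assumed from the kernel version banner."""
    return datetime.strptime(line[:22] + " 2025", "%b %d %H:%M:%S.%f %Y")

def slowest_gaps(log_path: str, top: int = 5):
    """Return the largest gaps between consecutive journal entries."""
    with open(log_path) as fh:
        stamps = [(parse_ts(l), l.rstrip()) for l in fh if l.startswith("Mar 17")]
    gaps = [
        ((b[0] - a[0]).total_seconds(), b[1])
        for a, b in zip(stamps, stamps[1:])
    ]
    return sorted(gaps, reverse=True)[:top]

for delta, line in slowest_gaps("boot.log"):  # hypothetical saved journal
    print(f"{delta:8.3f}s before: {line[:80]}")
```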
Mar 17 17:44:20.932306 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:44:20.932317 kernel: ima: No architecture policies found
Mar 17 17:44:20.932328 kernel: clk: Disabling unused clocks
Mar 17 17:44:20.932340 kernel: Freeing unused kernel image (initmem) memory: 42992K
Mar 17 17:44:20.932351 kernel: Write protecting the kernel read-only data: 36864k
Mar 17 17:44:20.932367 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Mar 17 17:44:20.932378 kernel: Run /init as init process
Mar 17 17:44:20.932391 kernel: with arguments:
Mar 17 17:44:20.932402 kernel: /init
Mar 17 17:44:20.932414 kernel: with environment:
Mar 17 17:44:20.932425 kernel: HOME=/
Mar 17 17:44:20.932435 kernel: TERM=linux
Mar 17 17:44:20.932446 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:44:20.932460 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:44:20.932495 systemd[1]: Detected virtualization kvm.
Mar 17 17:44:20.932507 systemd[1]: Detected architecture x86-64.
Mar 17 17:44:20.932518 systemd[1]: Running in initrd.
Mar 17 17:44:20.932530 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:44:20.932541 systemd[1]: Hostname set to <localhost>.
Mar 17 17:44:20.932553 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:44:20.932565 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:44:20.932580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:44:20.932592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:44:20.932604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:44:20.932616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:44:20.932627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:44:20.932639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:44:20.932653 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:44:20.932668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:44:20.932680 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:44:20.932692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:44:20.932703 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:44:20.932715 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:44:20.932740 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:44:20.932752 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:44:20.932764 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:44:20.932780 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:44:20.932792 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:44:20.932803 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:44:20.932815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:44:20.932827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:44:20.932838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:44:20.932849 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:44:20.932861 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:44:20.932873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:44:20.932888 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:44:20.932900 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:44:20.932911 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:44:20.932923 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:44:20.932934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:44:20.932946 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:44:20.932958 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:44:20.932969 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:44:20.933009 systemd-journald[192]: Collecting audit messages is disabled.
Mar 17 17:44:20.933040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:44:20.933051 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:44:20.933063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:44:20.933075 systemd-journald[192]: Journal started
Mar 17 17:44:20.933098 systemd-journald[192]: Runtime Journal (/run/log/journal/eb220a6cbfe14cebb0e267e2b2b17254) is 6.0M, max 48.3M, 42.2M free.
Mar 17 17:44:20.938291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:44:20.919232 systemd-modules-load[193]: Inserted module 'overlay'
Mar 17 17:44:20.944384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:44:20.944441 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:44:20.955768 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:44:20.958192 systemd-modules-load[193]: Inserted module 'br_netfilter'
Mar 17 17:44:20.959414 kernel: Bridge firewalling registered
Mar 17 17:44:20.966393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:44:20.968075 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:44:20.981968 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:44:20.984583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:44:20.987104 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:44:20.990047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:44:20.999827 dracut-cmdline[219]: dracut-dracut-053
Mar 17 17:44:21.001897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:44:21.004266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:44:21.006834 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:44:21.016953 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:44:21.051455 systemd-resolved[243]: Positive Trust Anchors:
Mar 17 17:44:21.051487 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:44:21.051529 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:44:21.054133 systemd-resolved[243]: Defaulting to hostname 'linux'.
Mar 17 17:44:21.055493 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:44:21.063212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:44:21.108772 kernel: SCSI subsystem initialized
Mar 17 17:44:21.117759 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:44:21.128763 kernel: iscsi: registered transport (tcp)
Mar 17 17:44:21.151748 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:44:21.151782 kernel: QLogic iSCSI HBA Driver
Mar 17 17:44:21.209124 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:44:21.224913 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:44:21.251931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:44:21.252028 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:44:21.252043 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:44:21.315801 kernel: raid6: avx2x4 gen() 28737 MB/s
Mar 17 17:44:21.332781 kernel: raid6: avx2x2 gen() 29250 MB/s
Mar 17 17:44:21.349856 kernel: raid6: avx2x1 gen() 25135 MB/s
Mar 17 17:44:21.349954 kernel: raid6: using algorithm avx2x2 gen() 29250 MB/s
Mar 17 17:44:21.367851 kernel: raid6: .... xor() 20001 MB/s, rmw enabled
Mar 17 17:44:21.367915 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:44:21.387764 kernel: xor: automatically using best checksumming function avx
Mar 17 17:44:21.544771 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:44:21.559037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:44:21.566975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:44:21.579774 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 17 17:44:21.584613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:44:21.595869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:44:21.610859 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Mar 17 17:44:21.650240 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:44:21.660864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:44:21.737884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:44:21.748904 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:44:21.762889 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:44:21.766279 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:44:21.769115 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:44:21.771439 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:44:21.775764 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:44:21.808994 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:44:21.809152 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:44:21.809168 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:44:21.809188 kernel: GPT:9289727 != 19775487
Mar 17 17:44:21.809199 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:44:21.809209 kernel: GPT:9289727 != 19775487
Mar 17 17:44:21.809219 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:44:21.809230 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:44:21.809241 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:44:21.809251 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:44:21.809262 kernel: libata version 3.00 loaded.
Mar 17 17:44:21.782102 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:44:21.802790 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
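The GPT warnings above (9289727 != 19775487) mean the backup GPT header is not at the device's last LBA, which is typical after a smaller disk image is written to a larger virtual disk; the kernel itself points at GNU Parted, and sgdisk -e can likewise relocate the backup structures. A small sketch of the arithmetic, with the numbers taken from the log:

```python
# Interpret the GPT warning above; illustrative only, outside the boot flow.
# The backup GPT header should sit at the device's last LBA; after the image
# is written to a larger disk, it is still where the image put it.
image_last_lba = 9289727   # where the backup header actually is
disk_last_lba = 19775487   # where it should be (19775488 sectors - 1)

if image_last_lba != disk_last_lba:
    grown_by = (disk_last_lba - image_last_lba) * 512  # 512-byte sectors per the log
    print(f"disk is {grown_by / 2**30:.2f} GiB larger than the written image;")
    print("relocate the backup header with e.g. 'sgdisk -e /dev/vda' or parted")
```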
Mar 17 17:44:21.816872 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:44:21.842961 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:44:21.842981 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:44:21.843145 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:44:21.843305 kernel: scsi host0: ahci
Mar 17 17:44:21.843480 kernel: scsi host1: ahci
Mar 17 17:44:21.843630 kernel: scsi host2: ahci
Mar 17 17:44:21.843797 kernel: scsi host3: ahci
Mar 17 17:44:21.843951 kernel: scsi host4: ahci
Mar 17 17:44:21.844103 kernel: scsi host5: ahci
Mar 17 17:44:21.844256 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (459)
Mar 17 17:44:21.844273 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (473)
Mar 17 17:44:21.844287 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 17 17:44:21.844297 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 17 17:44:21.844307 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 17 17:44:21.844318 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 17 17:44:21.844328 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 17 17:44:21.844338 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 17 17:44:21.822738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:44:21.822864 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:44:21.825204 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:44:21.829443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:44:21.829631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:44:21.831548 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:44:21.844307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:44:21.859165 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:44:21.864414 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:44:21.869175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:44:21.881323 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:44:21.888119 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:44:21.888623 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:44:21.905989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:44:21.908130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:44:21.916716 disk-uuid[557]: Primary Header is updated.
Mar 17 17:44:21.916716 disk-uuid[557]: Secondary Entries is updated.
Mar 17 17:44:21.916716 disk-uuid[557]: Secondary Header is updated.
Mar 17 17:44:21.920795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:44:21.924759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:44:21.928782 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:44:22.150779 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:44:22.150896 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:44:22.160142 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:44:22.160175 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:44:22.160186 kernel: ata3.00: applying bridge limits
Mar 17 17:44:22.161213 kernel: ata3.00: configured for UDMA/100
Mar 17 17:44:22.161747 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:44:22.165749 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:44:22.165781 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:44:22.166756 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:44:22.206772 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:44:22.219508 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:44:22.219531 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:44:22.938640 disk-uuid[559]: The operation has completed successfully.
Mar 17 17:44:22.939974 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:44:22.968664 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:44:22.968849 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:44:22.991880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:44:22.996627 sh[594]: Success
Mar 17 17:44:23.010043 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:44:23.043329 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:44:23.071446 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:44:23.073929 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:44:23.088831 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a
Mar 17 17:44:23.088870 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:44:23.088882 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:44:23.089851 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:44:23.091189 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:44:23.095582 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:44:23.097150 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:44:23.107870 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:44:23.109525 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:44:23.119753 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:44:23.119782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:44:23.119793 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:44:23.122787 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:44:23.132797 systemd[1]: mnt-oem.mount: Deactivated successfully.
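verity-setup.service above binds /dev/mapper/usr to the verity.usrhash root hash from the kernel command line, so every read from /usr is checked against a Merkle tree built at image-build time. Roughly the same mapping could be assembled by hand with veritysetup; the sketch below only shows the shape of that call, since Flatcar actually stores the hash tree inside the USR-A partition at an offset and the initrd supplies those extra parameters itself:

```python
import subprocess

# Illustrative only: an out-of-band analogue of verity-setup.service, using
# the root hash and PARTUUID from the kernel command line in this log. A real
# invocation against a Flatcar disk also needs --hash-offset/--data-blocks
# values derived from the image layout; those are omitted here.
root_hash = "d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0"
data_dev = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"

subprocess.run(
    ["veritysetup", "open", data_dev, "usr", data_dev, root_hash],
    check=True,  # fails if any data block no longer matches the hash tree
)
```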
Mar 17 17:44:23.134478 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:44:23.309717 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:44:23.318864 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:44:23.400252 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:44:23.413899 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:44:23.428245 ignition[724]: Ignition 2.20.0
Mar 17 17:44:23.428256 ignition[724]: Stage: fetch-offline
Mar 17 17:44:23.428295 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:44:23.428307 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:44:23.428430 ignition[724]: parsed url from cmdline: ""
Mar 17 17:44:23.428435 ignition[724]: no config URL provided
Mar 17 17:44:23.428440 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:44:23.428450 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:44:23.428485 ignition[724]: op(1): [started] loading QEMU firmware config module
Mar 17 17:44:23.428491 ignition[724]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:44:23.435903 ignition[724]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:44:23.438058 systemd-networkd[782]: lo: Link UP
Mar 17 17:44:23.438067 systemd-networkd[782]: lo: Gained carrier
Mar 17 17:44:23.439790 systemd-networkd[782]: Enumeration completed
Mar 17 17:44:23.440226 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:44:23.440230 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:44:23.441072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:44:23.441719 systemd-networkd[782]: eth0: Link UP
Mar 17 17:44:23.441733 systemd-networkd[782]: eth0: Gained carrier
Mar 17 17:44:23.441741 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:44:23.442572 systemd[1]: Reached target network.target - Network.
Mar 17 17:44:23.455800 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:44:23.491260 ignition[724]: parsing config with SHA512: 2759c79b29fe6fcab78ccc9b7c4530d866b77ea67aae4fce1b5eb7f3bb2e57479a6b988f5268f5d751d4128938cc976d61b3032e35f63e17df282cbb9b648346
Mar 17 17:44:23.495462 unknown[724]: fetched base config from "system"
Mar 17 17:44:23.495478 unknown[724]: fetched user config from "qemu"
Mar 17 17:44:23.496077 ignition[724]: fetch-offline: fetch-offline passed
Mar 17 17:44:23.496156 ignition[724]: Ignition finished successfully
Mar 17 17:44:23.501162 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:44:23.501926 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:44:23.527010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
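The fetch-offline stage above finds no config URL on the command line and no /usr/lib/ignition/user.ign, then falls back to fetching the user config over QEMU's fw_cfg interface. For reference, the smallest config Ignition would accept at the user.ign step looks like the following; the spec version shown is an assumption and should match the Ignition release in use:

```python
import json

# Hypothetical illustration: a minimal Ignition config for the
# "reading system config file /usr/lib/ignition/user.ign" step above.
# The spec version (3.3.0) is assumed, not taken from this log.
minimal_config = {"ignition": {"version": "3.3.0"}}

with open("user.ign", "w") as fh:
    json.dump(minimal_config, fh)

print(json.dumps(minimal_config))  # {"ignition": {"version": "3.3.0"}}
```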
Mar 17 17:44:23.553661 ignition[789]: Ignition 2.20.0 Mar 17 17:44:23.553676 ignition[789]: Stage: kargs Mar 17 17:44:23.553862 ignition[789]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:44:23.558077 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:44:23.553875 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:44:23.554713 ignition[789]: kargs: kargs passed Mar 17 17:44:23.554779 ignition[789]: Ignition finished successfully Mar 17 17:44:23.567930 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:44:23.594959 ignition[797]: Ignition 2.20.0 Mar 17 17:44:23.594973 ignition[797]: Stage: disks Mar 17 17:44:23.595136 ignition[797]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:44:23.595147 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:44:23.595992 ignition[797]: disks: disks passed Mar 17 17:44:23.596040 ignition[797]: Ignition finished successfully Mar 17 17:44:23.604337 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:44:23.606707 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:44:23.607225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:44:23.607600 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:44:23.608156 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:44:23.608524 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:44:23.631043 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:44:23.646172 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:44:23.652775 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:44:23.664813 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:44:23.754743 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 17:44:23.754807 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:44:23.756366 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:44:23.775840 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:44:23.777715 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:44:23.779182 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:44:23.784849 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Mar 17 17:44:23.784869 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:44:23.779243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:44:23.792209 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:44:23.792256 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:44:23.792310 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:44:23.779285 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:44:23.788472 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:44:23.793622 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:44:23.796943 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:44:23.862571 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:44:23.867659 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:44:23.872988 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:44:23.877903 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:44:23.969701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:44:23.987836 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:44:23.989209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:44:24.000752 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:44:24.016850 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:44:24.021819 ignition[928]: INFO : Ignition 2.20.0 Mar 17 17:44:24.021819 ignition[928]: INFO : Stage: mount Mar 17 17:44:24.023421 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:44:24.023421 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:44:24.023421 ignition[928]: INFO : mount: mount passed Mar 17 17:44:24.023421 ignition[928]: INFO : Ignition finished successfully Mar 17 17:44:24.028738 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:44:24.035954 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:44:24.088241 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:44:24.096996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:44:24.105233 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Mar 17 17:44:24.105262 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:44:24.105274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:44:24.106740 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:44:24.109747 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:44:24.110525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:44:24.135590 ignition[959]: INFO : Ignition 2.20.0 Mar 17 17:44:24.135590 ignition[959]: INFO : Stage: files Mar 17 17:44:24.137372 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:44:24.137372 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:44:24.137372 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:44:24.141065 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:44:24.141065 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:44:24.141065 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:44:24.141065 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:44:24.146729 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:44:24.146729 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:44:24.146729 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 17:44:24.141339 unknown[959]: wrote ssh authorized keys file for user: core Mar 17 17:44:24.233110 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 17:44:24.356276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:44:24.358292 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:44:24.358292 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 17:44:24.637907 systemd-networkd[782]: eth0: Gained IPv6LL Mar 17 17:44:24.721614 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 17:44:24.840142 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:44:24.840142 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:44:24.844004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 17:44:25.128820 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 17:44:25.586178 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:44:25.586178 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 17 17:44:25.590118 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:44:25.617224 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:44:25.622372 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:44:25.624305 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:44:25.624305 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 17 17:44:25.627379 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:44:25.629080 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:44:25.631166 
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:44:25.633004 ignition[959]: INFO : files: files passed Mar 17 17:44:25.633864 ignition[959]: INFO : Ignition finished successfully Mar 17 17:44:25.638119 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:44:25.652016 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:44:25.654574 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:44:25.656529 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:44:25.656680 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:44:25.671968 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:44:25.676155 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:44:25.676155 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:44:25.679765 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:44:25.682979 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:44:25.685594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:44:25.696876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:44:25.724024 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:44:25.724168 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:44:25.724889 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:44:25.725166 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:44:25.725544 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:44:25.726388 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:44:25.748622 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:44:25.762932 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:44:25.773092 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:44:25.773471 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:44:25.775835 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:44:25.777827 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:44:25.777963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:44:25.781005 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:44:25.782755 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:44:25.784778 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:44:25.786807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:44:25.788843 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:44:25.790990 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
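The files stage above fetched archives over HTTPS and wrote them under /sysroot before teardown began. For illustration, a minimal Ignition config (spec 3.x, which Ignition 2.20.0 accepts) that would request one of the files seen in the log; this is a sketch, not the actual config this machine booted with:

    # Sketch: a minimal Ignition config fetching the helm tarball
    # that the files stage wrote to /sysroot/opt above.
    cat > config.ign <<'EOF'
    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": {
              "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
            }
          }
        ]
      }
    }
    EOF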
Mar 17 17:44:25.793085 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:44:25.795367 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:44:25.797382 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:44:25.799552 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:44:25.801309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:44:25.801476 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:44:25.803564 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:44:25.805185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:44:25.807232 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:44:25.807379 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:44:25.809455 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:44:25.809589 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:44:25.811740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:44:25.811873 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:44:25.813878 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:44:25.815616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:44:25.819807 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:44:25.821248 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:44:25.823217 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:44:25.824985 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:44:25.825106 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:44:25.826996 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:44:25.827109 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:44:25.829435 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:44:25.829572 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:44:25.831484 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:44:25.831617 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:44:25.842884 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:44:25.844602 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:44:25.845750 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:44:25.845915 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:44:25.848009 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:44:25.848212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:44:25.854269 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:44:25.854411 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 17 17:44:25.858527 ignition[1014]: INFO : Ignition 2.20.0 Mar 17 17:44:25.858527 ignition[1014]: INFO : Stage: umount Mar 17 17:44:25.858527 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:44:25.858527 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:44:25.863063 ignition[1014]: INFO : umount: umount passed Mar 17 17:44:25.863063 ignition[1014]: INFO : Ignition finished successfully Mar 17 17:44:25.865017 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:44:25.866033 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:44:25.868660 systemd[1]: Stopped target network.target - Network. Mar 17 17:44:25.870452 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:44:25.871434 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:44:25.873549 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:44:25.873607 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:44:25.876772 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:44:25.876833 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:44:25.879707 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:44:25.879785 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:44:25.883137 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:44:25.885631 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:44:25.888796 systemd-networkd[782]: eth0: DHCPv6 lease lost Mar 17 17:44:25.890465 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:44:25.892332 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:44:25.893649 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:44:25.897945 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:44:25.898069 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:44:25.910868 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:44:25.916213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:44:25.916289 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:44:25.918788 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:44:25.921328 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:44:25.921473 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:44:25.926687 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:44:25.926791 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:44:25.928290 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:44:25.928343 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:44:25.930249 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:44:25.930301 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:44:25.933988 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:44:25.934117 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:44:25.935874 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Mar 17 17:44:25.936048 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:44:25.939154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:44:25.939216 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:44:25.941267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:44:25.941311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:44:25.943191 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:44:25.943243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:44:25.945347 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:44:25.945409 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:44:25.947571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:44:25.947625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:44:25.959973 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:44:25.962290 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:44:25.963541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:44:25.966163 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:44:25.967304 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:44:25.970002 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:44:25.970063 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:44:25.979499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:44:25.980516 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:44:25.983232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:44:25.984380 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:44:26.058770 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:44:26.058927 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:44:26.062232 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:44:26.064363 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:44:26.064426 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:44:26.080005 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:44:26.087357 systemd[1]: Switching root. Mar 17 17:44:26.128899 systemd-journald[192]: Journal stopped Mar 17 17:44:27.434863 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
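"Switching root" followed by the journal receiving SIGTERM from PID 1 marks the hand-off from the initrd to the real root filesystem. The operation initrd-switch-root.service performs corresponds to systemd's switch-root verb:

    # Sketch: pivot from the initramfs into the prepared /sysroot.
    systemctl switch-root /sysroot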
Mar 17 17:44:27.434947 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:44:27.434962 kernel: SELinux: policy capability open_perms=1 Mar 17 17:44:27.434980 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:44:27.434992 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:44:27.435007 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:44:27.435019 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:44:27.435030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:44:27.435042 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:44:27.435054 kernel: audit: type=1403 audit(1742233466.682:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:44:27.435067 systemd[1]: Successfully loaded SELinux policy in 42.234ms. Mar 17 17:44:27.435099 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.127ms. Mar 17 17:44:27.435113 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:44:27.435126 systemd[1]: Detected virtualization kvm. Mar 17 17:44:27.435142 systemd[1]: Detected architecture x86-64. Mar 17 17:44:27.435157 systemd[1]: Detected first boot. Mar 17 17:44:27.435169 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:44:27.435182 zram_generator::config[1058]: No configuration found. Mar 17 17:44:27.435198 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:44:27.435210 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:44:27.435223 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:44:27.435239 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:44:27.435255 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:44:27.435268 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:44:27.435280 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:44:27.435292 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:44:27.435305 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:44:27.435318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:44:27.435337 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:44:27.435350 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:44:27.435365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:44:27.435378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:44:27.435390 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:44:27.435403 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:44:27.435415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 17 17:44:27.435428 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:44:27.435440 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:44:27.435453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:44:27.435466 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:44:27.435481 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:44:27.435493 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:44:27.435506 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:44:27.435520 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:44:27.435532 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:44:27.435545 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:44:27.435558 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:44:27.435571 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:44:27.435586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:44:27.435609 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:44:27.435625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:44:27.435640 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:44:27.435652 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:44:27.435665 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:44:27.435677 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:44:27.435689 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:44:27.435702 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:27.435718 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:44:27.437104 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:44:27.437121 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:44:27.437134 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:44:27.437146 systemd[1]: Reached target machines.target - Containers. Mar 17 17:44:27.437159 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:44:27.437171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:44:27.437188 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:44:27.437201 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:44:27.437217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:44:27.437230 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:44:27.437242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:44:27.437255 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 17 17:44:27.437267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:44:27.437285 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:44:27.437297 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:44:27.437312 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:44:27.437651 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:44:27.437666 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:44:27.437678 kernel: loop: module loaded Mar 17 17:44:27.437691 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:44:27.437702 kernel: fuse: init (API version 7.39) Mar 17 17:44:27.437715 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:44:27.437740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:44:27.437753 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:44:27.437765 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:44:27.437782 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:44:27.437794 systemd[1]: Stopped verity-setup.service. Mar 17 17:44:27.437808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:27.437820 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:44:27.437832 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:44:27.437847 kernel: ACPI: bus type drm_connector registered Mar 17 17:44:27.437882 systemd-journald[1128]: Collecting audit messages is disabled. Mar 17 17:44:27.437909 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:44:27.437921 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:44:27.437933 systemd-journald[1128]: Journal started Mar 17 17:44:27.437955 systemd-journald[1128]: Runtime Journal (/run/log/journal/eb220a6cbfe14cebb0e267e2b2b17254) is 6.0M, max 48.3M, 42.2M free. Mar 17 17:44:27.207977 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:44:27.227977 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:44:27.228497 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:44:27.440821 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:44:27.442605 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:44:27.444059 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:44:27.445408 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:44:27.447239 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:44:27.447445 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:44:27.449063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:44:27.449269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:44:27.450764 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:44:27.451202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 17 17:44:27.452636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:44:27.452840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:44:27.454459 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:44:27.454644 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:44:27.456160 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:44:27.457691 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:44:27.457891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:44:27.459333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:44:27.460895 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:44:27.462522 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:44:27.479038 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:44:27.492877 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:44:27.495511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:44:27.496658 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:44:27.496692 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:44:27.498748 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:44:27.501937 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:44:27.504509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:44:27.505760 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:44:27.508651 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:44:27.511025 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:44:27.512430 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:44:27.515459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:44:27.516606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:44:27.520595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:44:27.524981 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:44:27.529830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:44:27.539558 systemd-journald[1128]: Time spent on flushing to /var/log/journal/eb220a6cbfe14cebb0e267e2b2b17254 is 17.790ms for 1044 entries. Mar 17 17:44:27.539558 systemd-journald[1128]: System Journal (/var/log/journal/eb220a6cbfe14cebb0e267e2b2b17254) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:44:27.625259 systemd-journald[1128]: Received client request to flush runtime journal. 
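The "client request to flush runtime journal" record below is the request systemd-journal-flush.service sends so that logs buffered in /run/log/journal move to persistent storage under /var/log/journal; the same request can be issued by hand:

    # Sketch: flush the runtime journal to persistent storage.
    journalctl --flush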
Mar 17 17:44:27.625326 kernel: loop0: detected capacity change from 0 to 140992 Mar 17 17:44:27.533649 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:44:27.536945 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:44:27.539185 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:44:27.541801 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:44:27.549185 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:44:27.560279 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:44:27.617558 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:44:27.620433 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:44:27.622487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:44:27.632456 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:44:27.640710 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 17 17:44:27.640748 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 17 17:44:27.644839 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:44:27.647455 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:44:27.648228 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:44:27.650130 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:44:27.654770 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:44:27.659960 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:44:27.682748 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 17:44:27.692266 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:44:27.719220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:44:27.739151 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Mar 17 17:44:27.739174 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Mar 17 17:44:27.744852 kernel: loop2: detected capacity change from 0 to 138184 Mar 17 17:44:27.748143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:44:27.776759 kernel: loop3: detected capacity change from 0 to 140992 Mar 17 17:44:27.790761 kernel: loop4: detected capacity change from 0 to 210664 Mar 17 17:44:27.801756 kernel: loop5: detected capacity change from 0 to 138184 Mar 17 17:44:27.812925 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:44:27.813602 (sd-merge)[1200]: Merged extensions into '/usr'. Mar 17 17:44:27.822212 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:44:27.822235 systemd[1]: Reloading... Mar 17 17:44:27.917421 zram_generator::config[1226]: No configuration found. Mar 17 17:44:28.024980 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
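The (sd-merge) records above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what triggers the unit reload that follows. The merge state can be inspected and redone with the same tool:

    # Sketch: inspect merged system extensions and re-merge them.
    systemd-sysext status
    systemd-sysext refresh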
Mar 17 17:44:28.095997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:44:28.146187 systemd[1]: Reloading finished in 323 ms. Mar 17 17:44:28.231313 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:44:28.233065 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:44:28.249894 systemd[1]: Starting ensure-sysext.service... Mar 17 17:44:28.258150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:44:28.262894 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:44:28.262915 systemd[1]: Reloading... Mar 17 17:44:28.291708 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:44:28.292097 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:44:28.293140 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:44:28.293447 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 17 17:44:28.293522 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 17 17:44:28.301487 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:44:28.301579 systemd-tmpfiles[1264]: Skipping /boot Mar 17 17:44:28.320651 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:44:28.320820 systemd-tmpfiles[1264]: Skipping /boot Mar 17 17:44:28.329756 zram_generator::config[1293]: No configuration found. Mar 17 17:44:28.455147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:44:28.504378 systemd[1]: Reloading finished in 241 ms. Mar 17 17:44:28.522379 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:44:28.540525 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:44:28.551581 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:44:28.554252 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:44:28.556900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:44:28.563438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:44:28.567958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:44:28.570521 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:44:28.573661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:28.574208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:44:28.577940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:44:28.580451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 17 17:44:28.584206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:44:28.586137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:44:28.586242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:28.587120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:44:28.587708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:44:28.595188 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:44:28.597342 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:44:28.597543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:44:28.609982 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:44:28.612224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:44:28.612459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:44:28.614538 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Mar 17 17:44:28.618431 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:28.618835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:44:28.624015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:44:28.627169 augenrules[1363]: No rules Mar 17 17:44:28.628699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:44:28.632047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:44:28.634571 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:44:28.636024 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:44:28.638959 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:44:28.640993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:28.642324 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:44:28.642574 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:44:28.644350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:44:28.644546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:44:28.646053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:44:28.648029 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:44:28.649748 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:44:28.649953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:44:28.651660 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:44:28.651853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:44:28.660863 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Mar 17 17:44:28.679028 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:28.686885 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:44:28.688093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:44:28.689870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:44:28.692893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:44:28.697148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:44:28.703934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:44:28.705674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:44:28.708874 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:44:28.710792 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:44:28.710821 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:44:28.711402 systemd[1]: Finished ensure-sysext.service. Mar 17 17:44:28.717081 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:44:28.757464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1378) Mar 17 17:44:28.750847 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:44:28.755967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:44:28.756179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:44:28.757634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:44:28.762863 augenrules[1397]: /sbin/augenrules: No change Mar 17 17:44:28.777813 augenrules[1429]: No rules Mar 17 17:44:28.775595 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:44:28.775891 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:44:28.777594 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:44:28.778249 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:44:28.783371 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:44:28.783608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:44:28.787320 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:44:28.789390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:44:28.790004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:44:28.794595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:44:28.827455 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 17 17:44:28.832850 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 17:44:28.834903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:44:28.844760 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:44:28.856847 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 17:44:28.859029 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:44:28.859209 systemd-networkd[1409]: lo: Link UP Mar 17 17:44:28.859214 systemd-networkd[1409]: lo: Gained carrier Mar 17 17:44:28.861031 systemd-networkd[1409]: Enumeration completed Mar 17 17:44:28.861138 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:44:28.861474 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:44:28.861478 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:44:28.862450 systemd-networkd[1409]: eth0: Link UP Mar 17 17:44:28.862467 systemd-networkd[1409]: eth0: Gained carrier Mar 17 17:44:28.862482 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:44:28.867923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:44:28.879824 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:44:28.917009 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 17 17:44:28.923959 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 17:44:28.924138 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 17:44:28.924336 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 17:44:28.945007 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:44:28.946545 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:44:30.296662 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:44:30.297305 systemd-timesyncd[1416]: Initial clock synchronization to Mon 2025-03-17 17:44:30.296198 UTC. Mar 17 17:44:30.301691 systemd-resolved[1332]: Positive Trust Anchors: Mar 17 17:44:30.301709 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:44:30.301743 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:44:30.309698 systemd-resolved[1332]: Defaulting to hostname 'linux'. Mar 17 17:44:30.313199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:44:30.316514 systemd[1]: Reached target network.target - Network. Mar 17 17:44:30.317715 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
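As in the initrd, eth0 is matched by the catch-all zz-default.network unit and configured over DHCP. A unit of that kind is sketched below; the content is illustrative only, the file actually shipped under /usr/lib/systemd/network may differ in detail, and a file of the same name placed in /etc/systemd/network would override it:

    # Sketch: a catch-all DHCP .network unit (illustrative content,
    # not the verbatim file shipped in the OS image).
    cat > /etc/systemd/network/zz-default.network <<'EOF'
    [Match]
    Name=*

    [Network]
    DHCP=yes
    EOF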
Mar 17 17:44:30.333664 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:44:30.335904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:44:30.341998 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:44:30.342478 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:44:30.346960 kernel: kvm_amd: TSC scaling supported
Mar 17 17:44:30.346993 kernel: kvm_amd: Nested Virtualization enabled
Mar 17 17:44:30.347006 kernel: kvm_amd: Nested Paging enabled
Mar 17 17:44:30.347018 kernel: kvm_amd: LBR virtualization supported
Mar 17 17:44:30.348118 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 17 17:44:30.348145 kernel: kvm_amd: Virtual GIF supported
Mar 17 17:44:30.349117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:44:30.374869 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:44:30.406498 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:44:30.417783 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:44:30.419390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:44:30.430059 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:44:30.468812 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:44:30.470571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:44:30.471789 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:44:30.473021 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:44:30.474498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:44:30.476104 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:44:30.477486 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:44:30.478892 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:44:30.480295 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:44:30.480321 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:44:30.481346 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:44:30.483306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:44:30.486876 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:44:30.498566 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:44:30.501030 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:44:30.502723 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:44:30.504024 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:44:30.505109 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:44:30.506208 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:44:30.506236 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
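docker.socket and sshd.socket above are socket units: systemd holds the listening socket itself and starts the backing service on the first connection, which is why sshd.socket can be ready long before any sshd process exists. A quick way to inspect that state on a running host:

    systemctl list-sockets    # listening sockets and the services they activate
    systemctl list-timers     # schedules for logrotate.timer, mdadm.timer, systemd-tmpfiles-clean.timer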
Mar 17 17:44:30.507307 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:44:30.509524 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:44:30.512573 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:44:30.514376 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:44:30.518901 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:44:30.520142 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:44:30.522871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:44:30.523723 jq[1471]: false
Mar 17 17:44:30.529785 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:44:30.532793 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:44:30.536826 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:44:30.541159 extend-filesystems[1472]: Found loop3
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found loop4
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found loop5
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found sr0
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda1
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda2
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda3
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found usr
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda4
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda6
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda7
Mar 17 17:44:30.543478 extend-filesystems[1472]: Found vda9
Mar 17 17:44:30.543478 extend-filesystems[1472]: Checking size of /dev/vda9
Mar 17 17:44:30.542249 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:44:30.542991 dbus-daemon[1470]: [system] SELinux support is enabled
Mar 17 17:44:30.544441 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:44:30.544876 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:44:30.546778 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:44:30.552816 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:44:30.553796 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:44:30.557766 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:44:30.561411 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:44:30.562493 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:44:30.566656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:44:30.566945 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:44:30.568371 extend-filesystems[1472]: Resized partition /dev/vda9
Mar 17 17:44:30.572967 jq[1483]: true
Mar 17 17:44:30.582893 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:44:30.589986 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 17:44:30.590880 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:44:30.592292 update_engine[1481]: I20250317 17:44:30.591775 1481 main.cc:92] Flatcar Update Engine starting
Mar 17 17:44:30.591737 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:44:30.593421 jq[1499]: true
Mar 17 17:44:30.607642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1388)
Mar 17 17:44:30.608669 update_engine[1481]: I20250317 17:44:30.608462 1481 update_check_scheduler.cc:74] Next update check in 2m31s
Mar 17 17:44:30.619855 tar[1486]: linux-amd64/helm
Mar 17 17:44:30.627632 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:44:30.628118 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:44:30.660969 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:44:30.661010 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:44:30.662401 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:44:30.662425 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:44:30.665706 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 17:44:30.672770 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:44:30.768674 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:44:30.920787 systemd-logind[1479]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:44:30.920810 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:44:30.921584 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:44:30.921584 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:44:30.921584 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 17:44:30.932389 extend-filesystems[1472]: Resized filesystem in /dev/vda9
Mar 17 17:44:30.923978 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:44:30.935139 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:44:30.935259 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:44:30.924344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:44:30.924586 systemd-logind[1479]: New seat seat0.
Mar 17 17:44:30.933532 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:44:30.938024 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:44:30.942959 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
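extend-filesystems grew the root partition and resize2fs then expanded the mounted ext4 filesystem online, from 553472 to 1864699 4k blocks. A sketch of the equivalent manual steps (growpart comes from cloud-utils; the device names simply mirror the log):

    growpart /dev/vda 9    # grow partition 9 to fill the remaining disk
    resize2fs /dev/vda9    # ext4 can be grown online while mounted read-write
    df -h /                # confirm the new root filesystem size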
Mar 17 17:44:30.950869 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:44:30.966088 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:44:30.976520 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:44:30.976827 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:44:30.983916 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:44:31.005467 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:44:31.013913 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:44:31.018574 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:44:31.019885 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:44:31.177646 containerd[1501]: time="2025-03-17T17:44:31.177442359Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:44:31.202963 containerd[1501]: time="2025-03-17T17:44:31.202883228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205177 containerd[1501]: time="2025-03-17T17:44:31.205122639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205177 containerd[1501]: time="2025-03-17T17:44:31.205154589Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:44:31.205177 containerd[1501]: time="2025-03-17T17:44:31.205178794Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:44:31.205446 containerd[1501]: time="2025-03-17T17:44:31.205417121Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:44:31.205446 containerd[1501]: time="2025-03-17T17:44:31.205442599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205549 containerd[1501]: time="2025-03-17T17:44:31.205523922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205549 containerd[1501]: time="2025-03-17T17:44:31.205541805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205810 containerd[1501]: time="2025-03-17T17:44:31.205781465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205810 containerd[1501]: time="2025-03-17T17:44:31.205802394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205862 containerd[1501]: time="2025-03-17T17:44:31.205818785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205862 containerd[1501]: time="2025-03-17T17:44:31.205829756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.205967 containerd[1501]: time="2025-03-17T17:44:31.205949109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.206252 containerd[1501]: time="2025-03-17T17:44:31.206223695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:44:31.206379 containerd[1501]: time="2025-03-17T17:44:31.206354159Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:44:31.206379 containerd[1501]: time="2025-03-17T17:44:31.206372163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:44:31.206510 containerd[1501]: time="2025-03-17T17:44:31.206486187Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:44:31.206578 containerd[1501]: time="2025-03-17T17:44:31.206555056Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:44:31.213828 containerd[1501]: time="2025-03-17T17:44:31.213794177Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:44:31.213869 containerd[1501]: time="2025-03-17T17:44:31.213842829Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:44:31.213869 containerd[1501]: time="2025-03-17T17:44:31.213859921Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:44:31.213907 containerd[1501]: time="2025-03-17T17:44:31.213875931Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:44:31.213907 containerd[1501]: time="2025-03-17T17:44:31.213891460Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:44:31.214069 containerd[1501]: time="2025-03-17T17:44:31.214039768Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214356753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214530178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214548493Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214564623Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214707120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214741424Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214757515Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214795977Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214845830Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.214981345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.215005430Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.215024856Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.215060243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215558 containerd[1501]: time="2025-03-17T17:44:31.215084869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215102973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215123241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215175218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215198983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215219582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215239389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215260128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215283341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215302056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215320391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215339446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215356278Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215489087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215649929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.215849 containerd[1501]: time="2025-03-17T17:44:31.215675296Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:44:31.217270 containerd[1501]: time="2025-03-17T17:44:31.217246795Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:44:31.217454 containerd[1501]: time="2025-03-17T17:44:31.217431712Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:44:31.217525 containerd[1501]: time="2025-03-17T17:44:31.217509367Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:44:31.217602 containerd[1501]: time="2025-03-17T17:44:31.217565483Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:44:31.217602 containerd[1501]: time="2025-03-17T17:44:31.217583156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.217602 containerd[1501]: time="2025-03-17T17:44:31.217614054Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:44:31.217817 containerd[1501]: time="2025-03-17T17:44:31.217649911Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:44:31.217817 containerd[1501]: time="2025-03-17T17:44:31.217661903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:44:31.217995 containerd[1501]: time="2025-03-17T17:44:31.217950805Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:44:31.217995 containerd[1501]: time="2025-03-17T17:44:31.218000148Z" level=info msg="Connect containerd service"
Mar 17 17:44:31.218285 containerd[1501]: time="2025-03-17T17:44:31.218043840Z" level=info msg="using legacy CRI server"
Mar 17 17:44:31.218285 containerd[1501]: time="2025-03-17T17:44:31.218051575Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:44:31.218285 containerd[1501]: time="2025-03-17T17:44:31.218186798Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:44:31.218894 containerd[1501]: time="2025-03-17T17:44:31.218861554Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
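The "Start cri plugin" dump above is containerd's effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and CNI config expected in /etc/cni/net.d, which is still empty, hence the level=error about loading CNI. A hedged sketch of the config.toml fragment that would produce those runc settings (an assumed fragment, not this node's actual file):

    cat <<'EOF' >> /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    EOF
    systemctl restart containerd    # containerd re-reads config.toml only on restart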
Mar 17 17:44:31.219194 containerd[1501]: time="2025-03-17T17:44:31.219045629Z" level=info msg="Start subscribing containerd event"
Mar 17 17:44:31.219194 containerd[1501]: time="2025-03-17T17:44:31.219102336Z" level=info msg="Start recovering state"
Mar 17 17:44:31.219284 containerd[1501]: time="2025-03-17T17:44:31.219258559Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 17:44:31.219343 containerd[1501]: time="2025-03-17T17:44:31.219326446Z" level=info msg="Start event monitor"
Mar 17 17:44:31.219450 containerd[1501]: time="2025-03-17T17:44:31.219330855Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 17:44:31.219518 containerd[1501]: time="2025-03-17T17:44:31.219404773Z" level=info msg="Start snapshots syncer"
Mar 17 17:44:31.219585 containerd[1501]: time="2025-03-17T17:44:31.219572408Z" level=info msg="Start cni network conf syncer for default"
Mar 17 17:44:31.219706 containerd[1501]: time="2025-03-17T17:44:31.219671123Z" level=info msg="Start streaming server"
Mar 17 17:44:31.220055 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 17:44:31.220378 containerd[1501]: time="2025-03-17T17:44:31.220322856Z" level=info msg="containerd successfully booted in 0.044886s"
Mar 17 17:44:31.324056 tar[1486]: linux-amd64/LICENSE
Mar 17 17:44:31.324189 tar[1486]: linux-amd64/README.md
Mar 17 17:44:31.341268 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:44:32.062813 systemd-networkd[1409]: eth0: Gained IPv6LL
Mar 17 17:44:32.066444 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:44:32.068492 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:44:32.084888 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 17 17:44:32.087508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:44:32.089782 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:44:32.110549 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 17 17:44:32.111001 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 17 17:44:32.112878 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:44:32.115677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:44:33.320465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:44:33.322339 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:44:33.323742 systemd[1]: Startup finished in 1.226s (kernel) + 5.965s (initrd) + 5.337s (userspace) = 12.530s.
Mar 17 17:44:33.327489 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:44:34.141021 kubelet[1583]: E0317 17:44:34.140911 1583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:44:34.145452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:44:34.145685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:44:34.146025 systemd[1]: kubelet.service: Consumed 1.934s CPU time.
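The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init/join, so the restart loop in the entries that follow is the expected state of a node that has not yet joined a cluster. A hedged sketch of the join step and the kind of file it drops (generic placeholders, not this node's eventual values):

    kubeadm join <control-plane>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>
    # kubeadm writes /var/lib/kubelet/config.yaml, roughly:
    #   apiVersion: kubelet.config.k8s.io/v1beta1
    #   kind: KubeletConfiguration
    #   cgroupDriver: systemd    # matches SystemdCgroup = true in containerd above
    systemctl status kubelet     # should now stay active instead of crash-looping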
Mar 17 17:44:36.953250 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:44:36.954796 systemd[1]: Started sshd@0-10.0.0.87:22-10.0.0.1:50016.service - OpenSSH per-connection server daemon (10.0.0.1:50016).
Mar 17 17:44:37.163507 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 50016 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:37.165851 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:37.174747 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:44:37.184905 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:44:37.187070 systemd-logind[1479]: New session 1 of user core.
Mar 17 17:44:37.198777 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:44:37.202071 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:44:37.212277 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:44:37.336050 systemd[1602]: Queued start job for default target default.target.
Mar 17 17:44:37.350036 systemd[1602]: Created slice app.slice - User Application Slice.
Mar 17 17:44:37.350084 systemd[1602]: Reached target paths.target - Paths.
Mar 17 17:44:37.350116 systemd[1602]: Reached target timers.target - Timers.
Mar 17 17:44:37.352826 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:44:37.370416 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:44:37.370642 systemd[1602]: Reached target sockets.target - Sockets.
Mar 17 17:44:37.370667 systemd[1602]: Reached target basic.target - Basic System.
Mar 17 17:44:37.370734 systemd[1602]: Reached target default.target - Main User Target.
Mar 17 17:44:37.370789 systemd[1602]: Startup finished in 151ms.
Mar 17 17:44:37.371691 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:44:37.392921 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:44:37.454717 systemd[1]: Started sshd@1-10.0.0.87:22-10.0.0.1:50026.service - OpenSSH per-connection server daemon (10.0.0.1:50026).
Mar 17 17:44:37.505443 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 50026 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:37.507526 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:37.511972 systemd-logind[1479]: New session 2 of user core.
Mar 17 17:44:37.532003 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:44:37.587340 sshd[1615]: Connection closed by 10.0.0.1 port 50026
Mar 17 17:44:37.587844 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:37.599448 systemd[1]: sshd@1-10.0.0.87:22-10.0.0.1:50026.service: Deactivated successfully.
Mar 17 17:44:37.601498 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:44:37.603178 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:44:37.613868 systemd[1]: Started sshd@2-10.0.0.87:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028).
Mar 17 17:44:37.614890 systemd-logind[1479]: Removed session 2.
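Each login above is public-key authentication; SHA256:j201F9... is the fingerprint of the client's key, and every connection gets its own sshd@<n>-<local>:22-<peer>:<port>.service instance spawned from sshd.socket. A sketch of how to check which local key carries that fingerprint (the default key path is an assumption):

    ssh-keygen -lf ~/.ssh/id_rsa.pub    # prints "<bits> SHA256:<fingerprint> <comment> (RSA)"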
Mar 17 17:44:37.649681 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:37.651251 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:37.655662 systemd-logind[1479]: New session 3 of user core.
Mar 17 17:44:37.669741 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:44:37.718752 sshd[1622]: Connection closed by 10.0.0.1 port 50028
Mar 17 17:44:37.719200 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:37.737290 systemd[1]: sshd@2-10.0.0.87:22-10.0.0.1:50028.service: Deactivated successfully.
Mar 17 17:44:37.739936 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:44:37.741978 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:44:37.752947 systemd[1]: Started sshd@3-10.0.0.87:22-10.0.0.1:50042.service - OpenSSH per-connection server daemon (10.0.0.1:50042).
Mar 17 17:44:37.754402 systemd-logind[1479]: Removed session 3.
Mar 17 17:44:37.789020 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 50042 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:37.791570 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:37.795612 systemd-logind[1479]: New session 4 of user core.
Mar 17 17:44:37.803809 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:44:37.858335 sshd[1629]: Connection closed by 10.0.0.1 port 50042
Mar 17 17:44:37.858897 sshd-session[1627]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:37.866501 systemd[1]: sshd@3-10.0.0.87:22-10.0.0.1:50042.service: Deactivated successfully.
Mar 17 17:44:37.868518 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:44:37.870280 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:44:37.871657 systemd[1]: Started sshd@4-10.0.0.87:22-10.0.0.1:50052.service - OpenSSH per-connection server daemon (10.0.0.1:50052).
Mar 17 17:44:37.872483 systemd-logind[1479]: Removed session 4.
Mar 17 17:44:37.912727 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 50052 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:37.914271 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:37.919060 systemd-logind[1479]: New session 5 of user core.
Mar 17 17:44:37.928743 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:44:37.988774 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:44:37.989126 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:44:38.002959 sudo[1637]: pam_unix(sudo:session): session closed for user root
Mar 17 17:44:38.004717 sshd[1636]: Connection closed by 10.0.0.1 port 50052
Mar 17 17:44:38.005317 sshd-session[1634]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:38.014587 systemd[1]: sshd@4-10.0.0.87:22-10.0.0.1:50052.service: Deactivated successfully.
Mar 17 17:44:38.016482 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:44:38.018364 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:44:38.019913 systemd[1]: Started sshd@5-10.0.0.87:22-10.0.0.1:50068.service - OpenSSH per-connection server daemon (10.0.0.1:50068).
Mar 17 17:44:38.020661 systemd-logind[1479]: Removed session 5.
Mar 17 17:44:38.061185 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 50068 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:38.062890 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:38.067030 systemd-logind[1479]: New session 6 of user core.
Mar 17 17:44:38.081745 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:44:38.136984 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:44:38.137337 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:44:38.141330 sudo[1646]: pam_unix(sudo:session): session closed for user root
Mar 17 17:44:38.148214 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:44:38.148563 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:44:38.170920 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:44:38.204099 augenrules[1668]: No rules
Mar 17 17:44:38.206122 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:44:38.206394 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:44:38.207724 sudo[1645]: pam_unix(sudo:session): session closed for user root
Mar 17 17:44:38.209312 sshd[1644]: Connection closed by 10.0.0.1 port 50068
Mar 17 17:44:38.209734 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:38.223488 systemd[1]: sshd@5-10.0.0.87:22-10.0.0.1:50068.service: Deactivated successfully.
Mar 17 17:44:38.225434 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:44:38.227171 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:44:38.232852 systemd[1]: Started sshd@6-10.0.0.87:22-10.0.0.1:50076.service - OpenSSH per-connection server daemon (10.0.0.1:50076).
Mar 17 17:44:38.233727 systemd-logind[1479]: Removed session 6.
Mar 17 17:44:38.268483 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 50076 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek
Mar 17 17:44:38.270182 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:38.273955 systemd-logind[1479]: New session 7 of user core.
Mar 17 17:44:38.283745 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:44:38.336563 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:44:38.336915 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:44:38.814875 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:44:38.815022 (dockerd)[1699]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:44:39.267663 dockerd[1699]: time="2025-03-17T17:44:39.267470289Z" level=info msg="Starting up"
Mar 17 17:44:39.442563 dockerd[1699]: time="2025-03-17T17:44:39.442507771Z" level=info msg="Loading containers: start."
Mar 17 17:44:39.654655 kernel: Initializing XFRM netlink socket
Mar 17 17:44:39.738169 systemd-networkd[1409]: docker0: Link UP
Mar 17 17:44:39.775217 dockerd[1699]: time="2025-03-17T17:44:39.775170290Z" level=info msg="Loading containers: done."
Mar 17 17:44:39.800780 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1258677522-merged.mount: Deactivated successfully.
Mar 17 17:44:39.801664 dockerd[1699]: time="2025-03-17T17:44:39.801610354Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:44:39.801735 dockerd[1699]: time="2025-03-17T17:44:39.801714028Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Mar 17 17:44:39.801864 dockerd[1699]: time="2025-03-17T17:44:39.801843391Z" level=info msg="Daemon has completed initialization"
Mar 17 17:44:39.839725 dockerd[1699]: time="2025-03-17T17:44:39.839642845Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:44:39.839923 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:44:40.918922 containerd[1501]: time="2025-03-17T17:44:40.918870032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 17:44:41.692988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338613999.mount: Deactivated successfully.
Mar 17 17:44:43.358440 containerd[1501]: time="2025-03-17T17:44:43.358352245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:43.359182 containerd[1501]: time="2025-03-17T17:44:43.359070342Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573"
Mar 17 17:44:43.360393 containerd[1501]: time="2025-03-17T17:44:43.360365542Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:43.364258 containerd[1501]: time="2025-03-17T17:44:43.364182393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:43.365475 containerd[1501]: time="2025-03-17T17:44:43.365431126Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.446507833s"
Mar 17 17:44:43.365533 containerd[1501]: time="2025-03-17T17:44:43.365479867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 17:44:43.397330 containerd[1501]: time="2025-03-17T17:44:43.397281409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:44:44.395965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:44:44.405789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:44:44.599733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:44:44.606236 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:44:44.882111 kubelet[1972]: E0317 17:44:44.881856 1972 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:44:44.890216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:44:44.890445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:44:48.063026 containerd[1501]: time="2025-03-17T17:44:48.062941270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:48.078985 containerd[1501]: time="2025-03-17T17:44:48.078901252Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772"
Mar 17 17:44:48.096776 containerd[1501]: time="2025-03-17T17:44:48.096679586Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:48.118305 containerd[1501]: time="2025-03-17T17:44:48.118242740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:48.119365 containerd[1501]: time="2025-03-17T17:44:48.119314440Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 4.721991012s"
Mar 17 17:44:48.119365 containerd[1501]: time="2025-03-17T17:44:48.119366217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 17:44:48.148590 containerd[1501]: time="2025-03-17T17:44:48.148536563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:44:51.378307 containerd[1501]: time="2025-03-17T17:44:51.378236240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:51.423947 containerd[1501]: time="2025-03-17T17:44:51.423867888Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309"
Mar 17 17:44:51.429466 containerd[1501]: time="2025-03-17T17:44:51.429329114Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:51.433764 containerd[1501]: time="2025-03-17T17:44:51.433684315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:51.434844 containerd[1501]: time="2025-03-17T17:44:51.434804416Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 3.286223089s"
Mar 17 17:44:51.434844 containerd[1501]: time="2025-03-17T17:44:51.434839502Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 17:44:51.462948 containerd[1501]: time="2025-03-17T17:44:51.462902380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:44:53.491162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450805420.mount: Deactivated successfully.
Mar 17 17:44:54.691551 containerd[1501]: time="2025-03-17T17:44:54.691452641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:54.694828 containerd[1501]: time="2025-03-17T17:44:54.694772238Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372"
Mar 17 17:44:54.730372 containerd[1501]: time="2025-03-17T17:44:54.730299891Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:54.765941 containerd[1501]: time="2025-03-17T17:44:54.765866606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:44:54.766653 containerd[1501]: time="2025-03-17T17:44:54.766560628Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 3.303607403s"
Mar 17 17:44:54.766749 containerd[1501]: time="2025-03-17T17:44:54.766663251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 17:44:54.790156 containerd[1501]: time="2025-03-17T17:44:54.790091554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:44:55.140668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:44:55.149868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:44:55.311428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:44:55.316086 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:44:55.451331 kubelet[2024]: E0317 17:44:55.451163 2024 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:44:55.455998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:44:55.456223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:44:59.504267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661241847.mount: Deactivated successfully. Mar 17 17:45:04.961924 containerd[1501]: time="2025-03-17T17:45:04.961824386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:04.983341 containerd[1501]: time="2025-03-17T17:45:04.983276197Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:45:04.996252 containerd[1501]: time="2025-03-17T17:45:04.996133748Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:05.031575 containerd[1501]: time="2025-03-17T17:45:05.031506081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:05.033042 containerd[1501]: time="2025-03-17T17:45:05.032963299Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 10.242814467s" Mar 17 17:45:05.033042 containerd[1501]: time="2025-03-17T17:45:05.033033754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:45:05.058901 containerd[1501]: time="2025-03-17T17:45:05.058856858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:45:05.706828 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:45:05.723982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:05.886898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:45:05.892717 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:45:06.384611 kubelet[2091]: E0317 17:45:06.384525 2091 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:45:06.389536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:45:06.389828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:45:07.331356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642664077.mount: Deactivated successfully. Mar 17 17:45:07.463366 containerd[1501]: time="2025-03-17T17:45:07.463272016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:07.480965 containerd[1501]: time="2025-03-17T17:45:07.480869576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 17 17:45:07.495323 containerd[1501]: time="2025-03-17T17:45:07.495254752Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:07.514921 containerd[1501]: time="2025-03-17T17:45:07.514843264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:07.515646 containerd[1501]: time="2025-03-17T17:45:07.515579288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.456681724s" Mar 17 17:45:07.515646 containerd[1501]: time="2025-03-17T17:45:07.515610027Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 17:45:07.543554 containerd[1501]: time="2025-03-17T17:45:07.543478368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:45:09.194657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108155195.mount: Deactivated successfully. Mar 17 17:45:15.795935 update_engine[1481]: I20250317 17:45:15.795836 1481 update_attempter.cc:509] Updating boot flags... Mar 17 17:45:15.858662 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2162) Mar 17 17:45:15.917553 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2162) Mar 17 17:45:15.973659 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2162) Mar 17 17:45:16.640000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:45:16.651778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:16.802289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:45:16.807066 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:45:16.960890 kubelet[2182]: E0317 17:45:16.960494 2182 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:45:16.964898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:45:16.965135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:45:17.821581 containerd[1501]: time="2025-03-17T17:45:17.821487647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:17.836149 containerd[1501]: time="2025-03-17T17:45:17.836056975Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 17 17:45:17.847215 containerd[1501]: time="2025-03-17T17:45:17.847169347Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:17.852901 containerd[1501]: time="2025-03-17T17:45:17.852861004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:17.854212 containerd[1501]: time="2025-03-17T17:45:17.854146238Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 10.310605941s" Mar 17 17:45:17.854292 containerd[1501]: time="2025-03-17T17:45:17.854208766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 17:45:21.111017 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:45:21.119866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:21.136597 systemd[1]: Reloading requested from client PID 2269 ('systemctl') (unit session-7.scope)... Mar 17 17:45:21.136612 systemd[1]: Reloading... Mar 17 17:45:21.218676 zram_generator::config[2311]: No configuration found. Mar 17 17:45:21.831134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:45:21.908784 systemd[1]: Reloading finished in 771 ms. Mar 17 17:45:21.952730 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:45:21.952827 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:45:21.953097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:45:21.955819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:22.104738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
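
From the restart at 17:45:22 onward, the log is dominated by kubelet's klog format: a severity letter (I/W/E/F), MMDD date, wall time, PID, source file:line, then the message. A throwaway parser sketch for lines in that shape (the regexp is an assumption about the layout inferred from this log, not kubelet code):

    // Splits a klog-style line like the kubelet entries below into its fields.
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := "E0317 17:45:22.866818 2357 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane"
        m := klogRe.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\nmsg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
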
Mar 17 17:45:22.109713 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:45:22.151965 kubelet[2357]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:45:22.151965 kubelet[2357]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:45:22.151965 kubelet[2357]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:45:22.152237 kubelet[2357]: I0317 17:45:22.152004 2357 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:45:22.781487 kubelet[2357]: I0317 17:45:22.781408 2357 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:45:22.781487 kubelet[2357]: I0317 17:45:22.781452 2357 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:45:22.781760 kubelet[2357]: I0317 17:45:22.781750 2357 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:45:22.864414 kubelet[2357]: I0317 17:45:22.864355 2357 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:45:22.866856 kubelet[2357]: E0317 17:45:22.866818 2357 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:22.887879 kubelet[2357]: I0317 17:45:22.887822 2357 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:45:22.891377 kubelet[2357]: I0317 17:45:22.891072 2357 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:45:22.891656 kubelet[2357]: I0317 17:45:22.891373 2357 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:45:22.893116 kubelet[2357]: I0317 17:45:22.893089 2357 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:45:22.893116 kubelet[2357]: I0317 17:45:22.893113 2357 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:45:22.893304 kubelet[2357]: I0317 17:45:22.893281 2357 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:45:22.939018 kubelet[2357]: I0317 17:45:22.938978 2357 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:45:22.939018 kubelet[2357]: I0317 17:45:22.939007 2357 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:45:22.939097 kubelet[2357]: I0317 17:45:22.939042 2357 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:45:22.939097 kubelet[2357]: I0317 17:45:22.939070 2357 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:45:22.939701 kubelet[2357]: W0317 17:45:22.939594 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:22.939755 kubelet[2357]: E0317 17:45:22.939706 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:22.942126 kubelet[2357]: W0317 17:45:22.942079 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:22.942126 kubelet[2357]: E0317 17:45:22.942126 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:22.945297 kubelet[2357]: I0317 17:45:22.945246 2357 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:45:23.617010 kubelet[2357]: I0317 17:45:23.616952 2357 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:45:23.617559 kubelet[2357]: W0317 17:45:23.617059 2357 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:45:23.617943 kubelet[2357]: I0317 17:45:23.617916 2357 server.go:1264] "Started kubelet" Mar 17 17:45:23.618514 kubelet[2357]: I0317 17:45:23.618372 2357 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:45:23.619695 kubelet[2357]: I0317 17:45:23.618864 2357 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:45:23.619695 kubelet[2357]: I0317 17:45:23.618910 2357 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:45:23.620558 kubelet[2357]: I0317 17:45:23.620499 2357 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:45:23.621763 kubelet[2357]: I0317 17:45:23.621034 2357 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:45:23.621763 kubelet[2357]: I0317 17:45:23.621375 2357 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:45:23.621837 kubelet[2357]: I0317 17:45:23.621817 2357 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:45:23.621876 kubelet[2357]: I0317 17:45:23.621871 2357 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:45:23.622224 kubelet[2357]: W0317 17:45:23.622169 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:23.622224 kubelet[2357]: E0317 17:45:23.622214 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:23.623551 kubelet[2357]: E0317 17:45:23.622851 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="200ms" Mar 17 17:45:23.623551 kubelet[2357]: E0317 17:45:23.623385 2357 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:45:23.623551 kubelet[2357]: I0317 17:45:23.623466 2357 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:45:23.623744 kubelet[2357]: I0317 17:45:23.623564 2357 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:45:23.624430 kubelet[2357]: I0317 17:45:23.624404 2357 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:45:23.628473 kubelet[2357]: E0317 17:45:23.628299 2357 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da828b3057e81 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:45:23.617889921 +0000 UTC m=+1.503990672,LastTimestamp:2025-03-17 17:45:23.617889921 +0000 UTC m=+1.503990672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:45:23.640112 kubelet[2357]: I0317 17:45:23.640040 2357 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:45:23.641261 kubelet[2357]: I0317 17:45:23.640852 2357 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:45:23.641261 kubelet[2357]: I0317 17:45:23.640868 2357 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:45:23.641261 kubelet[2357]: I0317 17:45:23.640906 2357 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:45:23.642156 kubelet[2357]: I0317 17:45:23.642076 2357 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:45:23.642156 kubelet[2357]: I0317 17:45:23.642132 2357 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:45:23.642156 kubelet[2357]: I0317 17:45:23.642154 2357 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:45:23.642413 kubelet[2357]: E0317 17:45:23.642206 2357 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:45:23.643596 kubelet[2357]: W0317 17:45:23.643373 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:23.643596 kubelet[2357]: E0317 17:45:23.643561 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:23.667143 kubelet[2357]: I0317 17:45:23.667102 2357 policy_none.go:49] "None policy: Start" Mar 17 17:45:23.669857 kubelet[2357]: I0317 17:45:23.669829 2357 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:45:23.669857 kubelet[2357]: I0317 17:45:23.669859 2357 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:45:23.714281 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:45:23.723560 kubelet[2357]: I0317 17:45:23.723512 2357 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:23.723954 kubelet[2357]: E0317 17:45:23.723926 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 17 17:45:23.724549 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:45:23.735181 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 17 17:45:23.736159 kubelet[2357]: I0317 17:45:23.736141 2357 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:45:23.736412 kubelet[2357]: I0317 17:45:23.736369 2357 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:45:23.736588 kubelet[2357]: I0317 17:45:23.736501 2357 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:45:23.737461 kubelet[2357]: E0317 17:45:23.737425 2357 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:45:23.742609 kubelet[2357]: I0317 17:45:23.742578 2357 topology_manager.go:215] "Topology Admit Handler" podUID="7dc92c4b3a420bf4fd9711666a1e3c8f" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:45:23.743468 kubelet[2357]: I0317 17:45:23.743434 2357 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:45:23.744157 kubelet[2357]: I0317 17:45:23.744128 2357 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:45:23.751570 systemd[1]: Created slice kubepods-burstable-pod7dc92c4b3a420bf4fd9711666a1e3c8f.slice - libcontainer container kubepods-burstable-pod7dc92c4b3a420bf4fd9711666a1e3c8f.slice. Mar 17 17:45:23.763903 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 17 17:45:23.767902 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
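
The three Topology Admit Handler entries above correspond to the static pod manifests under /etc/kubernetes/manifests, the path registered earlier via "Adding static pod path": the kubelet watches that directory and admits one pod per manifest. A directory-listing sketch of where those pods come from (the file names are the typical kubeadm layout, assumed rather than shown in the log):

    // Lists the static pod manifest directory the kubelet watches.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/manifests"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            // Typically kube-apiserver.yaml, kube-controller-manager.yaml,
            // and kube-scheduler.yaml on a kubeadm control-plane node.
            fmt.Println(filepath.Join(dir, e.Name()))
        }
    }
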
Mar 17 17:45:23.823499 kubelet[2357]: E0317 17:45:23.823429 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="400ms" Mar 17 17:45:23.922989 kubelet[2357]: I0317 17:45:23.922932 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dc92c4b3a420bf4fd9711666a1e3c8f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7dc92c4b3a420bf4fd9711666a1e3c8f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:23.922989 kubelet[2357]: I0317 17:45:23.922985 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:23.922989 kubelet[2357]: I0317 17:45:23.923007 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:23.923221 kubelet[2357]: I0317 17:45:23.923029 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:23.923221 kubelet[2357]: I0317 17:45:23.923050 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:45:23.923221 kubelet[2357]: I0317 17:45:23.923081 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dc92c4b3a420bf4fd9711666a1e3c8f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dc92c4b3a420bf4fd9711666a1e3c8f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:23.923221 kubelet[2357]: I0317 17:45:23.923108 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dc92c4b3a420bf4fd9711666a1e3c8f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dc92c4b3a420bf4fd9711666a1e3c8f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:23.923221 kubelet[2357]: I0317 17:45:23.923127 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:23.923334 kubelet[2357]: 
I0317 17:45:23.923145 2357 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:23.926212 kubelet[2357]: I0317 17:45:23.926175 2357 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:23.926582 kubelet[2357]: E0317 17:45:23.926546 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 17 17:45:23.937053 kubelet[2357]: W0317 17:45:23.936993 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:23.937132 kubelet[2357]: E0317 17:45:23.937059 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:24.039577 kubelet[2357]: W0317 17:45:24.039474 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:24.039577 kubelet[2357]: E0317 17:45:24.039561 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:24.063850 kubelet[2357]: E0317 17:45:24.063796 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:24.064678 containerd[1501]: time="2025-03-17T17:45:24.064641351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7dc92c4b3a420bf4fd9711666a1e3c8f,Namespace:kube-system,Attempt:0,}" Mar 17 17:45:24.067068 kubelet[2357]: E0317 17:45:24.067020 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:24.067608 containerd[1501]: time="2025-03-17T17:45:24.067566031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 17:45:24.070092 kubelet[2357]: E0317 17:45:24.070061 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:24.070724 containerd[1501]: time="2025-03-17T17:45:24.070669948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 17:45:24.225020 kubelet[2357]: E0317 17:45:24.224876 2357 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="800ms" Mar 17 17:45:24.328581 kubelet[2357]: I0317 17:45:24.328541 2357 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:24.328884 kubelet[2357]: E0317 17:45:24.328850 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 17 17:45:24.538822 kubelet[2357]: W0317 17:45:24.538616 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:24.538822 kubelet[2357]: E0317 17:45:24.538706 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:24.560388 kubelet[2357]: W0317 17:45:24.560309 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:24.560388 kubelet[2357]: E0317 17:45:24.560376 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:25.026231 kubelet[2357]: E0317 17:45:25.026169 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="1.6s" Mar 17 17:45:25.030802 kubelet[2357]: E0317 17:45:25.030778 2357 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:25.130767 kubelet[2357]: I0317 17:45:25.130720 2357 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:25.131172 kubelet[2357]: E0317 17:45:25.131126 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 17 17:45:25.514747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652457076.mount: Deactivated successfully. 
Mar 17 17:45:25.687181 containerd[1501]: time="2025-03-17T17:45:25.687097155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:45:25.732881 containerd[1501]: time="2025-03-17T17:45:25.732809420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:45:25.807502 containerd[1501]: time="2025-03-17T17:45:25.807301067Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:45:25.857921 containerd[1501]: time="2025-03-17T17:45:25.857846932Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:45:25.890517 containerd[1501]: time="2025-03-17T17:45:25.890447322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:45:25.932645 containerd[1501]: time="2025-03-17T17:45:25.932582340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:45:25.976376 containerd[1501]: time="2025-03-17T17:45:25.976311534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:45:25.993270 containerd[1501]: time="2025-03-17T17:45:25.993238998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:45:25.994151 containerd[1501]: time="2025-03-17T17:45:25.994116212Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.929356598s" Mar 17 17:45:26.008862 containerd[1501]: time="2025-03-17T17:45:26.008833622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.941156322s" Mar 17 17:45:26.016756 kubelet[2357]: W0317 17:45:26.016715 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:26.016756 kubelet[2357]: E0317 17:45:26.016755 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:26.022515 containerd[1501]: time="2025-03-17T17:45:26.022481129Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.9516894s" Mar 17 17:45:26.051666 kubelet[2357]: W0317 17:45:26.051598 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:26.051666 kubelet[2357]: E0317 17:45:26.051666 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:26.548874 containerd[1501]: time="2025-03-17T17:45:26.548736341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:45:26.548874 containerd[1501]: time="2025-03-17T17:45:26.548818096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:45:26.549243 containerd[1501]: time="2025-03-17T17:45:26.548871826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:26.549243 containerd[1501]: time="2025-03-17T17:45:26.549054331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:26.573792 systemd[1]: Started cri-containerd-73c55175325cc518fd5c41647e0d37eca9d1329696098863670ea72b96c04472.scope - libcontainer container 73c55175325cc518fd5c41647e0d37eca9d1329696098863670ea72b96c04472. Mar 17 17:45:26.613043 containerd[1501]: time="2025-03-17T17:45:26.612984492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7dc92c4b3a420bf4fd9711666a1e3c8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c55175325cc518fd5c41647e0d37eca9d1329696098863670ea72b96c04472\"" Mar 17 17:45:26.614004 containerd[1501]: time="2025-03-17T17:45:26.613740748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:45:26.614004 containerd[1501]: time="2025-03-17T17:45:26.613810800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:45:26.614004 containerd[1501]: time="2025-03-17T17:45:26.613877016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:26.614004 containerd[1501]: time="2025-03-17T17:45:26.613966124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:26.614337 kubelet[2357]: E0317 17:45:26.614307 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:26.621184 containerd[1501]: time="2025-03-17T17:45:26.621148749Z" level=info msg="CreateContainer within sandbox \"73c55175325cc518fd5c41647e0d37eca9d1329696098863670ea72b96c04472\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:45:26.627110 kubelet[2357]: E0317 17:45:26.627063 2357 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="3.2s" Mar 17 17:45:26.635786 systemd[1]: Started cri-containerd-6d93826af6cb0e886e4a5bdb24702564ae1bfb48224d88bfea7a7e36450aa8ad.scope - libcontainer container 6d93826af6cb0e886e4a5bdb24702564ae1bfb48224d88bfea7a7e36450aa8ad. Mar 17 17:45:26.648888 containerd[1501]: time="2025-03-17T17:45:26.648665240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:45:26.648888 containerd[1501]: time="2025-03-17T17:45:26.648724622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:45:26.648888 containerd[1501]: time="2025-03-17T17:45:26.648738167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:26.648888 containerd[1501]: time="2025-03-17T17:45:26.648807368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:26.670967 systemd[1]: Started cri-containerd-7b34a9ab5296c64678b9b3a103f55c037c53d27581de9be14cf5fe7a5544c707.scope - libcontainer container 7b34a9ab5296c64678b9b3a103f55c037c53d27581de9be14cf5fe7a5544c707. 
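
The containerd entries around here trace the CRI sequence for each static pod: RunPodSandbox returns a sandbox id, CreateContainer creates a container inside that sandbox and returns a container id, and StartContainer (a few entries later) reports success. A toy stand-in for that three-call flow, with ids shortened from the log; this models the sequence only, not the real CRI client API:

    // Models the RunPodSandbox -> CreateContainer -> StartContainer order
    // visible in the surrounding log entries.
    package main

    import "fmt"

    type fakeCRI struct{}

    func (fakeCRI) RunPodSandbox(pod string) string             { return "73c55175325c..." }
    func (fakeCRI) CreateContainer(sandbox, name string) string { return "be64954c40c7..." }
    func (fakeCRI) StartContainer(id string) error              { return nil }

    func main() {
        var rt fakeCRI
        sb := rt.RunPodSandbox("kube-apiserver-localhost")
        ctr := rt.CreateContainer(sb, "kube-apiserver")
        if err := rt.StartContainer(ctr); err == nil {
            fmt.Printf("StartContainer for %q returns successfully\n", ctr)
        }
    }
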
Mar 17 17:45:26.672517 containerd[1501]: time="2025-03-17T17:45:26.672465694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d93826af6cb0e886e4a5bdb24702564ae1bfb48224d88bfea7a7e36450aa8ad\"" Mar 17 17:45:26.673100 kubelet[2357]: E0317 17:45:26.673071 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:26.675662 containerd[1501]: time="2025-03-17T17:45:26.675567533Z" level=info msg="CreateContainer within sandbox \"6d93826af6cb0e886e4a5bdb24702564ae1bfb48224d88bfea7a7e36450aa8ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:45:26.706251 containerd[1501]: time="2025-03-17T17:45:26.706201975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b34a9ab5296c64678b9b3a103f55c037c53d27581de9be14cf5fe7a5544c707\"" Mar 17 17:45:26.707244 kubelet[2357]: E0317 17:45:26.707217 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:26.709020 containerd[1501]: time="2025-03-17T17:45:26.708990453Z" level=info msg="CreateContainer within sandbox \"7b34a9ab5296c64678b9b3a103f55c037c53d27581de9be14cf5fe7a5544c707\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:45:26.732670 kubelet[2357]: I0317 17:45:26.732611 2357 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:26.733108 kubelet[2357]: E0317 17:45:26.733071 2357 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Mar 17 17:45:26.840752 kubelet[2357]: W0317 17:45:26.840497 2357 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:26.840752 kubelet[2357]: E0317 17:45:26.840565 2357 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Mar 17 17:45:27.163548 containerd[1501]: time="2025-03-17T17:45:27.163486363Z" level=info msg="CreateContainer within sandbox \"73c55175325cc518fd5c41647e0d37eca9d1329696098863670ea72b96c04472\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be64954c40c74b06f8945d00e09b09b09a197847e04fb55f9f77476187533bf8\"" Mar 17 17:45:27.164373 containerd[1501]: time="2025-03-17T17:45:27.164330254Z" level=info msg="StartContainer for \"be64954c40c74b06f8945d00e09b09b09a197847e04fb55f9f77476187533bf8\"" Mar 17 17:45:27.205827 containerd[1501]: time="2025-03-17T17:45:27.205757955Z" level=info msg="CreateContainer within sandbox \"6d93826af6cb0e886e4a5bdb24702564ae1bfb48224d88bfea7a7e36450aa8ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"b7a89140b1ed62e806dc3e2325e603f07b504e05e4c076b7113cca424aa88be3\"" Mar 17 17:45:27.206573 containerd[1501]: time="2025-03-17T17:45:27.206511415Z" level=info msg="StartContainer for \"b7a89140b1ed62e806dc3e2325e603f07b504e05e4c076b7113cca424aa88be3\"" Mar 17 17:45:27.221877 systemd[1]: Started cri-containerd-be64954c40c74b06f8945d00e09b09b09a197847e04fb55f9f77476187533bf8.scope - libcontainer container be64954c40c74b06f8945d00e09b09b09a197847e04fb55f9f77476187533bf8. Mar 17 17:45:27.233976 containerd[1501]: time="2025-03-17T17:45:27.233926248Z" level=info msg="CreateContainer within sandbox \"7b34a9ab5296c64678b9b3a103f55c037c53d27581de9be14cf5fe7a5544c707\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ca03b95c380cd05944b8988687fe7264fff799e8edee938aa72f5ff05d46025\"" Mar 17 17:45:27.234768 containerd[1501]: time="2025-03-17T17:45:27.234734051Z" level=info msg="StartContainer for \"0ca03b95c380cd05944b8988687fe7264fff799e8edee938aa72f5ff05d46025\"" Mar 17 17:45:27.235871 systemd[1]: Started cri-containerd-b7a89140b1ed62e806dc3e2325e603f07b504e05e4c076b7113cca424aa88be3.scope - libcontainer container b7a89140b1ed62e806dc3e2325e603f07b504e05e4c076b7113cca424aa88be3. Mar 17 17:45:27.321807 systemd[1]: Started cri-containerd-0ca03b95c380cd05944b8988687fe7264fff799e8edee938aa72f5ff05d46025.scope - libcontainer container 0ca03b95c380cd05944b8988687fe7264fff799e8edee938aa72f5ff05d46025. Mar 17 17:45:27.354200 containerd[1501]: time="2025-03-17T17:45:27.354113568Z" level=info msg="StartContainer for \"be64954c40c74b06f8945d00e09b09b09a197847e04fb55f9f77476187533bf8\" returns successfully" Mar 17 17:45:27.354343 containerd[1501]: time="2025-03-17T17:45:27.354126423Z" level=info msg="StartContainer for \"b7a89140b1ed62e806dc3e2325e603f07b504e05e4c076b7113cca424aa88be3\" returns successfully" Mar 17 17:45:27.409104 containerd[1501]: time="2025-03-17T17:45:27.408678676Z" level=info msg="StartContainer for \"0ca03b95c380cd05944b8988687fe7264fff799e8edee938aa72f5ff05d46025\" returns successfully" Mar 17 17:45:27.655424 kubelet[2357]: E0317 17:45:27.655369 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:27.657061 kubelet[2357]: E0317 17:45:27.657031 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:27.659870 kubelet[2357]: E0317 17:45:27.659839 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:28.661531 kubelet[2357]: E0317 17:45:28.661496 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:28.982252 kubelet[2357]: E0317 17:45:28.982094 2357 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 17 17:45:29.389534 kubelet[2357]: E0317 17:45:29.389491 2357 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 17 17:45:29.663696 kubelet[2357]: E0317 17:45:29.663532 2357 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:29.928711 kubelet[2357]: E0317 17:45:29.928546 2357 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:45:29.928711 kubelet[2357]: E0317 17:45:29.928574 2357 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 17 17:45:29.935188 kubelet[2357]: I0317 17:45:29.935155 2357 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:30.016661 kubelet[2357]: I0317 17:45:30.014945 2357 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 17:45:30.120961 kubelet[2357]: E0317 17:45:30.120909 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:30.123200 kubelet[2357]: E0317 17:45:30.123176 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.223773 kubelet[2357]: E0317 17:45:30.223591 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.324186 kubelet[2357]: E0317 17:45:30.324119 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.425005 kubelet[2357]: E0317 17:45:30.424946 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.525735 kubelet[2357]: E0317 17:45:30.525492 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.626159 kubelet[2357]: E0317 17:45:30.626096 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.726723 kubelet[2357]: E0317 17:45:30.726669 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.827734 kubelet[2357]: E0317 17:45:30.827506 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:30.928405 kubelet[2357]: E0317 17:45:30.928319 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.029098 kubelet[2357]: E0317 17:45:31.029042 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.129968 kubelet[2357]: E0317 17:45:31.129803 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.230463 kubelet[2357]: E0317 17:45:31.230405 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.331115 kubelet[2357]: E0317 17:45:31.331055 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.431831 kubelet[2357]: E0317 17:45:31.431774 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.532571 kubelet[2357]: E0317 
17:45:31.532515 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.633151 kubelet[2357]: E0317 17:45:31.633097 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.734225 kubelet[2357]: E0317 17:45:31.734042 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.834768 kubelet[2357]: E0317 17:45:31.834710 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:31.935104 kubelet[2357]: E0317 17:45:31.935047 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:32.036219 kubelet[2357]: E0317 17:45:32.036017 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:32.136952 kubelet[2357]: E0317 17:45:32.136871 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:32.237721 kubelet[2357]: E0317 17:45:32.237653 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:32.338441 kubelet[2357]: E0317 17:45:32.338269 2357 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:45:32.684176 kubelet[2357]: E0317 17:45:32.684113 2357 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:32.747922 systemd[1]: Reloading requested from client PID 2638 ('systemctl') (unit session-7.scope)... Mar 17 17:45:32.747940 systemd[1]: Reloading... Mar 17 17:45:32.854513 zram_generator::config[2680]: No configuration found. Mar 17 17:45:32.946837 kubelet[2357]: I0317 17:45:32.946726 2357 apiserver.go:52] "Watching apiserver" Mar 17 17:45:32.957693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:45:33.022058 kubelet[2357]: I0317 17:45:33.022012 2357 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:45:33.051174 systemd[1]: Reloading finished in 302 ms. Mar 17 17:45:33.100656 kubelet[2357]: I0317 17:45:33.100398 2357 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:45:33.100459 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:33.124183 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:45:33.124486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:45:33.131151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:33.306677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:45:33.311769 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:45:33.380007 kubelet[2722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:45:33.380007 kubelet[2722]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:45:33.380007 kubelet[2722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:45:33.380528 kubelet[2722]: I0317 17:45:33.380043 2722 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:45:33.385091 kubelet[2722]: I0317 17:45:33.385057 2722 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:45:33.385091 kubelet[2722]: I0317 17:45:33.385078 2722 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:45:33.385398 kubelet[2722]: I0317 17:45:33.385369 2722 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:45:33.386790 kubelet[2722]: I0317 17:45:33.386761 2722 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:45:33.389219 kubelet[2722]: I0317 17:45:33.389185 2722 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:45:33.397500 kubelet[2722]: I0317 17:45:33.397471 2722 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:45:33.397792 kubelet[2722]: I0317 17:45:33.397751 2722 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:45:33.398002 kubelet[2722]: I0317 17:45:33.397783 2722 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:45:33.398114 kubelet[2722]: I0317 
17:45:33.398021 2722 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:45:33.398114 kubelet[2722]: I0317 17:45:33.398038 2722 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:45:33.398114 kubelet[2722]: I0317 17:45:33.398097 2722 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:45:33.398232 kubelet[2722]: I0317 17:45:33.398219 2722 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:45:33.398272 kubelet[2722]: I0317 17:45:33.398236 2722 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:45:33.398272 kubelet[2722]: I0317 17:45:33.398268 2722 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:45:33.398344 kubelet[2722]: I0317 17:45:33.398293 2722 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:45:33.399526 kubelet[2722]: I0317 17:45:33.399462 2722 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:45:33.399850 kubelet[2722]: I0317 17:45:33.399758 2722 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:45:33.404698 kubelet[2722]: I0317 17:45:33.400229 2722 server.go:1264] "Started kubelet" Mar 17 17:45:33.404698 kubelet[2722]: I0317 17:45:33.401635 2722 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:45:33.404698 kubelet[2722]: I0317 17:45:33.401654 2722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:45:33.404698 kubelet[2722]: I0317 17:45:33.402038 2722 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:45:33.438681 kubelet[2722]: I0317 17:45:33.438606 2722 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:45:33.442567 kubelet[2722]: I0317 17:45:33.442544 2722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:45:33.445699 kubelet[2722]: I0317 17:45:33.445139 2722 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:45:33.445699 kubelet[2722]: I0317 17:45:33.445254 2722 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:45:33.445699 kubelet[2722]: I0317 17:45:33.445417 2722 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:45:33.449064 kubelet[2722]: E0317 17:45:33.449027 2722 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:45:33.449396 kubelet[2722]: I0317 17:45:33.449379 2722 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:45:33.449525 kubelet[2722]: I0317 17:45:33.449506 2722 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:45:33.452664 kubelet[2722]: I0317 17:45:33.452241 2722 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:45:33.463419 kubelet[2722]: I0317 17:45:33.463354 2722 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:45:33.464798 kubelet[2722]: I0317 17:45:33.464772 2722 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:45:33.464863 kubelet[2722]: I0317 17:45:33.464812 2722 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:45:33.464863 kubelet[2722]: I0317 17:45:33.464837 2722 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:45:33.464931 kubelet[2722]: E0317 17:45:33.464893 2722 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:45:33.491264 kubelet[2722]: I0317 17:45:33.491205 2722 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:45:33.491264 kubelet[2722]: I0317 17:45:33.491231 2722 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:45:33.491264 kubelet[2722]: I0317 17:45:33.491257 2722 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:45:33.491501 kubelet[2722]: I0317 17:45:33.491485 2722 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:45:33.491553 kubelet[2722]: I0317 17:45:33.491501 2722 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:45:33.491553 kubelet[2722]: I0317 17:45:33.491525 2722 policy_none.go:49] "None policy: Start" Mar 17 17:45:33.492372 kubelet[2722]: I0317 17:45:33.492342 2722 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:45:33.492422 kubelet[2722]: I0317 17:45:33.492389 2722 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:45:33.492591 kubelet[2722]: I0317 17:45:33.492573 2722 state_mem.go:75] "Updated machine memory state" Mar 17 17:45:33.497244 kubelet[2722]: I0317 17:45:33.497156 2722 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:45:33.497422 kubelet[2722]: I0317 17:45:33.497382 2722 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:45:33.497524 kubelet[2722]: I0317 17:45:33.497506 2722 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:45:33.550061 kubelet[2722]: I0317 17:45:33.550015 2722 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:45:33.565339 kubelet[2722]: I0317 17:45:33.565150 2722 topology_manager.go:215] "Topology Admit Handler" podUID="7dc92c4b3a420bf4fd9711666a1e3c8f" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:45:33.582087 kubelet[2722]: I0317 17:45:33.582012 2722 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:45:33.582302 kubelet[2722]: I0317 17:45:33.582159 2722 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:45:33.646090 kubelet[2722]: I0317 17:45:33.645813 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7dc92c4b3a420bf4fd9711666a1e3c8f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7dc92c4b3a420bf4fd9711666a1e3c8f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:33.646090 kubelet[2722]: I0317 17:45:33.645853 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7dc92c4b3a420bf4fd9711666a1e3c8f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"7dc92c4b3a420bf4fd9711666a1e3c8f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:33.646090 kubelet[2722]: I0317 17:45:33.645882 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7dc92c4b3a420bf4fd9711666a1e3c8f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7dc92c4b3a420bf4fd9711666a1e3c8f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:33.646090 kubelet[2722]: I0317 17:45:33.645900 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:33.646090 kubelet[2722]: I0317 17:45:33.645918 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:33.669330 kubelet[2722]: E0317 17:45:33.669246 2722 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:33.699401 kubelet[2722]: I0317 17:45:33.699352 2722 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 17:45:33.699570 kubelet[2722]: I0317 17:45:33.699459 2722 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 17:45:33.746827 kubelet[2722]: I0317 17:45:33.746777 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:33.746976 kubelet[2722]: I0317 17:45:33.746853 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:33.746976 kubelet[2722]: I0317 17:45:33.746888 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:33.746976 kubelet[2722]: I0317 17:45:33.746908 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:45:33.949841 kubelet[2722]: E0317 17:45:33.949791 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:33.949971 kubelet[2722]: E0317 17:45:33.949791 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:33.969903 kubelet[2722]: E0317 17:45:33.969876 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:34.113064 sudo[2760]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:45:34.113462 sudo[2760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:45:34.399432 kubelet[2722]: I0317 17:45:34.399387 2722 apiserver.go:52] "Watching apiserver" Mar 17 17:45:34.446281 kubelet[2722]: I0317 17:45:34.446249 2722 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:45:34.474757 kubelet[2722]: E0317 17:45:34.474073 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:34.556667 kubelet[2722]: E0317 17:45:34.556612 2722 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:45:34.557030 kubelet[2722]: E0317 17:45:34.557011 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:34.567048 kubelet[2722]: E0317 17:45:34.567014 2722 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:45:34.567418 kubelet[2722]: E0317 17:45:34.567380 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:34.588527 sudo[2760]: pam_unix(sudo:session): session closed for user root Mar 17 17:45:34.678365 kubelet[2722]: I0317 17:45:34.677922 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.677902842 podStartE2EDuration="2.677902842s" podCreationTimestamp="2025-03-17 17:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:45:34.557114059 +0000 UTC m=+1.214382612" watchObservedRunningTime="2025-03-17 17:45:34.677902842 +0000 UTC m=+1.335171395" Mar 17 17:45:34.708010 kubelet[2722]: I0317 17:45:34.707935 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7079127170000001 podStartE2EDuration="1.707912717s" podCreationTimestamp="2025-03-17 17:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:45:34.678091397 +0000 UTC m=+1.335359950" watchObservedRunningTime="2025-03-17 17:45:34.707912717 +0000 UTC m=+1.365181270" Mar 17 17:45:34.708213 kubelet[2722]: I0317 17:45:34.708029 2722 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.708025659 podStartE2EDuration="1.708025659s" podCreationTimestamp="2025-03-17 17:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:45:34.707076233 +0000 UTC m=+1.364344776" watchObservedRunningTime="2025-03-17 17:45:34.708025659 +0000 UTC m=+1.365294212" Mar 17 17:45:35.475679 kubelet[2722]: E0317 17:45:35.475638 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:35.476091 kubelet[2722]: E0317 17:45:35.475742 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:36.059038 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 17 17:45:36.060567 sshd[1678]: Connection closed by 10.0.0.1 port 50076 Mar 17 17:45:36.062303 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:36.066769 systemd[1]: sshd@6-10.0.0.87:22-10.0.0.1:50076.service: Deactivated successfully. Mar 17 17:45:36.068930 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:45:36.069178 systemd[1]: session-7.scope: Consumed 5.897s CPU time, 191.4M memory peak, 0B memory swap peak. Mar 17 17:45:36.069795 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:45:36.070891 systemd-logind[1479]: Removed session 7. Mar 17 17:45:38.885869 kubelet[2722]: E0317 17:45:38.885820 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:39.485547 kubelet[2722]: E0317 17:45:39.485492 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:42.099805 kubelet[2722]: E0317 17:45:42.099770 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:42.277091 kubelet[2722]: E0317 17:45:42.277046 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:42.488814 kubelet[2722]: E0317 17:45:42.488772 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:42.489029 kubelet[2722]: E0317 17:45:42.488846 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:46.540105 kubelet[2722]: I0317 17:45:46.540055 2722 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:45:46.540790 containerd[1501]: time="2025-03-17T17:45:46.540407112Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
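The "Nameserver limits exceeded" warnings that recur throughout this boot come from the kubelet capping a pod's resolv.conf at three nameservers; the host evidently configures more, so the surplus entries are dropped and only `1.1.1.1 1.0.0.1 8.8.8.8` are applied. A minimal Go sketch of that truncation — the fourth server in the sample is hypothetical, since the log only shows the already-truncated applied line:

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the kubelet keeps at most three nameservers

// appliedNameservers extracts nameserver entries from resolv.conf content
// and truncates the list the way the kubelet does before warning.
func appliedNameservers(resolvConf string) (kept, omitted []string) {
	var all []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	// Hypothetical host resolv.conf; only the first three survive.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, omitted := appliedNameservers(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
	if len(omitted) > 0 {
		fmt.Println("Nameserver limits exceeded; omitted:", omitted)
	}
}
```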
Mar 17 17:45:46.541577 kubelet[2722]: I0317 17:45:46.541163 2722 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:45:47.404108 kubelet[2722]: I0317 17:45:47.403649 2722 topology_manager.go:215] "Topology Admit Handler" podUID="34b74fec-4cb5-4693-9c54-dec435231cc7" podNamespace="kube-system" podName="kube-proxy-pfgcq" Mar 17 17:45:47.410813 systemd[1]: Created slice kubepods-besteffort-pod34b74fec_4cb5_4693_9c54_dec435231cc7.slice - libcontainer container kubepods-besteffort-pod34b74fec_4cb5_4693_9c54_dec435231cc7.slice. Mar 17 17:45:47.520421 kubelet[2722]: I0317 17:45:47.520373 2722 topology_manager.go:215] "Topology Admit Handler" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" podNamespace="kube-system" podName="cilium-ht4lg" Mar 17 17:45:47.524810 kubelet[2722]: I0317 17:45:47.523165 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34b74fec-4cb5-4693-9c54-dec435231cc7-xtables-lock\") pod \"kube-proxy-pfgcq\" (UID: \"34b74fec-4cb5-4693-9c54-dec435231cc7\") " pod="kube-system/kube-proxy-pfgcq" Mar 17 17:45:47.524810 kubelet[2722]: I0317 17:45:47.523202 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-net\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.524810 kubelet[2722]: I0317 17:45:47.523227 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7mn\" (UniqueName: \"kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-kube-api-access-df7mn\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.524810 kubelet[2722]: I0317 17:45:47.523249 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34b74fec-4cb5-4693-9c54-dec435231cc7-kube-proxy\") pod \"kube-proxy-pfgcq\" (UID: \"34b74fec-4cb5-4693-9c54-dec435231cc7\") " pod="kube-system/kube-proxy-pfgcq" Mar 17 17:45:47.524810 kubelet[2722]: I0317 17:45:47.523274 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-run\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525082 kubelet[2722]: I0317 17:45:47.523295 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/310676e6-6288-4c89-86b8-0ade01ffbc34-clustermesh-secrets\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525082 kubelet[2722]: I0317 17:45:47.523322 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-etc-cni-netd\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525082 kubelet[2722]: I0317 17:45:47.523373 2722 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34b74fec-4cb5-4693-9c54-dec435231cc7-lib-modules\") pod \"kube-proxy-pfgcq\" (UID: \"34b74fec-4cb5-4693-9c54-dec435231cc7\") " pod="kube-system/kube-proxy-pfgcq" Mar 17 17:45:47.525082 kubelet[2722]: I0317 17:45:47.523396 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-bpf-maps\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525082 kubelet[2722]: I0317 17:45:47.523426 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cni-path\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525082 kubelet[2722]: I0317 17:45:47.523449 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-lib-modules\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525260 kubelet[2722]: I0317 17:45:47.523466 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-xtables-lock\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525260 kubelet[2722]: I0317 17:45:47.523496 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-hubble-tls\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525260 kubelet[2722]: I0317 17:45:47.523517 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-cgroup\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525260 kubelet[2722]: I0317 17:45:47.523535 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-config-path\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525260 kubelet[2722]: I0317 17:45:47.523558 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2m4z\" (UniqueName: \"kubernetes.io/projected/34b74fec-4cb5-4693-9c54-dec435231cc7-kube-api-access-l2m4z\") pod \"kube-proxy-pfgcq\" (UID: \"34b74fec-4cb5-4693-9c54-dec435231cc7\") " pod="kube-system/kube-proxy-pfgcq" Mar 17 17:45:47.525260 kubelet[2722]: I0317 17:45:47.523578 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-hostproc\") pod \"cilium-ht4lg\" 
(UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.525442 kubelet[2722]: I0317 17:45:47.523603 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-kernel\") pod \"cilium-ht4lg\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " pod="kube-system/cilium-ht4lg" Mar 17 17:45:47.534964 systemd[1]: Created slice kubepods-burstable-pod310676e6_6288_4c89_86b8_0ade01ffbc34.slice - libcontainer container kubepods-burstable-pod310676e6_6288_4c89_86b8_0ade01ffbc34.slice. Mar 17 17:45:47.786714 kubelet[2722]: I0317 17:45:47.786557 2722 topology_manager.go:215] "Topology Admit Handler" podUID="653194cf-6ff9-44e6-a56f-8e853b111cf1" podNamespace="kube-system" podName="cilium-operator-599987898-hg5gj" Mar 17 17:45:47.792795 systemd[1]: Created slice kubepods-besteffort-pod653194cf_6ff9_44e6_a56f_8e853b111cf1.slice - libcontainer container kubepods-besteffort-pod653194cf_6ff9_44e6_a56f_8e853b111cf1.slice. Mar 17 17:45:47.925164 kubelet[2722]: I0317 17:45:47.925101 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhx7\" (UniqueName: \"kubernetes.io/projected/653194cf-6ff9-44e6-a56f-8e853b111cf1-kube-api-access-6rhx7\") pod \"cilium-operator-599987898-hg5gj\" (UID: \"653194cf-6ff9-44e6-a56f-8e853b111cf1\") " pod="kube-system/cilium-operator-599987898-hg5gj" Mar 17 17:45:47.925164 kubelet[2722]: I0317 17:45:47.925147 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/653194cf-6ff9-44e6-a56f-8e853b111cf1-cilium-config-path\") pod \"cilium-operator-599987898-hg5gj\" (UID: \"653194cf-6ff9-44e6-a56f-8e853b111cf1\") " pod="kube-system/cilium-operator-599987898-hg5gj" Mar 17 17:45:48.021340 kubelet[2722]: E0317 17:45:48.021281 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:48.022078 containerd[1501]: time="2025-03-17T17:45:48.022015160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfgcq,Uid:34b74fec-4cb5-4693-9c54-dec435231cc7,Namespace:kube-system,Attempt:0,}" Mar 17 17:45:48.061118 containerd[1501]: time="2025-03-17T17:45:48.060925551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:45:48.061118 containerd[1501]: time="2025-03-17T17:45:48.060983329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:45:48.061118 containerd[1501]: time="2025-03-17T17:45:48.060998277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:48.061118 containerd[1501]: time="2025-03-17T17:45:48.061077055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:48.082760 systemd[1]: Started cri-containerd-be0d7a4252ad73e8846510848e6fb13b9cc531cfde2ae6a3fb0808b3572f9ffb.scope - libcontainer container be0d7a4252ad73e8846510848e6fb13b9cc531cfde2ae6a3fb0808b3572f9ffb. 
Mar 17 17:45:48.095659 kubelet[2722]: E0317 17:45:48.095522 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:48.096761 containerd[1501]: time="2025-03-17T17:45:48.096154055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hg5gj,Uid:653194cf-6ff9-44e6-a56f-8e853b111cf1,Namespace:kube-system,Attempt:0,}" Mar 17 17:45:48.107165 containerd[1501]: time="2025-03-17T17:45:48.107070441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfgcq,Uid:34b74fec-4cb5-4693-9c54-dec435231cc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"be0d7a4252ad73e8846510848e6fb13b9cc531cfde2ae6a3fb0808b3572f9ffb\"" Mar 17 17:45:48.108177 kubelet[2722]: E0317 17:45:48.108145 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:48.110486 containerd[1501]: time="2025-03-17T17:45:48.110451382Z" level=info msg="CreateContainer within sandbox \"be0d7a4252ad73e8846510848e6fb13b9cc531cfde2ae6a3fb0808b3572f9ffb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:45:48.138835 kubelet[2722]: E0317 17:45:48.138512 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:48.139404 containerd[1501]: time="2025-03-17T17:45:48.138531448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:45:48.139404 containerd[1501]: time="2025-03-17T17:45:48.139282379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:45:48.139404 containerd[1501]: time="2025-03-17T17:45:48.139296766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:48.139533 containerd[1501]: time="2025-03-17T17:45:48.139386404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:48.139792 containerd[1501]: time="2025-03-17T17:45:48.139694132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ht4lg,Uid:310676e6-6288-4c89-86b8-0ade01ffbc34,Namespace:kube-system,Attempt:0,}" Mar 17 17:45:48.148889 containerd[1501]: time="2025-03-17T17:45:48.148847246Z" level=info msg="CreateContainer within sandbox \"be0d7a4252ad73e8846510848e6fb13b9cc531cfde2ae6a3fb0808b3572f9ffb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea5408ae0faf06957f326cfa0fe3f1c179e9437a6bdc9d76da69190529230d0f\"" Mar 17 17:45:48.149699 containerd[1501]: time="2025-03-17T17:45:48.149551559Z" level=info msg="StartContainer for \"ea5408ae0faf06957f326cfa0fe3f1c179e9437a6bdc9d76da69190529230d0f\"" Mar 17 17:45:48.160813 systemd[1]: Started cri-containerd-08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42.scope - libcontainer container 08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42. Mar 17 17:45:48.186769 systemd[1]: Started cri-containerd-ea5408ae0faf06957f326cfa0fe3f1c179e9437a6bdc9d76da69190529230d0f.scope - libcontainer container ea5408ae0faf06957f326cfa0fe3f1c179e9437a6bdc9d76da69190529230d0f. 
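The containerd entries around the kube-proxy start trace the standard CRI call order: RunPodSandbox first creates the pod's shared environment (the sandbox holding its namespaces), then CreateContainer and StartContainer run the workload inside it. A toy model of that ordering — the real RuntimeService is a gRPC API carrying full pod and container configuration messages, not strings:

```go
package main

import "fmt"

// toyRuntime mimics only the call sequence visible in the log; each method
// hands back an identifier the next step consumes, as the sandbox and
// container IDs do above.
type toyRuntime struct{ nextID int }

func (r *toyRuntime) RunPodSandbox(pod string) string {
	r.nextID++
	return fmt.Sprintf("sandbox-%d (%s)", r.nextID, pod)
}

func (r *toyRuntime) CreateContainer(sandboxID, name string) string {
	r.nextID++
	return fmt.Sprintf("container-%d (%s in %s)", r.nextID, name, sandboxID)
}

func (r *toyRuntime) StartContainer(containerID string) {
	fmt.Println("started", containerID)
}

func main() {
	var r toyRuntime
	sb := r.RunPodSandbox("kube-system/kube-proxy-pfgcq")
	c := r.CreateContainer(sb, "kube-proxy")
	r.StartContainer(c)
}
```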
Mar 17 17:45:48.204877 containerd[1501]: time="2025-03-17T17:45:48.204755235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hg5gj,Uid:653194cf-6ff9-44e6-a56f-8e853b111cf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42\"" Mar 17 17:45:48.205605 kubelet[2722]: E0317 17:45:48.205577 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:48.207227 containerd[1501]: time="2025-03-17T17:45:48.206475967Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:45:48.253499 containerd[1501]: time="2025-03-17T17:45:48.253384171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:45:48.253499 containerd[1501]: time="2025-03-17T17:45:48.253467327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:45:48.254442 containerd[1501]: time="2025-03-17T17:45:48.253478488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:48.254654 containerd[1501]: time="2025-03-17T17:45:48.254565370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:45:48.265710 containerd[1501]: time="2025-03-17T17:45:48.265665461Z" level=info msg="StartContainer for \"ea5408ae0faf06957f326cfa0fe3f1c179e9437a6bdc9d76da69190529230d0f\" returns successfully" Mar 17 17:45:48.277316 systemd[1]: Started cri-containerd-ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367.scope - libcontainer container ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367. Mar 17 17:45:48.302219 containerd[1501]: time="2025-03-17T17:45:48.302169931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ht4lg,Uid:310676e6-6288-4c89-86b8-0ade01ffbc34,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\"" Mar 17 17:45:48.302909 kubelet[2722]: E0317 17:45:48.302874 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:48.502003 kubelet[2722]: E0317 17:45:48.501893 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:52.915363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847994189.mount: Deactivated successfully. 
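The operator image is pulled by a pinned reference carrying both a tag and a digest; the digest is what actually gets resolved, which is why the later "Pulled image" entry reports an empty repo tag and only the repo digest. A sketch of how such a reference splits apart:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks a pinned reference like the cilium-operator image above
// into repository, tag, and digest components.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// The tag separator is the last ":" after the final "/", so a registry
	// port such as localhost:5000/img is not mistaken for a tag.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef(
		"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(repo)   // quay.io/cilium/operator-generic
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:b296eb7f...
}
```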
Mar 17 17:45:53.456073 containerd[1501]: time="2025-03-17T17:45:53.456013024Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:53.457162 containerd[1501]: time="2025-03-17T17:45:53.457112940Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 17:45:53.458427 containerd[1501]: time="2025-03-17T17:45:53.458379578Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:53.462363 containerd[1501]: time="2025-03-17T17:45:53.462174515Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.255665776s" Mar 17 17:45:53.462363 containerd[1501]: time="2025-03-17T17:45:53.462224779Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 17:45:53.463538 containerd[1501]: time="2025-03-17T17:45:53.463501556Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:45:53.466139 containerd[1501]: time="2025-03-17T17:45:53.465092504Z" level=info msg="CreateContainer within sandbox \"08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:45:53.484344 kubelet[2722]: I0317 17:45:53.484097 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pfgcq" podStartSLOduration=6.48407586 podStartE2EDuration="6.48407586s" podCreationTimestamp="2025-03-17 17:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:45:48.510956132 +0000 UTC m=+15.168224685" watchObservedRunningTime="2025-03-17 17:45:53.48407586 +0000 UTC m=+20.141344413" Mar 17 17:45:53.486490 containerd[1501]: time="2025-03-17T17:45:53.486437585Z" level=info msg="CreateContainer within sandbox \"08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\"" Mar 17 17:45:53.487197 containerd[1501]: time="2025-03-17T17:45:53.487153088Z" level=info msg="StartContainer for \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\"" Mar 17 17:45:53.521775 systemd[1]: Started cri-containerd-b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96.scope - libcontainer container b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96. 
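From the figures containerd logs for this pull — 18904197 bytes read over 5.255665776s of wall time — the effective transfer rate works out to roughly 3.4 MiB/s:

```go
package main

import (
	"fmt"
	"time"
)

// Back-of-the-envelope throughput for the operator image pull logged above.
func main() {
	const bytesRead = 18904197
	d, _ := time.ParseDuration("5.255665776s")
	bps := float64(bytesRead) / d.Seconds()
	fmt.Printf("%.2f MiB/s\n", bps/(1024*1024)) // ≈ 3.43 MiB/s
}
```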
Mar 17 17:45:53.557459 containerd[1501]: time="2025-03-17T17:45:53.557402802Z" level=info msg="StartContainer for \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\" returns successfully" Mar 17 17:45:54.517603 kubelet[2722]: E0317 17:45:54.517546 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:54.529806 kubelet[2722]: I0317 17:45:54.529664 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hg5gj" podStartSLOduration=2.272543683 podStartE2EDuration="7.529639213s" podCreationTimestamp="2025-03-17 17:45:47 +0000 UTC" firstStartedPulling="2025-03-17 17:45:48.206146178 +0000 UTC m=+14.863414731" lastFinishedPulling="2025-03-17 17:45:53.463241708 +0000 UTC m=+20.120510261" observedRunningTime="2025-03-17 17:45:54.529103858 +0000 UTC m=+21.186372411" watchObservedRunningTime="2025-03-17 17:45:54.529639213 +0000 UTC m=+21.186907766" Mar 17 17:45:55.519551 kubelet[2722]: E0317 17:45:55.519505 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:45:59.704503 systemd[1]: Started sshd@7-10.0.0.87:22-10.0.0.1:49168.service - OpenSSH per-connection server daemon (10.0.0.1:49168). Mar 17 17:45:59.780446 sshd[3147]: Accepted publickey for core from 10.0.0.1 port 49168 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:45:59.782352 sshd-session[3147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:59.796772 systemd-logind[1479]: New session 8 of user core. Mar 17 17:45:59.806934 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:45:59.959247 sshd[3149]: Connection closed by 10.0.0.1 port 49168 Mar 17 17:45:59.959694 sshd-session[3147]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:59.964929 systemd[1]: sshd@7-10.0.0.87:22-10.0.0.1:49168.service: Deactivated successfully. Mar 17 17:45:59.968285 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:45:59.969248 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:45:59.970465 systemd-logind[1479]: Removed session 8. Mar 17 17:46:02.737363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775521694.mount: Deactivated successfully. 
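The latency-tracker entry for cilium-operator shows how its two durations relate: podStartE2EDuration spans pod creation to observed running, while podStartSLOduration subtracts the image-pull window (pulls being outside the kubelet's control — note the earlier kube-proxy entry had zero-valued pull timestamps because its image was already present). Re-deriving both figures from the timestamps in the entry:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the cilium-operator numbers from the latency-tracker entry
// above using the timestamps it reports verbatim.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-03-17 17:45:47 +0000 UTC")
	pullStart := parse("2025-03-17 17:45:48.206146178 +0000 UTC")
	pullEnd := parse("2025-03-17 17:45:53.463241708 +0000 UTC")
	running := parse("2025-03-17 17:45:54.529639213 +0000 UTC")

	e2e := running.Sub(created)          // creation -> running
	slo := e2e - pullEnd.Sub(pullStart)  // minus the image-pull window
	fmt.Println(e2e) // 7.529639213s, matching podStartE2EDuration
	fmt.Println(slo) // 2.272543683s, matching podStartSLOduration
}
```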
Mar 17 17:46:04.902469 containerd[1501]: time="2025-03-17T17:46:04.902395090Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:04.903345 containerd[1501]: time="2025-03-17T17:46:04.903290470Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 17:46:04.905032 containerd[1501]: time="2025-03-17T17:46:04.904984089Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:04.906903 containerd[1501]: time="2025-03-17T17:46:04.906864899Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.443322406s" Mar 17 17:46:04.906903 containerd[1501]: time="2025-03-17T17:46:04.906896769Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 17:46:04.909463 containerd[1501]: time="2025-03-17T17:46:04.909426938Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:46:04.940427 containerd[1501]: time="2025-03-17T17:46:04.940352553Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\"" Mar 17 17:46:04.941097 containerd[1501]: time="2025-03-17T17:46:04.941047066Z" level=info msg="StartContainer for \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\"" Mar 17 17:46:04.984889 systemd[1]: Started cri-containerd-34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4.scope - libcontainer container 34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4. Mar 17 17:46:04.986682 systemd[1]: Started sshd@8-10.0.0.87:22-10.0.0.1:41320.service - OpenSSH per-connection server daemon (10.0.0.1:41320). Mar 17 17:46:05.017985 containerd[1501]: time="2025-03-17T17:46:05.017941118Z" level=info msg="StartContainer for \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\" returns successfully" Mar 17 17:46:05.031758 systemd[1]: cri-containerd-34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4.scope: Deactivated successfully. Mar 17 17:46:05.042914 sshd[3211]: Accepted publickey for core from 10.0.0.1 port 41320 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:05.045186 sshd-session[3211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:05.051715 systemd-logind[1479]: New session 9 of user core. Mar 17 17:46:05.062795 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 17 17:46:05.261790 sshd[3246]: Connection closed by 10.0.0.1 port 41320 Mar 17 17:46:05.262136 sshd-session[3211]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:05.267073 systemd[1]: sshd@8-10.0.0.87:22-10.0.0.1:41320.service: Deactivated successfully. Mar 17 17:46:05.269090 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:46:05.269871 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:46:05.270759 systemd-logind[1479]: Removed session 9. Mar 17 17:46:05.551106 kubelet[2722]: E0317 17:46:05.550955 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:05.632479 containerd[1501]: time="2025-03-17T17:46:05.632378281Z" level=info msg="shim disconnected" id=34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4 namespace=k8s.io Mar 17 17:46:05.632479 containerd[1501]: time="2025-03-17T17:46:05.632458401Z" level=warning msg="cleaning up after shim disconnected" id=34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4 namespace=k8s.io Mar 17 17:46:05.632479 containerd[1501]: time="2025-03-17T17:46:05.632469672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:05.935611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4-rootfs.mount: Deactivated successfully. Mar 17 17:46:06.553697 kubelet[2722]: E0317 17:46:06.553616 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:06.556163 containerd[1501]: time="2025-03-17T17:46:06.556110159Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:46:06.650333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4194554728.mount: Deactivated successfully. Mar 17 17:46:06.652795 containerd[1501]: time="2025-03-17T17:46:06.652750604Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\"" Mar 17 17:46:06.653325 containerd[1501]: time="2025-03-17T17:46:06.653299687Z" level=info msg="StartContainer for \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\"" Mar 17 17:46:06.683799 systemd[1]: Started cri-containerd-9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af.scope - libcontainer container 9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af. Mar 17 17:46:06.711361 containerd[1501]: time="2025-03-17T17:46:06.711309107Z" level=info msg="StartContainer for \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\" returns successfully" Mar 17 17:46:06.725324 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:46:06.725727 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:06.725827 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:46:06.732938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 17 17:46:06.733143 systemd[1]: cri-containerd-9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af.scope: Deactivated successfully. Mar 17 17:46:06.754734 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:06.760092 containerd[1501]: time="2025-03-17T17:46:06.760012066Z" level=info msg="shim disconnected" id=9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af namespace=k8s.io Mar 17 17:46:06.760092 containerd[1501]: time="2025-03-17T17:46:06.760090815Z" level=warning msg="cleaning up after shim disconnected" id=9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af namespace=k8s.io Mar 17 17:46:06.760281 containerd[1501]: time="2025-03-17T17:46:06.760105673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:06.936538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af-rootfs.mount: Deactivated successfully. Mar 17 17:46:07.557046 kubelet[2722]: E0317 17:46:07.557003 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:07.558885 containerd[1501]: time="2025-03-17T17:46:07.558842236Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:46:07.586149 containerd[1501]: time="2025-03-17T17:46:07.586093736Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\"" Mar 17 17:46:07.586705 containerd[1501]: time="2025-03-17T17:46:07.586673303Z" level=info msg="StartContainer for \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\"" Mar 17 17:46:07.619901 systemd[1]: Started cri-containerd-fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717.scope - libcontainer container fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717. Mar 17 17:46:07.655131 containerd[1501]: time="2025-03-17T17:46:07.655081969Z" level=info msg="StartContainer for \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\" returns successfully" Mar 17 17:46:07.655917 systemd[1]: cri-containerd-fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717.scope: Deactivated successfully. Mar 17 17:46:07.683225 containerd[1501]: time="2025-03-17T17:46:07.683146156Z" level=info msg="shim disconnected" id=fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717 namespace=k8s.io Mar 17 17:46:07.683225 containerd[1501]: time="2025-03-17T17:46:07.683215560Z" level=warning msg="cleaning up after shim disconnected" id=fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717 namespace=k8s.io Mar 17 17:46:07.683225 containerd[1501]: time="2025-03-17T17:46:07.683224016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:07.935984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717-rootfs.mount: Deactivated successfully. 
Mar 17 17:46:08.560639 kubelet[2722]: E0317 17:46:08.560592 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:08.562768 containerd[1501]: time="2025-03-17T17:46:08.562689961Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:46:08.727338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811269723.mount: Deactivated successfully. Mar 17 17:46:08.736768 containerd[1501]: time="2025-03-17T17:46:08.736716450Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\"" Mar 17 17:46:08.738202 containerd[1501]: time="2025-03-17T17:46:08.737285827Z" level=info msg="StartContainer for \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\"" Mar 17 17:46:08.774926 systemd[1]: Started cri-containerd-d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014.scope - libcontainer container d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014. Mar 17 17:46:08.803669 systemd[1]: cri-containerd-d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014.scope: Deactivated successfully. Mar 17 17:46:08.808489 containerd[1501]: time="2025-03-17T17:46:08.808421597Z" level=info msg="StartContainer for \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\" returns successfully" Mar 17 17:46:08.834693 containerd[1501]: time="2025-03-17T17:46:08.834491852Z" level=info msg="shim disconnected" id=d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014 namespace=k8s.io Mar 17 17:46:08.834693 containerd[1501]: time="2025-03-17T17:46:08.834560955Z" level=warning msg="cleaning up after shim disconnected" id=d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014 namespace=k8s.io Mar 17 17:46:08.834693 containerd[1501]: time="2025-03-17T17:46:08.834569993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:08.935947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014-rootfs.mount: Deactivated successfully. Mar 17 17:46:09.565399 kubelet[2722]: E0317 17:46:09.565287 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:09.567732 containerd[1501]: time="2025-03-17T17:46:09.567648458Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:46:09.589361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100908678.mount: Deactivated successfully. 
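The tmpmount unit names sprinkled through these entries follow systemd's path-escaping rules: a literal "-" inside a path component becomes "\x2d", and "/" separators become "-". A minimal sketch of that escaping, assuming the mount-point path implied by the unit name (the full systemd-escape algorithm handles more characters than this):

```go
package main

import (
	"fmt"
	"strings"
)

// mountUnitName applies the subset of systemd path escaping visible in the
// log: escape literal dashes first, then turn path separators into dashes.
func mountUnitName(path string) string {
	path = strings.Trim(path, "/")
	escaped := strings.ReplaceAll(path, "-", `\x2d`)
	return strings.ReplaceAll(escaped, "/", "-") + ".mount"
}

func main() {
	fmt.Println(mountUnitName("/var/lib/containerd/tmpmounts/containerd-mount3811269723"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount3811269723.mount
}
```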
Mar 17 17:46:09.591959 containerd[1501]: time="2025-03-17T17:46:09.591900666Z" level=info msg="CreateContainer within sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\"" Mar 17 17:46:09.592542 containerd[1501]: time="2025-03-17T17:46:09.592507845Z" level=info msg="StartContainer for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\"" Mar 17 17:46:09.623803 systemd[1]: Started cri-containerd-0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac.scope - libcontainer container 0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac. Mar 17 17:46:09.656303 containerd[1501]: time="2025-03-17T17:46:09.656257783Z" level=info msg="StartContainer for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" returns successfully" Mar 17 17:46:09.763237 kubelet[2722]: I0317 17:46:09.763201 2722 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:46:09.788282 kubelet[2722]: I0317 17:46:09.788223 2722 topology_manager.go:215] "Topology Admit Handler" podUID="eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z6dwl" Mar 17 17:46:09.788473 kubelet[2722]: I0317 17:46:09.788418 2722 topology_manager.go:215] "Topology Admit Handler" podUID="c3470e3b-726d-457e-86d5-fa37074210d8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bs5gk" Mar 17 17:46:09.798100 systemd[1]: Created slice kubepods-burstable-podc3470e3b_726d_457e_86d5_fa37074210d8.slice - libcontainer container kubepods-burstable-podc3470e3b_726d_457e_86d5_fa37074210d8.slice. Mar 17 17:46:09.803857 systemd[1]: Created slice kubepods-burstable-podeeef6847_b62e_44e1_b1ae_2ff5b89a4bf1.slice - libcontainer container kubepods-burstable-podeeef6847_b62e_44e1_b1ae_2ff5b89a4bf1.slice. 
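The five cilium containers started in sandbox ee14e6a3... follow ordinary init-container semantics: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state each run to completion — hence the start / scope-deactivation / shim-cleanup triple logged per step — before the long-running cilium-agent launches. A minimal model of that ordering, with hypothetical runner plumbing:

```go
package main

import "fmt"

// runPod models init-container sequencing: every init step must succeed
// before the next starts, and the main container runs only after all of
// them complete.
func runPod(initContainers []string, main string, run func(string) error) error {
	for _, c := range initContainers {
		if err := run(c); err != nil {
			return fmt.Errorf("init container %s failed: %w", c, err)
		}
	}
	return run(main) // cilium-agent starts only once all init steps succeed
}

func main() {
	steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
	_ = runPod(steps, "cilium-agent", func(name string) error {
		fmt.Println("StartContainer", name)
		return nil
	})
}
```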
Mar 17 17:46:09.962938 kubelet[2722]: I0317 17:46:09.962871 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7hw\" (UniqueName: \"kubernetes.io/projected/c3470e3b-726d-457e-86d5-fa37074210d8-kube-api-access-pv7hw\") pod \"coredns-7db6d8ff4d-bs5gk\" (UID: \"c3470e3b-726d-457e-86d5-fa37074210d8\") " pod="kube-system/coredns-7db6d8ff4d-bs5gk" Mar 17 17:46:09.962938 kubelet[2722]: I0317 17:46:09.962925 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1-config-volume\") pod \"coredns-7db6d8ff4d-z6dwl\" (UID: \"eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1\") " pod="kube-system/coredns-7db6d8ff4d-z6dwl" Mar 17 17:46:09.962938 kubelet[2722]: I0317 17:46:09.962945 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3470e3b-726d-457e-86d5-fa37074210d8-config-volume\") pod \"coredns-7db6d8ff4d-bs5gk\" (UID: \"c3470e3b-726d-457e-86d5-fa37074210d8\") " pod="kube-system/coredns-7db6d8ff4d-bs5gk" Mar 17 17:46:09.963171 kubelet[2722]: I0317 17:46:09.962962 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrrs\" (UniqueName: \"kubernetes.io/projected/eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1-kube-api-access-6xrrs\") pod \"coredns-7db6d8ff4d-z6dwl\" (UID: \"eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1\") " pod="kube-system/coredns-7db6d8ff4d-z6dwl" Mar 17 17:46:10.101577 kubelet[2722]: E0317 17:46:10.101529 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:10.106953 kubelet[2722]: E0317 17:46:10.106919 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:10.109805 containerd[1501]: time="2025-03-17T17:46:10.109759664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs5gk,Uid:c3470e3b-726d-457e-86d5-fa37074210d8,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:10.109963 containerd[1501]: time="2025-03-17T17:46:10.109858124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z6dwl,Uid:eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:10.274053 systemd[1]: Started sshd@9-10.0.0.87:22-10.0.0.1:41330.service - OpenSSH per-connection server daemon (10.0.0.1:41330). Mar 17 17:46:10.329332 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 41330 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:10.331454 sshd-session[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:10.337668 systemd-logind[1479]: New session 10 of user core. Mar 17 17:46:10.342779 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:46:10.534294 sshd[3550]: Connection closed by 10.0.0.1 port 41330 Mar 17 17:46:10.534879 sshd-session[3528]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:10.539456 systemd[1]: sshd@9-10.0.0.87:22-10.0.0.1:41330.service: Deactivated successfully. Mar 17 17:46:10.541610 systemd[1]: session-10.scope: Deactivated successfully. 
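Every reconciler_common entry names its volume with the same convention: the volume plugin, then the pod UID and volume name joined by a dash. Rebuilding one of the coredns entries above — the format is inferred from the log lines themselves:

```go
package main

import "fmt"

// uniqueVolumeName reconstructs the UniqueName strings seen in the
// reconciler entries: <plugin>/<podUID>-<volumeName>.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/configmap",
		"eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1", "config-volume"))
	// kubernetes.io/configmap/eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1-config-volume
}
```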
Mar 17 17:46:10.542238 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:46:10.543118 systemd-logind[1479]: Removed session 10. Mar 17 17:46:10.570084 kubelet[2722]: E0317 17:46:10.570051 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:11.572319 kubelet[2722]: E0317 17:46:11.572268 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:12.137711 systemd-networkd[1409]: cilium_host: Link UP Mar 17 17:46:12.137944 systemd-networkd[1409]: cilium_net: Link UP Mar 17 17:46:12.137950 systemd-networkd[1409]: cilium_net: Gained carrier Mar 17 17:46:12.138196 systemd-networkd[1409]: cilium_host: Gained carrier Mar 17 17:46:12.139362 systemd-networkd[1409]: cilium_host: Gained IPv6LL Mar 17 17:46:12.251059 systemd-networkd[1409]: cilium_vxlan: Link UP Mar 17 17:46:12.251316 systemd-networkd[1409]: cilium_vxlan: Gained carrier Mar 17 17:46:12.375838 systemd-networkd[1409]: cilium_net: Gained IPv6LL Mar 17 17:46:12.475707 kernel: NET: Registered PF_ALG protocol family Mar 17 17:46:12.573482 kubelet[2722]: E0317 17:46:12.573409 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:13.200891 systemd-networkd[1409]: lxc_health: Link UP Mar 17 17:46:13.216005 systemd-networkd[1409]: lxc_health: Gained carrier Mar 17 17:46:13.397945 systemd-networkd[1409]: lxc37473318ee01: Link UP Mar 17 17:46:13.407385 kernel: eth0: renamed from tmpb142a Mar 17 17:46:13.412102 systemd-networkd[1409]: lxc37473318ee01: Gained carrier Mar 17 17:46:13.430199 systemd-networkd[1409]: lxcc7fc3ba0d10e: Link UP Mar 17 17:46:13.445202 kernel: eth0: renamed from tmp9d304 Mar 17 17:46:13.447937 systemd-networkd[1409]: lxcc7fc3ba0d10e: Gained carrier Mar 17 17:46:13.575447 kubelet[2722]: E0317 17:46:13.575250 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:14.143415 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Mar 17 17:46:14.484411 kubelet[2722]: I0317 17:46:14.483752 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ht4lg" podStartSLOduration=10.879833513 podStartE2EDuration="27.483733977s" podCreationTimestamp="2025-03-17 17:45:47 +0000 UTC" firstStartedPulling="2025-03-17 17:45:48.303817677 +0000 UTC m=+14.961086220" lastFinishedPulling="2025-03-17 17:46:04.90771812 +0000 UTC m=+31.564986684" observedRunningTime="2025-03-17 17:46:10.816022269 +0000 UTC m=+37.473290832" watchObservedRunningTime="2025-03-17 17:46:14.483733977 +0000 UTC m=+41.141002541" Mar 17 17:46:14.586562 kubelet[2722]: E0317 17:46:14.586434 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:14.782889 systemd-networkd[1409]: lxcc7fc3ba0d10e: Gained IPv6LL Mar 17 17:46:15.038870 systemd-networkd[1409]: lxc_health: Gained IPv6LL Mar 17 17:46:15.422801 systemd-networkd[1409]: lxc37473318ee01: Gained IPv6LL Mar 17 17:46:15.546793 systemd[1]: Started sshd@10-10.0.0.87:22-10.0.0.1:38056.service - 
OpenSSH per-connection server daemon (10.0.0.1:38056). Mar 17 17:46:15.588846 kubelet[2722]: E0317 17:46:15.588788 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:15.606412 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 38056 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:15.608331 sshd-session[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:15.613005 systemd-logind[1479]: New session 11 of user core. Mar 17 17:46:15.620802 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:46:15.791346 sshd[3976]: Connection closed by 10.0.0.1 port 38056 Mar 17 17:46:15.793014 sshd-session[3974]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:15.797006 systemd[1]: sshd@10-10.0.0.87:22-10.0.0.1:38056.service: Deactivated successfully. Mar 17 17:46:15.799431 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:46:15.800338 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:46:15.801249 systemd-logind[1479]: Removed session 11. Mar 17 17:46:16.976296 containerd[1501]: time="2025-03-17T17:46:16.975320223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:16.976296 containerd[1501]: time="2025-03-17T17:46:16.975492744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:16.976296 containerd[1501]: time="2025-03-17T17:46:16.975526107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:16.977041 containerd[1501]: time="2025-03-17T17:46:16.976591188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:16.978078 containerd[1501]: time="2025-03-17T17:46:16.977973919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:16.983827 containerd[1501]: time="2025-03-17T17:46:16.981775707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:16.983827 containerd[1501]: time="2025-03-17T17:46:16.981806455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:16.983827 containerd[1501]: time="2025-03-17T17:46:16.981924662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:17.001766 systemd[1]: Started cri-containerd-9d304f5b7984eebe7e027d600faaecbaf55b2b1deaed06c75138a95eb3335f1d.scope - libcontainer container 9d304f5b7984eebe7e027d600faaecbaf55b2b1deaed06c75138a95eb3335f1d. Mar 17 17:46:17.007416 systemd[1]: Started cri-containerd-b142aeef4fcdae77700a993b175a51e04c2cc464ee729b8b0fde25024bd94e59.scope - libcontainer container b142aeef4fcdae77700a993b175a51e04c2cc464ee729b8b0fde25024bd94e59. 
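The systemd-networkd lines earlier in this stretch trace Cilium's datapath coming up: cilium_host/cilium_net are a veth pair for the host side of the datapath, cilium_vxlan is the overlay device, and each lxc* interface is the host end of a pod veth (the kernel "eth0: renamed from tmpb142a/tmp9d304" lines are the container ends being moved into the pod namespaces; note the tmp names match the sandbox ids that appear below). One way to inspect those links from the node, sketched with the github.com/vishvananda/netlink package (assumed available):

    package main

    import (
        "fmt"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            panic(err)
        }
        for _, l := range links {
            a := l.Attrs()
            if strings.HasPrefix(a.Name, "cilium_") || strings.HasPrefix(a.Name, "lxc") {
                // OperState mirrors the "Gained carrier"/"Lost carrier" transitions above.
                fmt.Printf("%-20s type=%-8s state=%s\n", a.Name, l.Type(), a.OperState)
            }
        }
    }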
Mar 17 17:46:17.020727 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:46:17.031405 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:46:17.065085 containerd[1501]: time="2025-03-17T17:46:17.065025318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z6dwl,Uid:eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d304f5b7984eebe7e027d600faaecbaf55b2b1deaed06c75138a95eb3335f1d\"" Mar 17 17:46:17.067391 kubelet[2722]: E0317 17:46:17.067351 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:17.069821 containerd[1501]: time="2025-03-17T17:46:17.069709853Z" level=info msg="CreateContainer within sandbox \"9d304f5b7984eebe7e027d600faaecbaf55b2b1deaed06c75138a95eb3335f1d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:46:17.075266 containerd[1501]: time="2025-03-17T17:46:17.075207134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bs5gk,Uid:c3470e3b-726d-457e-86d5-fa37074210d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b142aeef4fcdae77700a993b175a51e04c2cc464ee729b8b0fde25024bd94e59\"" Mar 17 17:46:17.075790 kubelet[2722]: E0317 17:46:17.075762 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:17.078915 containerd[1501]: time="2025-03-17T17:46:17.078809795Z" level=info msg="CreateContainer within sandbox \"b142aeef4fcdae77700a993b175a51e04c2cc464ee729b8b0fde25024bd94e59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:46:17.097533 containerd[1501]: time="2025-03-17T17:46:17.097497325Z" level=info msg="CreateContainer within sandbox \"9d304f5b7984eebe7e027d600faaecbaf55b2b1deaed06c75138a95eb3335f1d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cac50f9687cd6c51e6d595563de389e499ccfd0ddc4adccfc11d3047a8a092a\"" Mar 17 17:46:17.098046 containerd[1501]: time="2025-03-17T17:46:17.097965061Z" level=info msg="StartContainer for \"0cac50f9687cd6c51e6d595563de389e499ccfd0ddc4adccfc11d3047a8a092a\"" Mar 17 17:46:17.104652 containerd[1501]: time="2025-03-17T17:46:17.103432235Z" level=info msg="CreateContainer within sandbox \"b142aeef4fcdae77700a993b175a51e04c2cc464ee729b8b0fde25024bd94e59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fc3bf07ce6e7e10af6c2fad43f660772e8a8d015d1be4fdc30b7c69c6bc7f8a\"" Mar 17 17:46:17.104652 containerd[1501]: time="2025-03-17T17:46:17.104147314Z" level=info msg="StartContainer for \"6fc3bf07ce6e7e10af6c2fad43f660772e8a8d015d1be4fdc30b7c69c6bc7f8a\"" Mar 17 17:46:17.129867 systemd[1]: Started cri-containerd-0cac50f9687cd6c51e6d595563de389e499ccfd0ddc4adccfc11d3047a8a092a.scope - libcontainer container 0cac50f9687cd6c51e6d595563de389e499ccfd0ddc4adccfc11d3047a8a092a. Mar 17 17:46:17.144118 systemd[1]: Started cri-containerd-6fc3bf07ce6e7e10af6c2fad43f660772e8a8d015d1be4fdc30b7c69c6bc7f8a.scope - libcontainer container 6fc3bf07ce6e7e10af6c2fad43f660772e8a8d015d1be4fdc30b7c69c6bc7f8a. 
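The two systemd-resolved complaints are benign here; they fire while the new sandbox network namespaces are being wired up. The surrounding containerd/kubelet lines show the CRI sequence for each coredns pod: RunPodSandbox returns a sandbox id, CreateContainer places a coredns container inside it, and StartContainer runs it. The same sequence, sketched against the k8s.io/cri-api v1 client with placeholder values and error handling elided:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // "RunPodSandbox for &PodSandboxMetadata{...}" -> returns the sandbox id.
        sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "coredns-7db6d8ff4d-z6dwl",
                    Namespace: "kube-system",
                    Uid:       "eeef6847-b62e-44e1-b1ae-2ff5b89a4bf1",
                },
            },
        })

        // "CreateContainer within sandbox ..." -> returns the container id.
        c, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
            },
        })

        // "StartContainer for ... returns successfully".
        rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
    }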
Mar 17 17:46:17.174463 containerd[1501]: time="2025-03-17T17:46:17.174401641Z" level=info msg="StartContainer for \"0cac50f9687cd6c51e6d595563de389e499ccfd0ddc4adccfc11d3047a8a092a\" returns successfully" Mar 17 17:46:17.180963 containerd[1501]: time="2025-03-17T17:46:17.180441863Z" level=info msg="StartContainer for \"6fc3bf07ce6e7e10af6c2fad43f660772e8a8d015d1be4fdc30b7c69c6bc7f8a\" returns successfully" Mar 17 17:46:17.594964 kubelet[2722]: E0317 17:46:17.594828 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:17.597949 kubelet[2722]: E0317 17:46:17.597919 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:17.608121 kubelet[2722]: I0317 17:46:17.607960 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z6dwl" podStartSLOduration=30.60782422 podStartE2EDuration="30.60782422s" podCreationTimestamp="2025-03-17 17:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:46:17.607097016 +0000 UTC m=+44.264365589" watchObservedRunningTime="2025-03-17 17:46:17.60782422 +0000 UTC m=+44.265092783" Mar 17 17:46:17.617722 kubelet[2722]: I0317 17:46:17.617569 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bs5gk" podStartSLOduration=30.617550042 podStartE2EDuration="30.617550042s" podCreationTimestamp="2025-03-17 17:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:46:17.617523931 +0000 UTC m=+44.274792484" watchObservedRunningTime="2025-03-17 17:46:17.617550042 +0000 UTC m=+44.274818595" Mar 17 17:46:17.982402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615294480.mount: Deactivated successfully. Mar 17 17:46:18.598222 kubelet[2722]: E0317 17:46:18.598170 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:18.598689 kubelet[2722]: E0317 17:46:18.598301 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:19.599515 kubelet[2722]: E0317 17:46:19.599469 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:20.821139 systemd[1]: Started sshd@11-10.0.0.87:22-10.0.0.1:38058.service - OpenSSH per-connection server daemon (10.0.0.1:38058). Mar 17 17:46:20.861262 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 38058 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:20.863066 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:20.867945 systemd-logind[1479]: New session 12 of user core. Mar 17 17:46:20.876746 systemd[1]: Started session-12.scope - Session 12 of User core. 
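The pod_startup_latency_tracker entries are worth decoding. For cilium-ht4lg (further up), podStartSLOduration=10.879833513s is the end-to-end startup (27.483733977s) minus the image-pull window (firstStartedPulling to lastFinishedPulling), which the startup SLI excludes. For the two coredns pods here, firstStartedPulling/lastFinishedPulling are Go's zero time.Time ("0001-01-01 00:00:00 +0000 UTC"): no pull happened, so the SLO and E2E durations coincide (30.607s and 30.617s). A sketch of the arithmetic, which reproduces the logged figure to within tens of nanoseconds (the tracker subtracts its own internal timestamps):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // cilium-ht4lg: the pull window is subtracted from the E2E duration.
        firstPull := parse("2025-03-17 17:45:48.303817677 +0000 UTC")
        lastPull := parse("2025-03-17 17:46:04.90771812 +0000 UTC")
        e2e := 27483733977 * time.Nanosecond           // podStartE2EDuration="27.483733977s"
        fmt.Println(e2e - lastPull.Sub(firstPull))     // ~10.8798s, the podStartSLOduration

        // coredns: zero-valued pull timestamps mean the image was already present.
        var zero time.Time
        fmt.Println(zero.IsZero(), zero.Format(layout)) // true 0001-01-01 00:00:00 +0000 UTC
    }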
Mar 17 17:46:21.003509 sshd[4173]: Connection closed by 10.0.0.1 port 38058 Mar 17 17:46:21.004377 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:21.012511 systemd[1]: sshd@11-10.0.0.87:22-10.0.0.1:38058.service: Deactivated successfully. Mar 17 17:46:21.014553 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:46:21.017011 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:46:21.025292 systemd[1]: Started sshd@12-10.0.0.87:22-10.0.0.1:37810.service - OpenSSH per-connection server daemon (10.0.0.1:37810). Mar 17 17:46:21.026483 systemd-logind[1479]: Removed session 12. Mar 17 17:46:21.060147 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:21.061696 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:21.065648 systemd-logind[1479]: New session 13 of user core. Mar 17 17:46:21.072748 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:46:21.325861 sshd[4188]: Connection closed by 10.0.0.1 port 37810 Mar 17 17:46:21.326346 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:21.335059 systemd[1]: sshd@12-10.0.0.87:22-10.0.0.1:37810.service: Deactivated successfully. Mar 17 17:46:21.337481 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:46:21.339361 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:46:21.345325 systemd[1]: Started sshd@13-10.0.0.87:22-10.0.0.1:37816.service - OpenSSH per-connection server daemon (10.0.0.1:37816). Mar 17 17:46:21.346457 systemd-logind[1479]: Removed session 13. Mar 17 17:46:21.383376 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 37816 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:21.385608 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:21.392128 systemd-logind[1479]: New session 14 of user core. Mar 17 17:46:21.399908 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:46:21.520209 sshd[4200]: Connection closed by 10.0.0.1 port 37816 Mar 17 17:46:21.520658 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:21.525368 systemd[1]: sshd@13-10.0.0.87:22-10.0.0.1:37816.service: Deactivated successfully. Mar 17 17:46:21.527821 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:46:21.528585 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:46:21.529673 systemd-logind[1479]: Removed session 14. Mar 17 17:46:26.538091 systemd[1]: Started sshd@14-10.0.0.87:22-10.0.0.1:37820.service - OpenSSH per-connection server daemon (10.0.0.1:37820). Mar 17 17:46:26.581881 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 37820 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:26.583713 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:26.588177 systemd-logind[1479]: New session 15 of user core. Mar 17 17:46:26.594785 systemd[1]: Started session-15.scope - Session 15 of User core. 
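The sshd@N-10.0.0.87:22-10.0.0.1:PORT.service units threaded through this whole stretch are per-connection instances of a templated, socket-activated SSH service: an sshd.socket with Accept=yes accepts each TCP connection and spawns one sshd@.service instance named after the local and remote endpoints, and each login then gets its own session-N.scope via pam_systemd/logind. An illustrative unit pair, assumed rather than copied from Flatcar's actual files:

    # sshd.socket -- accept each connection, spawn a per-connection instance
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service -- one daemon per connection, stdio wired to the socket
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket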
Mar 17 17:46:26.717368 sshd[4216]: Connection closed by 10.0.0.1 port 37820 Mar 17 17:46:26.717907 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:26.722909 systemd[1]: sshd@14-10.0.0.87:22-10.0.0.1:37820.service: Deactivated successfully. Mar 17 17:46:26.725267 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:46:26.726167 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:46:26.727219 systemd-logind[1479]: Removed session 15. Mar 17 17:46:31.728664 systemd[1]: Started sshd@15-10.0.0.87:22-10.0.0.1:58052.service - OpenSSH per-connection server daemon (10.0.0.1:58052). Mar 17 17:46:31.773141 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 58052 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:31.774770 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:31.778893 systemd-logind[1479]: New session 16 of user core. Mar 17 17:46:31.793772 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:46:31.904402 sshd[4230]: Connection closed by 10.0.0.1 port 58052 Mar 17 17:46:31.904875 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:31.912349 systemd[1]: sshd@15-10.0.0.87:22-10.0.0.1:58052.service: Deactivated successfully. Mar 17 17:46:31.914328 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:46:31.915961 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:46:31.924901 systemd[1]: Started sshd@16-10.0.0.87:22-10.0.0.1:58062.service - OpenSSH per-connection server daemon (10.0.0.1:58062). Mar 17 17:46:31.926277 systemd-logind[1479]: Removed session 16. Mar 17 17:46:31.960998 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 58062 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:31.962356 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:31.966431 systemd-logind[1479]: New session 17 of user core. Mar 17 17:46:31.975315 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:46:32.242412 sshd[4244]: Connection closed by 10.0.0.1 port 58062 Mar 17 17:46:32.243030 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:32.250519 systemd[1]: sshd@16-10.0.0.87:22-10.0.0.1:58062.service: Deactivated successfully. Mar 17 17:46:32.252716 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:46:32.254430 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:46:32.265918 systemd[1]: Started sshd@17-10.0.0.87:22-10.0.0.1:58072.service - OpenSSH per-connection server daemon (10.0.0.1:58072). Mar 17 17:46:32.267028 systemd-logind[1479]: Removed session 17. Mar 17 17:46:32.305174 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 58072 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:32.306759 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:32.310753 systemd-logind[1479]: New session 18 of user core. Mar 17 17:46:32.320758 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 17 17:46:33.832648 sshd[4256]: Connection closed by 10.0.0.1 port 58072 Mar 17 17:46:33.834137 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:33.845196 systemd[1]: sshd@17-10.0.0.87:22-10.0.0.1:58072.service: Deactivated successfully. Mar 17 17:46:33.847546 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:46:33.849040 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:46:33.857324 systemd[1]: Started sshd@18-10.0.0.87:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082). Mar 17 17:46:33.861704 systemd-logind[1479]: Removed session 18. Mar 17 17:46:33.893802 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:33.895432 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:33.899528 systemd-logind[1479]: New session 19 of user core. Mar 17 17:46:33.913758 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:46:34.145450 sshd[4297]: Connection closed by 10.0.0.1 port 58082 Mar 17 17:46:34.145901 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:34.158039 systemd[1]: sshd@18-10.0.0.87:22-10.0.0.1:58082.service: Deactivated successfully. Mar 17 17:46:34.160481 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:46:34.162769 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:46:34.172951 systemd[1]: Started sshd@19-10.0.0.87:22-10.0.0.1:58098.service - OpenSSH per-connection server daemon (10.0.0.1:58098). Mar 17 17:46:34.174051 systemd-logind[1479]: Removed session 19. Mar 17 17:46:34.208775 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 58098 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:34.210260 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:34.214144 systemd-logind[1479]: New session 20 of user core. Mar 17 17:46:34.221738 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:46:34.328900 sshd[4309]: Connection closed by 10.0.0.1 port 58098 Mar 17 17:46:34.329295 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:34.333440 systemd[1]: sshd@19-10.0.0.87:22-10.0.0.1:58098.service: Deactivated successfully. Mar 17 17:46:34.335591 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:46:34.336335 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:46:34.337361 systemd-logind[1479]: Removed session 20. Mar 17 17:46:39.340369 systemd[1]: Started sshd@20-10.0.0.87:22-10.0.0.1:58108.service - OpenSSH per-connection server daemon (10.0.0.1:58108). Mar 17 17:46:39.380652 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 58108 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:39.382331 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:39.386273 systemd-logind[1479]: New session 21 of user core. Mar 17 17:46:39.402760 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 17 17:46:39.523162 sshd[4323]: Connection closed by 10.0.0.1 port 58108 Mar 17 17:46:39.523520 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:39.528215 systemd[1]: sshd@20-10.0.0.87:22-10.0.0.1:58108.service: Deactivated successfully. Mar 17 17:46:39.530425 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:46:39.531127 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:46:39.532330 systemd-logind[1479]: Removed session 21. Mar 17 17:46:44.535093 systemd[1]: Started sshd@21-10.0.0.87:22-10.0.0.1:60304.service - OpenSSH per-connection server daemon (10.0.0.1:60304). Mar 17 17:46:44.576232 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 60304 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:44.577929 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:44.581855 systemd-logind[1479]: New session 22 of user core. Mar 17 17:46:44.591743 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:46:44.710241 sshd[4341]: Connection closed by 10.0.0.1 port 60304 Mar 17 17:46:44.710671 sshd-session[4339]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:44.715054 systemd[1]: sshd@21-10.0.0.87:22-10.0.0.1:60304.service: Deactivated successfully. Mar 17 17:46:44.717356 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:46:44.718068 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:46:44.719055 systemd-logind[1479]: Removed session 22. Mar 17 17:46:49.723049 systemd[1]: Started sshd@22-10.0.0.87:22-10.0.0.1:60316.service - OpenSSH per-connection server daemon (10.0.0.1:60316). Mar 17 17:46:49.764638 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 60316 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:49.766345 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:49.771142 systemd-logind[1479]: New session 23 of user core. Mar 17 17:46:49.783800 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:46:49.892719 sshd[4357]: Connection closed by 10.0.0.1 port 60316 Mar 17 17:46:49.894491 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:49.898316 systemd[1]: sshd@22-10.0.0.87:22-10.0.0.1:60316.service: Deactivated successfully. Mar 17 17:46:49.900579 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:46:49.901291 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:46:49.902183 systemd-logind[1479]: Removed session 23. Mar 17 17:46:50.466408 kubelet[2722]: E0317 17:46:50.466357 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:53.466098 kubelet[2722]: E0317 17:46:53.466046 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:54.905754 systemd[1]: Started sshd@23-10.0.0.87:22-10.0.0.1:53726.service - OpenSSH per-connection server daemon (10.0.0.1:53726). 
Mar 17 17:46:54.946859 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 53726 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:54.948605 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:54.952917 systemd-logind[1479]: New session 24 of user core. Mar 17 17:46:54.974820 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:46:55.077612 sshd[4371]: Connection closed by 10.0.0.1 port 53726 Mar 17 17:46:55.078112 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:55.090599 systemd[1]: sshd@23-10.0.0.87:22-10.0.0.1:53726.service: Deactivated successfully. Mar 17 17:46:55.092491 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:46:55.093990 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:46:55.095263 systemd[1]: Started sshd@24-10.0.0.87:22-10.0.0.1:53742.service - OpenSSH per-connection server daemon (10.0.0.1:53742). Mar 17 17:46:55.096091 systemd-logind[1479]: Removed session 24. Mar 17 17:46:55.136271 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 53742 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:55.137878 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:55.141833 systemd-logind[1479]: New session 25 of user core. Mar 17 17:46:55.155773 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:46:55.466068 kubelet[2722]: E0317 17:46:55.465933 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:56.466102 kubelet[2722]: E0317 17:46:56.466042 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:56.559965 containerd[1501]: time="2025-03-17T17:46:56.559868734Z" level=info msg="StopContainer for \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\" with timeout 30 (s)" Mar 17 17:46:56.576299 containerd[1501]: time="2025-03-17T17:46:56.576246165Z" level=info msg="Stop container \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\" with signal terminated" Mar 17 17:46:56.586840 systemd[1]: run-containerd-runc-k8s.io-0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac-runc.kX9dyo.mount: Deactivated successfully. Mar 17 17:46:56.588773 systemd[1]: cri-containerd-b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96.scope: Deactivated successfully. 
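The "StopContainer ... with timeout 30 (s)" / "with signal terminated" pair that closes this block reflects CRI stop semantics: the runtime delivers SIGTERM and escalates to SIGKILL only if the container outlives the grace period. Sketched against the k8s.io/cri-api v1 client (connection setup as in the earlier sandbox sketch):

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // stopWithGrace mirrors the StopContainer call above: SIGTERM first,
    // SIGKILL if the container is still alive after timeout seconds.
    func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        id string, timeout int64) error {
        _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: id,
            Timeout:     timeout, // 30 in the log lines above
        })
        return err
    }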
Mar 17 17:46:56.608501 containerd[1501]: time="2025-03-17T17:46:56.608397064Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:46:56.611023 containerd[1501]: time="2025-03-17T17:46:56.610974410Z" level=info msg="StopContainer for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" with timeout 2 (s)" Mar 17 17:46:56.611270 containerd[1501]: time="2025-03-17T17:46:56.611249991Z" level=info msg="Stop container \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" with signal terminated" Mar 17 17:46:56.616520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96-rootfs.mount: Deactivated successfully. Mar 17 17:46:56.618932 systemd-networkd[1409]: lxc_health: Link DOWN Mar 17 17:46:56.618939 systemd-networkd[1409]: lxc_health: Lost carrier Mar 17 17:46:56.637042 containerd[1501]: time="2025-03-17T17:46:56.636941291Z" level=info msg="shim disconnected" id=b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96 namespace=k8s.io Mar 17 17:46:56.637042 containerd[1501]: time="2025-03-17T17:46:56.637026282Z" level=warning msg="cleaning up after shim disconnected" id=b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96 namespace=k8s.io Mar 17 17:46:56.637042 containerd[1501]: time="2025-03-17T17:46:56.637040108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:56.646229 systemd[1]: cri-containerd-0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac.scope: Deactivated successfully. Mar 17 17:46:56.646651 systemd[1]: cri-containerd-0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac.scope: Consumed 7.239s CPU time. Mar 17 17:46:56.658636 containerd[1501]: time="2025-03-17T17:46:56.658561520Z" level=info msg="StopContainer for \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\" returns successfully" Mar 17 17:46:56.662960 containerd[1501]: time="2025-03-17T17:46:56.662912500Z" level=info msg="StopPodSandbox for \"08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42\"" Mar 17 17:46:56.670607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac-rootfs.mount: Deactivated successfully. Mar 17 17:46:56.676110 containerd[1501]: time="2025-03-17T17:46:56.662969619Z" level=info msg="Container to stop \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:46:56.677375 containerd[1501]: time="2025-03-17T17:46:56.677327229Z" level=info msg="shim disconnected" id=0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac namespace=k8s.io Mar 17 17:46:56.677428 containerd[1501]: time="2025-03-17T17:46:56.677373185Z" level=warning msg="cleaning up after shim disconnected" id=0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac namespace=k8s.io Mar 17 17:46:56.677428 containerd[1501]: time="2025-03-17T17:46:56.677384187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:56.678350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42-shm.mount: Deactivated successfully. 
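The containerd error at the top of this block explains much of what follows: containerd keeps a filesystem watch on /etc/cni/net.d and reloads its CNI config on changes, and removing 05-cilium.conf left no network config at all, so the CNI plugin reverts to "not initialized" (the kubelet "Container runtime network not ready ... NetworkPluginNotReady" error near the end of this section is the same condition surfacing in node status). A minimal sketch of such a directory watch, using github.com/fsnotify/fsnotify (assumed available):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&fsnotify.Remove != 0 {
                // With 05-cilium.conf gone there is no network config left,
                // hence "cni plugin not initialized" until a new conf appears.
                log.Printf("fs change event(REMOVE %q); reloading CNI config", ev.Name)
            }
        }
    }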
Mar 17 17:46:56.684828 systemd[1]: cri-containerd-08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42.scope: Deactivated successfully. Mar 17 17:46:56.698950 containerd[1501]: time="2025-03-17T17:46:56.698891301Z" level=info msg="StopContainer for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" returns successfully" Mar 17 17:46:56.699335 containerd[1501]: time="2025-03-17T17:46:56.699314212Z" level=info msg="StopPodSandbox for \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\"" Mar 17 17:46:56.699461 containerd[1501]: time="2025-03-17T17:46:56.699346452Z" level=info msg="Container to stop \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:46:56.699461 containerd[1501]: time="2025-03-17T17:46:56.699381158Z" level=info msg="Container to stop \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:46:56.699461 containerd[1501]: time="2025-03-17T17:46:56.699389634Z" level=info msg="Container to stop \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:46:56.699461 containerd[1501]: time="2025-03-17T17:46:56.699398030Z" level=info msg="Container to stop \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:46:56.699461 containerd[1501]: time="2025-03-17T17:46:56.699406155Z" level=info msg="Container to stop \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:46:56.708673 systemd[1]: cri-containerd-ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367.scope: Deactivated successfully. 
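The CONTAINER_EXITED strings in the "Container to stop ..." lines are the CRI container-state enum: when a sandbox is torn down, only containers still running (or in an unknown state) are signaled, while already-exited ones, like the cilium pod's completed init containers here, are merely logged. A sketch of that filter over the k8s.io/cri-api states:

    package crisketch

    import runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"

    // needsStop reports whether sandbox teardown should signal a container;
    // CONTAINER_CREATED and CONTAINER_EXITED are skipped, as in the log above.
    func needsStop(s runtimeapi.ContainerState) bool {
        switch s {
        case runtimeapi.ContainerState_CONTAINER_RUNNING,
            runtimeapi.ContainerState_CONTAINER_UNKNOWN:
            return true
        default:
            return false
        }
    }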
Mar 17 17:46:56.715165 containerd[1501]: time="2025-03-17T17:46:56.714999022Z" level=info msg="shim disconnected" id=08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42 namespace=k8s.io Mar 17 17:46:56.715165 containerd[1501]: time="2025-03-17T17:46:56.715080796Z" level=warning msg="cleaning up after shim disconnected" id=08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42 namespace=k8s.io Mar 17 17:46:56.715165 containerd[1501]: time="2025-03-17T17:46:56.715094302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:56.734759 containerd[1501]: time="2025-03-17T17:46:56.734608497Z" level=info msg="TearDown network for sandbox \"08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42\" successfully" Mar 17 17:46:56.734759 containerd[1501]: time="2025-03-17T17:46:56.734659413Z" level=info msg="StopPodSandbox for \"08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42\" returns successfully" Mar 17 17:46:56.736709 containerd[1501]: time="2025-03-17T17:46:56.736465269Z" level=info msg="shim disconnected" id=ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367 namespace=k8s.io Mar 17 17:46:56.736709 containerd[1501]: time="2025-03-17T17:46:56.736528359Z" level=warning msg="cleaning up after shim disconnected" id=ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367 namespace=k8s.io Mar 17 17:46:56.736709 containerd[1501]: time="2025-03-17T17:46:56.736538067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:56.757429 containerd[1501]: time="2025-03-17T17:46:56.757273902Z" level=info msg="TearDown network for sandbox \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" successfully" Mar 17 17:46:56.757429 containerd[1501]: time="2025-03-17T17:46:56.757313969Z" level=info msg="StopPodSandbox for \"ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367\" returns successfully" Mar 17 17:46:56.826775 kubelet[2722]: I0317 17:46:56.826701 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rhx7\" (UniqueName: \"kubernetes.io/projected/653194cf-6ff9-44e6-a56f-8e853b111cf1-kube-api-access-6rhx7\") pod \"653194cf-6ff9-44e6-a56f-8e853b111cf1\" (UID: \"653194cf-6ff9-44e6-a56f-8e853b111cf1\") " Mar 17 17:46:56.826775 kubelet[2722]: I0317 17:46:56.826751 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-run\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.826775 kubelet[2722]: I0317 17:46:56.826770 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-lib-modules\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.826775 kubelet[2722]: I0317 17:46:56.826799 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-cgroup\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827174 kubelet[2722]: I0317 17:46:56.826819 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/653194cf-6ff9-44e6-a56f-8e853b111cf1-cilium-config-path\") pod \"653194cf-6ff9-44e6-a56f-8e853b111cf1\" (UID: \"653194cf-6ff9-44e6-a56f-8e853b111cf1\") " Mar 17 17:46:56.827174 kubelet[2722]: I0317 17:46:56.826838 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-etc-cni-netd\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827174 kubelet[2722]: I0317 17:46:56.826865 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-kernel\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827174 kubelet[2722]: I0317 17:46:56.826879 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-hubble-tls\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827174 kubelet[2722]: I0317 17:46:56.826884 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.827174 kubelet[2722]: I0317 17:46:56.826897 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/310676e6-6288-4c89-86b8-0ade01ffbc34-clustermesh-secrets\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827388 kubelet[2722]: I0317 17:46:56.826990 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df7mn\" (UniqueName: \"kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-kube-api-access-df7mn\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827388 kubelet[2722]: I0317 17:46:56.827014 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-xtables-lock\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827388 kubelet[2722]: I0317 17:46:56.827035 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-net\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827388 kubelet[2722]: I0317 17:46:56.827056 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-config-path\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827388 kubelet[2722]: I0317 17:46:56.827080 2722 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-hostproc\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.827388 kubelet[2722]: I0317 17:46:56.827119 2722 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.827750 kubelet[2722]: I0317 17:46:56.827146 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-hostproc" (OuterVolumeSpecName: "hostproc") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.827750 kubelet[2722]: I0317 17:46:56.827170 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.827750 kubelet[2722]: I0317 17:46:56.827385 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.827750 kubelet[2722]: I0317 17:46:56.827438 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.830662 kubelet[2722]: I0317 17:46:56.829779 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.830662 kubelet[2722]: I0317 17:46:56.829821 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.830662 kubelet[2722]: I0317 17:46:56.829867 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.830933 kubelet[2722]: I0317 17:46:56.830891 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-kube-api-access-df7mn" (OuterVolumeSpecName: "kube-api-access-df7mn") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "kube-api-access-df7mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:46:56.832279 kubelet[2722]: I0317 17:46:56.832241 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/310676e6-6288-4c89-86b8-0ade01ffbc34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:46:56.833203 kubelet[2722]: I0317 17:46:56.833168 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:46:56.833357 kubelet[2722]: I0317 17:46:56.833315 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653194cf-6ff9-44e6-a56f-8e853b111cf1-kube-api-access-6rhx7" (OuterVolumeSpecName: "kube-api-access-6rhx7") pod "653194cf-6ff9-44e6-a56f-8e853b111cf1" (UID: "653194cf-6ff9-44e6-a56f-8e853b111cf1"). InnerVolumeSpecName "kube-api-access-6rhx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:46:56.833587 kubelet[2722]: I0317 17:46:56.833550 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653194cf-6ff9-44e6-a56f-8e853b111cf1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "653194cf-6ff9-44e6-a56f-8e853b111cf1" (UID: "653194cf-6ff9-44e6-a56f-8e853b111cf1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:46:56.834545 kubelet[2722]: I0317 17:46:56.834503 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:46:56.927302 kubelet[2722]: I0317 17:46:56.927238 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-bpf-maps\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.927302 kubelet[2722]: I0317 17:46:56.927288 2722 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cni-path\") pod \"310676e6-6288-4c89-86b8-0ade01ffbc34\" (UID: \"310676e6-6288-4c89-86b8-0ade01ffbc34\") " Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927322 2722 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927335 2722 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/653194cf-6ff9-44e6-a56f-8e853b111cf1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927323 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927347 2722 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927372 2722 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cni-path" (OuterVolumeSpecName: "cni-path") pod "310676e6-6288-4c89-86b8-0ade01ffbc34" (UID: "310676e6-6288-4c89-86b8-0ade01ffbc34"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927377 2722 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927538 kubelet[2722]: I0317 17:46:56.927396 2722 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927407 2722 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927417 2722 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/310676e6-6288-4c89-86b8-0ade01ffbc34-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927429 2722 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-df7mn\" (UniqueName: \"kubernetes.io/projected/310676e6-6288-4c89-86b8-0ade01ffbc34-kube-api-access-df7mn\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927440 2722 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927451 2722 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927461 2722 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/310676e6-6288-4c89-86b8-0ade01ffbc34-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927471 2722 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:56.927769 kubelet[2722]: I0317 17:46:56.927496 2722 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6rhx7\" (UniqueName: \"kubernetes.io/projected/653194cf-6ff9-44e6-a56f-8e853b111cf1-kube-api-access-6rhx7\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:57.028769 kubelet[2722]: I0317 17:46:57.028566 2722 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:57.028769 kubelet[2722]: I0317 17:46:57.028596 2722 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/310676e6-6288-4c89-86b8-0ade01ffbc34-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:46:57.473916 systemd[1]: Removed slice kubepods-burstable-pod310676e6_6288_4c89_86b8_0ade01ffbc34.slice - libcontainer container kubepods-burstable-pod310676e6_6288_4c89_86b8_0ade01ffbc34.slice. 
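The UnmountVolume / TearDown-succeeded / "Volume detached" triples above are kubelet's volume-manager reconciler at work: with both pods gone, every volume drops out of the desired state of world, gets unmounted from the actual state, and is marked detached. The escaped var-lib-kubelet-...\x2d....mount units deactivated just below are the corresponding systemd mount units ("-" and "~" in paths are encoded as \x2d and \x7e). A shape-only sketch of the reconcile loop, with illustrative types rather than kubelet's real ones:

    package volumesketch

    type volumeName string

    // reconcile unmounts every mounted volume no pod wants anymore.
    func reconcile(desired map[volumeName]bool, mounted []volumeName,
        tearDown func(volumeName) error, markDetached func(volumeName)) {
        for _, v := range mounted {
            if desired[v] {
                continue // still wanted by some pod
            }
            // "operationExecutor.UnmountVolume started for volume ..."
            if err := tearDown(v); err != nil {
                continue // retried on the next reconcile pass
            }
            // "UnmountVolume.TearDown succeeded ..." then "Volume detached ..."
            markDetached(v)
        }
    }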
Mar 17 17:46:57.474010 systemd[1]: kubepods-burstable-pod310676e6_6288_4c89_86b8_0ade01ffbc34.slice: Consumed 7.352s CPU time. Mar 17 17:46:57.475100 systemd[1]: Removed slice kubepods-besteffort-pod653194cf_6ff9_44e6_a56f_8e853b111cf1.slice - libcontainer container kubepods-besteffort-pod653194cf_6ff9_44e6_a56f_8e853b111cf1.slice. Mar 17 17:46:57.579835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367-rootfs.mount: Deactivated successfully. Mar 17 17:46:57.579999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee14e6a3666ae7a5fc3f465a566c5f218cf9df2f82b9d28dcb7e8f3178aa0367-shm.mount: Deactivated successfully. Mar 17 17:46:57.580109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08c37106898cf6839746befb8555d80925d382a00eaebec0d863018539fcdc42-rootfs.mount: Deactivated successfully. Mar 17 17:46:57.580216 systemd[1]: var-lib-kubelet-pods-653194cf\x2d6ff9\x2d44e6\x2da56f\x2d8e853b111cf1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6rhx7.mount: Deactivated successfully. Mar 17 17:46:57.580337 systemd[1]: var-lib-kubelet-pods-310676e6\x2d6288\x2d4c89\x2d86b8\x2d0ade01ffbc34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddf7mn.mount: Deactivated successfully. Mar 17 17:46:57.580451 systemd[1]: var-lib-kubelet-pods-310676e6\x2d6288\x2d4c89\x2d86b8\x2d0ade01ffbc34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:46:57.580567 systemd[1]: var-lib-kubelet-pods-310676e6\x2d6288\x2d4c89\x2d86b8\x2d0ade01ffbc34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:46:57.704490 kubelet[2722]: I0317 17:46:57.704453 2722 scope.go:117] "RemoveContainer" containerID="0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac" Mar 17 17:46:57.715001 containerd[1501]: time="2025-03-17T17:46:57.714443397Z" level=info msg="RemoveContainer for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\"" Mar 17 17:46:57.718597 containerd[1501]: time="2025-03-17T17:46:57.718553741Z" level=info msg="RemoveContainer for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" returns successfully" Mar 17 17:46:57.718907 kubelet[2722]: I0317 17:46:57.718870 2722 scope.go:117] "RemoveContainer" containerID="d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014" Mar 17 17:46:57.719878 containerd[1501]: time="2025-03-17T17:46:57.719848490Z" level=info msg="RemoveContainer for \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\"" Mar 17 17:46:57.724127 containerd[1501]: time="2025-03-17T17:46:57.723987279Z" level=info msg="RemoveContainer for \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\" returns successfully" Mar 17 17:46:57.724304 kubelet[2722]: I0317 17:46:57.724263 2722 scope.go:117] "RemoveContainer" containerID="fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717" Mar 17 17:46:57.725899 containerd[1501]: time="2025-03-17T17:46:57.725397255Z" level=info msg="RemoveContainer for \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\"" Mar 17 17:46:57.729823 containerd[1501]: time="2025-03-17T17:46:57.729756029Z" level=info msg="RemoveContainer for \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\" returns successfully" Mar 17 17:46:57.730070 kubelet[2722]: I0317 17:46:57.730029 2722 scope.go:117] "RemoveContainer" 
containerID="9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af" Mar 17 17:46:57.731349 containerd[1501]: time="2025-03-17T17:46:57.731045969Z" level=info msg="RemoveContainer for \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\"" Mar 17 17:46:57.735341 containerd[1501]: time="2025-03-17T17:46:57.735281460Z" level=info msg="RemoveContainer for \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\" returns successfully" Mar 17 17:46:57.735555 kubelet[2722]: I0317 17:46:57.735512 2722 scope.go:117] "RemoveContainer" containerID="34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4" Mar 17 17:46:57.736580 containerd[1501]: time="2025-03-17T17:46:57.736534149Z" level=info msg="RemoveContainer for \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\"" Mar 17 17:46:57.740519 containerd[1501]: time="2025-03-17T17:46:57.740466817Z" level=info msg="RemoveContainer for \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\" returns successfully" Mar 17 17:46:57.740757 kubelet[2722]: I0317 17:46:57.740730 2722 scope.go:117] "RemoveContainer" containerID="0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac" Mar 17 17:46:57.741034 containerd[1501]: time="2025-03-17T17:46:57.740987071Z" level=error msg="ContainerStatus for \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\": not found" Mar 17 17:46:57.751451 kubelet[2722]: E0317 17:46:57.751390 2722 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\": not found" containerID="0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac" Mar 17 17:46:57.751670 kubelet[2722]: I0317 17:46:57.751457 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac"} err="failed to get container status \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f690c19f8cd5f7d4d15088db1a8cc18e523b917fa1da41d8b74b1ca9ee0b8ac\": not found" Mar 17 17:46:57.751670 kubelet[2722]: I0317 17:46:57.751540 2722 scope.go:117] "RemoveContainer" containerID="d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014" Mar 17 17:46:57.751919 containerd[1501]: time="2025-03-17T17:46:57.751873091Z" level=error msg="ContainerStatus for \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\": not found" Mar 17 17:46:57.752090 kubelet[2722]: E0317 17:46:57.752060 2722 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\": not found" containerID="d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014" Mar 17 17:46:57.752158 kubelet[2722]: I0317 17:46:57.752090 2722 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014"} err="failed to get container status \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\": rpc error: code = NotFound desc = an error occurred when try to find container \"d99234bf15f3c64446eda51e45c7808b942ad1f32b5a362332fb9144efb62014\": not found" Mar 17 17:46:57.752158 kubelet[2722]: I0317 17:46:57.752108 2722 scope.go:117] "RemoveContainer" containerID="fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717" Mar 17 17:46:57.752282 containerd[1501]: time="2025-03-17T17:46:57.752251677Z" level=error msg="ContainerStatus for \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\": not found" Mar 17 17:46:57.752399 kubelet[2722]: E0317 17:46:57.752369 2722 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\": not found" containerID="fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717" Mar 17 17:46:57.752446 kubelet[2722]: I0317 17:46:57.752397 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717"} err="failed to get container status \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb66b40e97f2d0f45018aa5ec4857b26a7d5ea914ff61f79b725332249733717\": not found" Mar 17 17:46:57.752446 kubelet[2722]: I0317 17:46:57.752417 2722 scope.go:117] "RemoveContainer" containerID="9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af" Mar 17 17:46:57.752699 containerd[1501]: time="2025-03-17T17:46:57.752642686Z" level=error msg="ContainerStatus for \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\": not found" Mar 17 17:46:57.752858 kubelet[2722]: E0317 17:46:57.752744 2722 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\": not found" containerID="9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af" Mar 17 17:46:57.752858 kubelet[2722]: I0317 17:46:57.752763 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af"} err="failed to get container status \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\": rpc error: code = NotFound desc = an error occurred when try to find container \"9002c9ac5c4e9bdf1485248dac4b6002c958d20010f63eed3f34d7102e8ad4af\": not found" Mar 17 17:46:57.752858 kubelet[2722]: I0317 17:46:57.752790 2722 scope.go:117] "RemoveContainer" containerID="34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4" Mar 17 17:46:57.753150 kubelet[2722]: E0317 17:46:57.753043 2722 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\": not found" containerID="34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4" Mar 17 17:46:57.753150 kubelet[2722]: I0317 17:46:57.753059 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4"} err="failed to get container status \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\": rpc error: code = NotFound desc = an error occurred when try to find container \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\": not found" Mar 17 17:46:57.753150 kubelet[2722]: I0317 17:46:57.753071 2722 scope.go:117] "RemoveContainer" containerID="b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96" Mar 17 17:46:57.753263 containerd[1501]: time="2025-03-17T17:46:57.752943957Z" level=error msg="ContainerStatus for \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34f5b7c96d217e4ebadf71010a0f6e1aac0ff5d2cc636fce9fabf96d05be0de4\": not found" Mar 17 17:46:57.754088 containerd[1501]: time="2025-03-17T17:46:57.754050119Z" level=info msg="RemoveContainer for \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\"" Mar 17 17:46:57.764695 containerd[1501]: time="2025-03-17T17:46:57.764608128Z" level=info msg="RemoveContainer for \"b0d531001abea5eb970444a540ced2c8db0bfbead144cd450b824358b5af9a96\" returns successfully" Mar 17 17:46:58.513050 sshd[4385]: Connection closed by 10.0.0.1 port 53742 Mar 17 17:46:58.513528 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:58.517515 kubelet[2722]: E0317 17:46:58.517461 2722 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:46:58.522498 systemd[1]: sshd@24-10.0.0.87:22-10.0.0.1:53742.service: Deactivated successfully. Mar 17 17:46:58.525004 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:46:58.527088 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:46:58.533292 systemd[1]: Started sshd@25-10.0.0.87:22-10.0.0.1:53752.service - OpenSSH per-connection server daemon (10.0.0.1:53752). Mar 17 17:46:58.534363 systemd-logind[1479]: Removed session 25. Mar 17 17:46:58.569155 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 53752 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:58.571013 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:58.575461 systemd-logind[1479]: New session 26 of user core. Mar 17 17:46:58.585761 systemd[1]: Started session-26.scope - Session 26 of User core. 
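The exchange above is the kubelet's idempotent container cleanup: it issues RemoveContainer for each of the five cilium containers, then re-queries ContainerStatus, and containerd answers NotFound because the containers are already gone; the pod_container_deletor logs the error but treats the deletion as complete. A minimal Go sketch of that tolerance against a CRI runtime client follows; the helper name and client wiring are illustrative, not kubelet source.

    package cleanup

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeContainerIdempotent deletes a container and then probes its
    // status; by the time the probe runs the runtime may already have
    // forgotten the ID, so a NotFound answer (the "not found" errors in
    // the log above) is treated as a successful removal, not a failure.
    func removeContainerIdempotent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
    	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
    		return err
    	}
    	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
    	if status.Code(err) == codes.NotFound {
    		log.Printf("container %s already gone, treating as removed", id)
    		return nil
    	}
    	return err
    }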
Mar 17 17:46:59.224159 sshd[4547]: Connection closed by 10.0.0.1 port 53752 Mar 17 17:46:59.225717 sshd-session[4545]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:59.238868 kubelet[2722]: I0317 17:46:59.238814 2722 topology_manager.go:215] "Topology Admit Handler" podUID="bca40116-086f-4cb0-8383-928b768712d7" podNamespace="kube-system" podName="cilium-hnhsq" Mar 17 17:46:59.239404 kubelet[2722]: E0317 17:46:59.238881 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" containerName="apply-sysctl-overwrites" Mar 17 17:46:59.239404 kubelet[2722]: E0317 17:46:59.238890 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" containerName="clean-cilium-state" Mar 17 17:46:59.239404 kubelet[2722]: E0317 17:46:59.238897 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" containerName="cilium-agent" Mar 17 17:46:59.239404 kubelet[2722]: E0317 17:46:59.238904 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="653194cf-6ff9-44e6-a56f-8e853b111cf1" containerName="cilium-operator" Mar 17 17:46:59.239404 kubelet[2722]: E0317 17:46:59.238911 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" containerName="mount-cgroup" Mar 17 17:46:59.239404 kubelet[2722]: E0317 17:46:59.238918 2722 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" containerName="mount-bpf-fs" Mar 17 17:46:59.239404 kubelet[2722]: I0317 17:46:59.238972 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="653194cf-6ff9-44e6-a56f-8e853b111cf1" containerName="cilium-operator" Mar 17 17:46:59.239404 kubelet[2722]: I0317 17:46:59.238979 2722 memory_manager.go:354] "RemoveStaleState removing state" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" containerName="cilium-agent" Mar 17 17:46:59.239997 systemd[1]: sshd@25-10.0.0.87:22-10.0.0.1:53752.service: Deactivated successfully. Mar 17 17:46:59.242840 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:46:59.246513 systemd-logind[1479]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:46:59.269108 systemd[1]: Started sshd@26-10.0.0.87:22-10.0.0.1:53762.service - OpenSSH per-connection server daemon (10.0.0.1:53762). Mar 17 17:46:59.270994 systemd-logind[1479]: Removed session 26. Mar 17 17:46:59.274713 systemd[1]: Created slice kubepods-burstable-podbca40116_086f_4cb0_8383_928b768712d7.slice - libcontainer container kubepods-burstable-podbca40116_086f_4cb0_8383_928b768712d7.slice. Mar 17 17:46:59.304815 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 53762 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:59.306323 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:59.312045 systemd-logind[1479]: New session 27 of user core. Mar 17 17:46:59.318798 systemd[1]: Started session-27.scope - Session 27 of User core. 
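The cpu_manager and memory_manager lines above show the kubelet sweeping stale per-container resource state left behind by the two deleted pods (310676e6-... and 653194cf-...) before admitting the new cilium-hnhsq pod. A rough sketch of that reconciliation is below; the map shape and function name are placeholders for illustration, not the kubelet's real state types.

    package resourcestate

    import "fmt"

    // staleState mimics the per-pod, per-container assignments the CPU
    // and memory managers keep; the concrete types are invented here.
    type staleState map[string]map[string]string // podUID -> containerName -> assignment

    // removeStaleState drops entries whose pod is no longer active, the
    // sweep that produces the "RemoveStaleState: removing container"
    // lines above when a new pod is admitted.
    func removeStaleState(s staleState, active map[string]bool) {
    	for podUID, containers := range s {
    		if active[podUID] {
    			continue
    		}
    		for name := range containers {
    			fmt.Printf("RemoveStaleState: removing container podUID=%s containerName=%s\n", podUID, name)
    		}
    		delete(s, podUID)
    	}
    }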
Mar 17 17:46:59.343524 kubelet[2722]: I0317 17:46:59.343444 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bca40116-086f-4cb0-8383-928b768712d7-cilium-ipsec-secrets\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.343708 kubelet[2722]: I0317 17:46:59.343544 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bca40116-086f-4cb0-8383-928b768712d7-hubble-tls\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.343708 kubelet[2722]: I0317 17:46:59.343573 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-hostproc\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.343708 kubelet[2722]: I0317 17:46:59.343592 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5z7t\" (UniqueName: \"kubernetes.io/projected/bca40116-086f-4cb0-8383-928b768712d7-kube-api-access-q5z7t\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.343708 kubelet[2722]: I0317 17:46:59.343614 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bca40116-086f-4cb0-8383-928b768712d7-clustermesh-secrets\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.343708 kubelet[2722]: I0317 17:46:59.343664 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-etc-cni-netd\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.343708 kubelet[2722]: I0317 17:46:59.343683 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-cilium-run\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344029 kubelet[2722]: I0317 17:46:59.343700 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-cni-path\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344029 kubelet[2722]: I0317 17:46:59.343721 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-host-proc-sys-kernel\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344029 kubelet[2722]: I0317 17:46:59.343743 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-lib-modules\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344029 kubelet[2722]: I0317 17:46:59.343765 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-cilium-cgroup\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344029 kubelet[2722]: I0317 17:46:59.343783 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-bpf-maps\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344029 kubelet[2722]: I0317 17:46:59.343803 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-xtables-lock\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344220 kubelet[2722]: I0317 17:46:59.343823 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bca40116-086f-4cb0-8383-928b768712d7-cilium-config-path\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.344220 kubelet[2722]: I0317 17:46:59.343841 2722 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bca40116-086f-4cb0-8383-928b768712d7-host-proc-sys-net\") pod \"cilium-hnhsq\" (UID: \"bca40116-086f-4cb0-8383-928b768712d7\") " pod="kube-system/cilium-hnhsq" Mar 17 17:46:59.369761 sshd[4560]: Connection closed by 10.0.0.1 port 53762 Mar 17 17:46:59.370240 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:59.382842 systemd[1]: sshd@26-10.0.0.87:22-10.0.0.1:53762.service: Deactivated successfully. Mar 17 17:46:59.384933 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:46:59.386559 systemd-logind[1479]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:46:59.392992 systemd[1]: Started sshd@27-10.0.0.87:22-10.0.0.1:53770.service - OpenSSH per-connection server daemon (10.0.0.1:53770). Mar 17 17:46:59.394015 systemd-logind[1479]: Removed session 27. Mar 17 17:46:59.428611 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 53770 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:46:59.430074 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:59.434238 systemd-logind[1479]: New session 28 of user core. Mar 17 17:46:59.445671 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 17 17:46:59.468511 kubelet[2722]: I0317 17:46:59.468457 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310676e6-6288-4c89-86b8-0ade01ffbc34" path="/var/lib/kubelet/pods/310676e6-6288-4c89-86b8-0ade01ffbc34/volumes" Mar 17 17:46:59.469459 kubelet[2722]: I0317 17:46:59.469429 2722 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653194cf-6ff9-44e6-a56f-8e853b111cf1" path="/var/lib/kubelet/pods/653194cf-6ff9-44e6-a56f-8e853b111cf1/volumes" Mar 17 17:46:59.577762 kubelet[2722]: E0317 17:46:59.577593 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:59.579133 containerd[1501]: time="2025-03-17T17:46:59.578848389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hnhsq,Uid:bca40116-086f-4cb0-8383-928b768712d7,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:59.602392 containerd[1501]: time="2025-03-17T17:46:59.602204701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:59.602392 containerd[1501]: time="2025-03-17T17:46:59.602298939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:59.603207 containerd[1501]: time="2025-03-17T17:46:59.602319759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:59.603207 containerd[1501]: time="2025-03-17T17:46:59.603129360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:59.629761 systemd[1]: Started cri-containerd-cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815.scope - libcontainer container cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815. 
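The recurring "Nameserver limits exceeded" warning fires because the node's resolv.conf lists more nameservers than the resolver limit of three that the kubelet applies; it keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. A sketch of that trimming follows, assuming a naive resolv.conf parser rather than the kubelet's actual dns package.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // the classic resolver honors at most three

    // trimNameservers keeps only the first three nameserver entries,
    // mirroring the kubelet warning above that "some nameservers have
    // been omitted".
    func trimNameservers(path string) ([]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()

    	var kept, omitted []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) < 2 || fields[0] != "nameserver" {
    			continue
    		}
    		if len(kept) < maxNameservers {
    			kept = append(kept, fields[1])
    		} else {
    			omitted = append(omitted, fields[1])
    		}
    	}
    	if len(omitted) > 0 {
    		fmt.Printf("Nameserver limits exceeded, omitted: %v\n", omitted)
    	}
    	return kept, sc.Err()
    }

    func main() {
    	applied, err := trimNameservers("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("the applied nameserver line is:", strings.Join(applied, " "))
    }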
Mar 17 17:46:59.654737 containerd[1501]: time="2025-03-17T17:46:59.654673248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hnhsq,Uid:bca40116-086f-4cb0-8383-928b768712d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\"" Mar 17 17:46:59.655915 kubelet[2722]: E0317 17:46:59.655862 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:59.657993 containerd[1501]: time="2025-03-17T17:46:59.657946697Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:46:59.675546 containerd[1501]: time="2025-03-17T17:46:59.675487073Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd\"" Mar 17 17:46:59.677186 containerd[1501]: time="2025-03-17T17:46:59.676097527Z" level=info msg="StartContainer for \"5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd\"" Mar 17 17:46:59.705793 systemd[1]: Started cri-containerd-5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd.scope - libcontainer container 5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd. Mar 17 17:46:59.735733 containerd[1501]: time="2025-03-17T17:46:59.735675317Z" level=info msg="StartContainer for \"5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd\" returns successfully" Mar 17 17:46:59.746004 systemd[1]: cri-containerd-5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd.scope: Deactivated successfully. Mar 17 17:46:59.783659 containerd[1501]: time="2025-03-17T17:46:59.783561068Z" level=info msg="shim disconnected" id=5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd namespace=k8s.io Mar 17 17:46:59.783659 containerd[1501]: time="2025-03-17T17:46:59.783644035Z" level=warning msg="cleaning up after shim disconnected" id=5f4b2f7e41d201768a1d70d01a0042ccb4ef839683eb260de2962b7c80e980fd namespace=k8s.io Mar 17 17:46:59.783659 containerd[1501]: time="2025-03-17T17:46:59.783654085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:47:00.450512 systemd[1]: run-containerd-runc-k8s.io-cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815-runc.LVPVcp.mount: Deactivated successfully. 
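These entries trace one full CRI round trip: RunPodSandbox returns sandbox id cd43bbdf..., CreateContainer places the mount-cgroup init container inside it, StartContainer runs it, and because the init step exits immediately, the scope deactivates and the shim reports "disconnected" while systemd cleans up the rootfs mounts. A sketch of that call sequence against the CRI runtime service, trimmed to the fields visible in the log (a real request also needs image and command fields):

    package crirun

    import (
    	"context"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // runInitStep drives the RunPodSandbox / CreateContainer /
    // StartContainer sequence logged above for the mount-cgroup init
    // container of cilium-hnhsq.
    func runInitStep(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
    	sandbox, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "cilium-hnhsq",
    				Uid:       "bca40116-086f-4cb0-8383-928b768712d7",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sandbox.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	// StartContainer returns once the process is running; an init
    	// step that exits at once triggers the shim teardown in the log.
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
    	return err
    }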
Mar 17 17:47:00.720458 kubelet[2722]: E0317 17:47:00.720330 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:00.723391 containerd[1501]: time="2025-03-17T17:47:00.723357728Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:47:00.738428 containerd[1501]: time="2025-03-17T17:47:00.738372935Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e\"" Mar 17 17:47:00.739666 containerd[1501]: time="2025-03-17T17:47:00.738984230Z" level=info msg="StartContainer for \"03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e\"" Mar 17 17:47:00.769807 systemd[1]: Started cri-containerd-03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e.scope - libcontainer container 03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e. Mar 17 17:47:00.799777 containerd[1501]: time="2025-03-17T17:47:00.799719341Z" level=info msg="StartContainer for \"03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e\" returns successfully" Mar 17 17:47:00.806287 systemd[1]: cri-containerd-03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e.scope: Deactivated successfully. Mar 17 17:47:00.829558 containerd[1501]: time="2025-03-17T17:47:00.829488160Z" level=info msg="shim disconnected" id=03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e namespace=k8s.io Mar 17 17:47:00.829558 containerd[1501]: time="2025-03-17T17:47:00.829554866Z" level=warning msg="cleaning up after shim disconnected" id=03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e namespace=k8s.io Mar 17 17:47:00.829860 containerd[1501]: time="2025-03-17T17:47:00.829567630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:47:01.450858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03800cdba246c1ba172c5b65213442818b5fbaafc490617176a332fac212ab4e-rootfs.mount: Deactivated successfully. Mar 17 17:47:01.724424 kubelet[2722]: E0317 17:47:01.724287 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:01.726520 containerd[1501]: time="2025-03-17T17:47:01.726480703Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:47:01.742581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687436793.mount: Deactivated successfully. 
Mar 17 17:47:01.743178 containerd[1501]: time="2025-03-17T17:47:01.742877437Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7\"" Mar 17 17:47:01.743461 containerd[1501]: time="2025-03-17T17:47:01.743432526Z" level=info msg="StartContainer for \"d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7\"" Mar 17 17:47:01.778870 systemd[1]: Started cri-containerd-d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7.scope - libcontainer container d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7. Mar 17 17:47:01.794662 update_engine[1481]: I20250317 17:47:01.794405 1481 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 17 17:47:01.794662 update_engine[1481]: I20250317 17:47:01.794466 1481 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 17 17:47:01.795124 update_engine[1481]: I20250317 17:47:01.794736 1481 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 17 17:47:01.795410 update_engine[1481]: I20250317 17:47:01.795369 1481 omaha_request_params.cc:62] Current group set to stable Mar 17 17:47:01.797000 update_engine[1481]: I20250317 17:47:01.796964 1481 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 17 17:47:01.797000 update_engine[1481]: I20250317 17:47:01.796986 1481 update_attempter.cc:643] Scheduling an action processor start. Mar 17 17:47:01.797077 update_engine[1481]: I20250317 17:47:01.797003 1481 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 17:47:01.797077 update_engine[1481]: I20250317 17:47:01.797044 1481 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 17 17:47:01.797159 update_engine[1481]: I20250317 17:47:01.797132 1481 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 17:47:01.797159 update_engine[1481]: I20250317 17:47:01.797147 1481 omaha_request_action.cc:272] Request: <?xml version="1.0" encoding="UTF-8"?> Mar 17 17:47:01.797159 update_engine[1481]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Mar 17 17:47:01.797159 update_engine[1481]: <os version="Chateau" platform="CoreOS" sp="4152.2.2_x86_64"></os> Mar 17 17:47:01.797159 update_engine[1481]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="4152.2.2" track="stable" bootid="{937fa5c9-0479-4e89-bf9a-66ff21596a20}" oem="" oemversion="" alephversion="4152.2.2" machineid="eb220a6cbfe14cebb0e267e2b2b17254" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Mar 17 17:47:01.797159 update_engine[1481]: <ping active="1"></ping> Mar 17 17:47:01.797159 update_engine[1481]: <updatecheck></updatecheck> Mar 17 17:47:01.797159 update_engine[1481]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Mar 17 17:47:01.797159 update_engine[1481]: </app> Mar 17 17:47:01.797159 update_engine[1481]: </request> Mar 17 17:47:01.797159 update_engine[1481]: I20250317 17:47:01.797154 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:47:01.797710 locksmithd[1524]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 17 17:47:01.801183 update_engine[1481]: 
I20250317 17:47:01.801146 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:47:01.801540 update_engine[1481]: I20250317 17:47:01.801494 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:47:01.813818 containerd[1501]: time="2025-03-17T17:47:01.813762426Z" level=info msg="StartContainer for \"d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7\" returns successfully" Mar 17 17:47:01.815551 systemd[1]: cri-containerd-d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7.scope: Deactivated successfully. Mar 17 17:47:01.819368 update_engine[1481]: E20250317 17:47:01.819315 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:47:01.819449 update_engine[1481]: I20250317 17:47:01.819419 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 17:47:01.842343 containerd[1501]: time="2025-03-17T17:47:01.842265093Z" level=info msg="shim disconnected" id=d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7 namespace=k8s.io Mar 17 17:47:01.842343 containerd[1501]: time="2025-03-17T17:47:01.842321761Z" level=warning msg="cleaning up after shim disconnected" id=d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7 namespace=k8s.io Mar 17 17:47:01.842343 containerd[1501]: time="2025-03-17T17:47:01.842331138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:47:02.450919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02e6a821b8ea20e6432b5b319e584c896a29872a0760cbe5b7e48254deec3e7-rootfs.mount: Deactivated successfully. Mar 17 17:47:02.728111 kubelet[2722]: E0317 17:47:02.727987 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:02.730302 containerd[1501]: time="2025-03-17T17:47:02.730256044Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:47:02.754342 containerd[1501]: time="2025-03-17T17:47:02.754288064Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9\"" Mar 17 17:47:02.754875 containerd[1501]: time="2025-03-17T17:47:02.754841229Z" level=info msg="StartContainer for \"ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9\"" Mar 17 17:47:02.787751 systemd[1]: Started cri-containerd-ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9.scope - libcontainer container ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9. Mar 17 17:47:02.811614 systemd[1]: cri-containerd-ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9.scope: Deactivated successfully. 
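The update_engine block above prints the full Omaha v3 request it is about to post; because this machine's update server is configured as the literal string "disabled", the fetch fails with "Could not resolve host: disabled" and the engine schedules a retry. update_engine itself is C++, so the following Go reconstruction of the request document with encoding/xml is only a sketch, modeling just the attributes visible in the log:

    package main

    import (
    	"encoding/xml"
    	"fmt"
    )

    // omahaRequest models the <request> document printed by
    // update_engine above; fields not shown in the log are left out.
    type omahaRequest struct {
    	XMLName       xml.Name `xml:"request"`
    	Protocol      string   `xml:"protocol,attr"`
    	Version       string   `xml:"version,attr"`
    	InstallSource string   `xml:"installsource,attr"`
    	OS            struct {
    		Version  string `xml:"version,attr"`
    		Platform string `xml:"platform,attr"`
    		SP       string `xml:"sp,attr"`
    	} `xml:"os"`
    	App struct {
    		AppID   string `xml:"appid,attr"`
    		Version string `xml:"version,attr"`
    		Track   string `xml:"track,attr"`
    		Board   string `xml:"board,attr"`
    		Ping    struct {
    			Active string `xml:"active,attr"`
    		} `xml:"ping"`
    		UpdateCheck struct{} `xml:"updatecheck"`
    	} `xml:"app"`
    }

    func main() {
    	req := omahaRequest{Protocol: "3.0", Version: "update_engine-0.4.10", InstallSource: "scheduler"}
    	req.OS.Version = "Chateau"
    	req.OS.Platform = "CoreOS"
    	req.OS.SP = "4152.2.2_x86_64"
    	req.App.AppID = "{e96281a6-d1af-4bde-9a0a-97b76e56dc57}"
    	req.App.Version = "4152.2.2"
    	req.App.Track = "stable"
    	req.App.Board = "amd64-usr"
    	req.App.Ping.Active = "1"
    	out, _ := xml.MarshalIndent(req, "", "  ")
    	fmt.Println(string(out))
    }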
Mar 17 17:47:02.813424 containerd[1501]: time="2025-03-17T17:47:02.813391198Z" level=info msg="StartContainer for \"ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9\" returns successfully" Mar 17 17:47:02.835515 containerd[1501]: time="2025-03-17T17:47:02.835441753Z" level=info msg="shim disconnected" id=ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9 namespace=k8s.io Mar 17 17:47:02.835515 containerd[1501]: time="2025-03-17T17:47:02.835505303Z" level=warning msg="cleaning up after shim disconnected" id=ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9 namespace=k8s.io Mar 17 17:47:02.835515 containerd[1501]: time="2025-03-17T17:47:02.835514861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:47:03.451434 systemd[1]: run-containerd-runc-k8s.io-ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9-runc.79gp24.mount: Deactivated successfully. Mar 17 17:47:03.451545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebce0b096ab2198b0c0caafcc3920eb52384f4702fcc605d120c67f4e4b533e9-rootfs.mount: Deactivated successfully. Mar 17 17:47:03.519000 kubelet[2722]: E0317 17:47:03.518960 2722 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:47:03.750119 kubelet[2722]: E0317 17:47:03.749983 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:03.751894 containerd[1501]: time="2025-03-17T17:47:03.751844400Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:47:03.807011 containerd[1501]: time="2025-03-17T17:47:03.806946167Z" level=info msg="CreateContainer within sandbox \"cd43bbdf5e5f7ae870a0a9de797c61bb718e00b3f140d754dc5a1ac632ed5815\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d13b033b945fa6ccbddc455dcfefd858ed904fa4e30f86527b9e103067212de\"" Mar 17 17:47:03.807610 containerd[1501]: time="2025-03-17T17:47:03.807571949Z" level=info msg="StartContainer for \"6d13b033b945fa6ccbddc455dcfefd858ed904fa4e30f86527b9e103067212de\"" Mar 17 17:47:03.837788 systemd[1]: Started cri-containerd-6d13b033b945fa6ccbddc455dcfefd858ed904fa4e30f86527b9e103067212de.scope - libcontainer container 6d13b033b945fa6ccbddc455dcfefd858ed904fa4e30f86527b9e103067212de. 
Mar 17 17:47:03.875479 containerd[1501]: time="2025-03-17T17:47:03.875417199Z" level=info msg="StartContainer for \"6d13b033b945fa6ccbddc455dcfefd858ed904fa4e30f86527b9e103067212de\" returns successfully" Mar 17 17:47:04.324654 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 17:47:04.754291 kubelet[2722]: E0317 17:47:04.754248 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:04.770051 kubelet[2722]: I0317 17:47:04.769972 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hnhsq" podStartSLOduration=5.769951991 podStartE2EDuration="5.769951991s" podCreationTimestamp="2025-03-17 17:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:47:04.76989896 +0000 UTC m=+91.427167533" watchObservedRunningTime="2025-03-17 17:47:04.769951991 +0000 UTC m=+91.427220544" Mar 17 17:47:05.756095 kubelet[2722]: E0317 17:47:05.756055 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:06.366394 kubelet[2722]: I0317 17:47:06.366116 2722 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:47:06Z","lastTransitionTime":"2025-03-17T17:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:47:06.758703 kubelet[2722]: E0317 17:47:06.758655 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:07.532888 systemd-networkd[1409]: lxc_health: Link UP Mar 17 17:47:07.547833 systemd-networkd[1409]: lxc_health: Gained carrier Mar 17 17:47:07.760696 kubelet[2722]: E0317 17:47:07.760091 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:07.942487 systemd[1]: run-containerd-runc-k8s.io-6d13b033b945fa6ccbddc455dcfefd858ed904fa4e30f86527b9e103067212de-runc.XJmaNs.mount: Deactivated successfully. Mar 17 17:47:08.762058 kubelet[2722]: E0317 17:47:08.762011 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:09.374848 systemd-networkd[1409]: lxc_health: Gained IPv6LL Mar 17 17:47:09.763755 kubelet[2722]: E0317 17:47:09.763712 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:11.802111 update_engine[1481]: I20250317 17:47:11.802024 1481 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:47:11.802695 update_engine[1481]: I20250317 17:47:11.802445 1481 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:47:11.802771 update_engine[1481]: I20250317 17:47:11.802734 1481 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
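After each "No HTTP response" the engine waits and resumes the transfer; the first attempt at 17:47:01 and the second at 17:47:11 put the retry cadence near ten seconds. A sketch of such a fixed-delay retry around an HTTP POST follows; the interval and attempt cap are read off the timestamps above, not update_engine constants.

    package main

    import (
    	"bytes"
    	"fmt"
    	"net/http"
    	"time"
    )

    // postWithRetry posts a request body and retries on any transport
    // error, roughly matching the gap between "retry 1" and "retry 2"
    // in the log. maxRetries and delay are assumptions.
    func postWithRetry(url string, body []byte, maxRetries int, delay time.Duration) (*http.Response, error) {
    	var lastErr error
    	for attempt := 1; attempt <= maxRetries; attempt++ {
    		resp, err := http.Post(url, "text/xml", bytes.NewReader(body))
    		if err == nil {
    			return resp, nil
    		}
    		lastErr = err
    		fmt.Printf("No HTTP response, retry %d\n", attempt)
    		time.Sleep(delay)
    	}
    	return nil, lastErr
    }

    func main() {
    	// The server in this log is literally "disabled", so every
    	// attempt fails with a resolve error, as update_engine reports.
    	if _, err := postWithRetry("https://disabled/update", nil, 2, 10*time.Second); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }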
Mar 17 17:47:11.875923 update_engine[1481]: E20250317 17:47:11.875865 1481 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:47:11.875985 update_engine[1481]: I20250317 17:47:11.875962 1481 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 17:47:14.340205 sshd[4572]: Connection closed by 10.0.0.1 port 53770 Mar 17 17:47:14.340789 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Mar 17 17:47:14.344950 systemd[1]: sshd@27-10.0.0.87:22-10.0.0.1:53770.service: Deactivated successfully. Mar 17 17:47:14.347278 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:47:14.348138 systemd-logind[1479]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:47:14.349277 systemd-logind[1479]: Removed session 28.