Jan 30 13:14:38.903434 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:14:38.903466 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:14:38.903477 kernel: BIOS-provided physical RAM map:
Jan 30 13:14:38.903483 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:14:38.903490 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:14:38.903496 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:14:38.903504 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:14:38.903510 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:14:38.903517 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 30 13:14:38.903523 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 30 13:14:38.903530 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 30 13:14:38.903538 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 30 13:14:38.903545 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 30 13:14:38.903551 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 30 13:14:38.903559 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 30 13:14:38.903567 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:14:38.903576 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 30 13:14:38.903583 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 30 13:14:38.903590 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 30 13:14:38.903596 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 30 13:14:38.903603 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 30 13:14:38.903610 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:14:38.903617 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:14:38.903624 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:14:38.903631 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 30 13:14:38.903638 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:14:38.903645 kernel: NX (Execute Disable) protection: active
Jan 30 13:14:38.903654 kernel: APIC: Static calls initialized
Jan 30 13:14:38.903661 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 30 13:14:38.903668 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 30 13:14:38.903675 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 30 13:14:38.903682 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 30 13:14:38.903688 kernel: extended physical RAM map:
Jan 30 13:14:38.903695 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:14:38.903702 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:14:38.903709 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:14:38.903716 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:14:38.903723 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:14:38.903730 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 30 13:14:38.903740 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 30 13:14:38.903750 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 30 13:14:38.903757 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 30 13:14:38.903764 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 30 13:14:38.903772 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 30 13:14:38.903779 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 30 13:14:38.903788 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 30 13:14:38.903795 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 30 13:14:38.903803 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 30 13:14:38.903810 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 30 13:14:38.903817 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:14:38.903825 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 30 13:14:38.903832 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 30 13:14:38.903839 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 30 13:14:38.903847 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 30 13:14:38.903856 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 30 13:14:38.903863 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:14:38.903870 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:14:38.903878 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:14:38.903885 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 30 13:14:38.903892 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:14:38.903899 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:14:38.903907 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 30 13:14:38.903914 kernel: random: crng init done
Jan 30 13:14:38.903922 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 30 13:14:38.903929 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 30 13:14:38.903936 kernel: secureboot: Secure boot disabled
Jan 30 13:14:38.903945 kernel: SMBIOS 2.8 present.
Jan 30 13:14:38.903953 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 30 13:14:38.903960 kernel: Hypervisor detected: KVM
Jan 30 13:14:38.903967 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:14:38.903975 kernel: kvm-clock: using sched offset of 2723426550 cycles
Jan 30 13:14:38.903983 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:14:38.903990 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 13:14:38.903998 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:14:38.904006 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:14:38.904013 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 30 13:14:38.904023 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:14:38.904031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:14:38.904038 kernel: Using GB pages for direct mapping
Jan 30 13:14:38.904045 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:14:38.904053 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:14:38.904061 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:14:38.904069 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:14:38.904076 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:14:38.904083 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:14:38.904093 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:14:38.904101 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:14:38.904108 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:14:38.904116 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:14:38.904123 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:14:38.904131 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:14:38.904138 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:14:38.904153 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:14:38.904166 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:14:38.904174 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:14:38.904182 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:14:38.904189 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:14:38.904197 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:14:38.904204 kernel: No NUMA configuration found
Jan 30 13:14:38.904211 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 30 13:14:38.904219 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 30 13:14:38.904227 kernel: Zone ranges:
Jan 30 13:14:38.904234 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:14:38.904244 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 30 13:14:38.904251 kernel: Normal empty
Jan 30 13:14:38.904258 kernel: Movable zone start for each node
Jan 30 13:14:38.904266 kernel: Early memory node ranges
Jan 30 13:14:38.904273 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:14:38.904281 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:14:38.904288 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:14:38.904295 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 30 13:14:38.904303 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 30 13:14:38.904312 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 30 13:14:38.904320 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 30 13:14:38.904327 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 30 13:14:38.904334 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 30 13:14:38.904342 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:14:38.904350 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:14:38.904367 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:14:38.904379 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:14:38.904386 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 30 13:14:38.904394 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 30 13:14:38.904402 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 13:14:38.904410 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 30 13:14:38.904420 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 30 13:14:38.904427 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:14:38.904435 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:14:38.904443 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:14:38.904450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:14:38.904471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:14:38.904479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:14:38.904487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:14:38.904495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:14:38.904502 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:14:38.904510 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:14:38.904518 kernel: TSC deadline timer available
Jan 30 13:14:38.904525 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:14:38.904533 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:14:38.904543 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:14:38.904550 kernel: kvm-guest: setup PV sched yield
Jan 30 13:14:38.904558 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 30 13:14:38.904566 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:14:38.904574 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:14:38.904582 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:14:38.904589 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:14:38.904597 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:14:38.904605 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:14:38.904612 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:14:38.904623 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:14:38.904631 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:14:38.904640 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:14:38.904647 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:14:38.904655 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:14:38.904663 kernel: Fallback order for Node 0: 0
Jan 30 13:14:38.904671 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 30 13:14:38.904678 kernel: Policy zone: DMA32
Jan 30 13:14:38.904689 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:14:38.904697 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved)
Jan 30 13:14:38.904704 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:14:38.904712 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:14:38.904720 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:14:38.904728 kernel: Dynamic Preempt: voluntary
Jan 30 13:14:38.904735 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:14:38.904743 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:14:38.904752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:14:38.904761 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:14:38.904769 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:14:38.904777 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:14:38.904785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:14:38.904793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:14:38.904800 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:14:38.904808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:14:38.904816 kernel: Console: colour dummy device 80x25
Jan 30 13:14:38.904823 kernel: printk: console [ttyS0] enabled
Jan 30 13:14:38.904833 kernel: ACPI: Core revision 20230628
Jan 30 13:14:38.904841 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:14:38.904849 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:14:38.904857 kernel: x2apic enabled
Jan 30 13:14:38.904864 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:14:38.904872 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:14:38.904880 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:14:38.904888 kernel: kvm-guest: setup PV IPIs
Jan 30 13:14:38.904895 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:14:38.904906 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:14:38.904914 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 13:14:38.904921 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:14:38.904929 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:14:38.904937 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:14:38.904945 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:14:38.904953 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:14:38.904960 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:14:38.904968 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:14:38.904979 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:14:38.904986 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:14:38.904994 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:14:38.905002 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:14:38.905010 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:14:38.905018 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:14:38.905026 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:14:38.905034 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:14:38.905044 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:14:38.905051 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:14:38.905059 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:14:38.905067 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:14:38.905075 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:14:38.905083 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:14:38.905091 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:14:38.905098 kernel: landlock: Up and running.
Jan 30 13:14:38.905106 kernel: SELinux: Initializing.
Jan 30 13:14:38.905116 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:14:38.905124 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:14:38.905132 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:14:38.905139 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:14:38.905155 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:14:38.905165 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:14:38.905174 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:14:38.905181 kernel: ... version: 0
Jan 30 13:14:38.905189 kernel: ... bit width: 48
Jan 30 13:14:38.905199 kernel: ... generic registers: 6
Jan 30 13:14:38.905207 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:14:38.905215 kernel: ... max period: 00007fffffffffff
Jan 30 13:14:38.905222 kernel: ... fixed-purpose events: 0
Jan 30 13:14:38.905230 kernel: ... event mask: 000000000000003f
Jan 30 13:14:38.905238 kernel: signal: max sigframe size: 1776
Jan 30 13:14:38.905246 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:14:38.905253 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:14:38.905261 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:14:38.905271 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:14:38.905279 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:14:38.905286 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:14:38.905294 kernel: smpboot: Max logical packages: 1
Jan 30 13:14:38.905302 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 13:14:38.905310 kernel: devtmpfs: initialized
Jan 30 13:14:38.905317 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:14:38.905325 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:14:38.905333 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:14:38.905343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 30 13:14:38.905351 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:14:38.905359 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 30 13:14:38.905367 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:14:38.905374 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:14:38.905382 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:14:38.905390 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:14:38.905398 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:14:38.905405 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:14:38.905416 kernel: audit: type=2000 audit(1738242878.680:1): state=initialized audit_enabled=0 res=1
Jan 30 13:14:38.905425 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:14:38.905434 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:14:38.905442 kernel: cpuidle: using governor menu
Jan 30 13:14:38.905450 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:14:38.905495 kernel: dca service started, version 1.12.1
Jan 30 13:14:38.905503 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 13:14:38.905511 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:14:38.905518 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:14:38.905529 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:14:38.905537 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:14:38.905545 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:14:38.905553 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:14:38.905560 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:14:38.905568 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:14:38.905576 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:14:38.905584 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:14:38.905592 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:14:38.905602 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:14:38.905610 kernel: ACPI: Interpreter enabled
Jan 30 13:14:38.905618 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:14:38.905625 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:14:38.905633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:14:38.905641 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:14:38.905649 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:14:38.905657 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:14:38.905836 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:14:38.905970 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:14:38.906092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:14:38.906102 kernel: PCI host bridge to bus 0000:00
Jan 30 13:14:38.906242 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:14:38.906356 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:14:38.906481 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:14:38.906602 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 30 13:14:38.906712 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 30 13:14:38.906822 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 30 13:14:38.906932 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:14:38.907072 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:14:38.907214 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:14:38.907341 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:14:38.907481 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:14:38.907605 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:14:38.907723 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:14:38.907843 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:14:38.907972 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:14:38.908107 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:14:38.908246 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:14:38.908369 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 30 13:14:38.908529 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:14:38.908653 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:14:38.908808 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:14:38.909013 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 30 13:14:38.909162 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:14:38.909294 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:14:38.909415 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:14:38.909552 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 30 13:14:38.909675 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:14:38.909805 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:14:38.909926 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:14:38.910055 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:14:38.910194 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:14:38.910316 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:14:38.910467 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:14:38.910592 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:14:38.910603 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:14:38.910612 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:14:38.910619 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:14:38.910631 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:14:38.910638 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:14:38.910646 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:14:38.910654 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:14:38.910662 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:14:38.910669 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:14:38.910677 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:14:38.910685 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:14:38.910693 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:14:38.910703 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:14:38.910710 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:14:38.910718 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:14:38.910725 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:14:38.910733 kernel: iommu: Default domain type: Translated
Jan 30 13:14:38.910741 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:14:38.910749 kernel: efivars: Registered efivars operations
Jan 30 13:14:38.910756 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:14:38.910764 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:14:38.910775 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:14:38.910782 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 30 13:14:38.910790 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 30 13:14:38.910797 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 30 13:14:38.910805 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 30 13:14:38.910813 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 30 13:14:38.910820 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 30 13:14:38.910828 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 30 13:14:38.910948 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:14:38.911071 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:14:38.911207 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:14:38.911223 kernel: vgaarb: loaded
Jan 30 13:14:38.911231 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:14:38.911246 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:14:38.911260 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:14:38.911277 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:14:38.911292 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:14:38.911321 kernel: pnp: PnP ACPI init
Jan 30 13:14:38.911576 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 30 13:14:38.911589 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:14:38.911598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:14:38.911606 kernel: NET: Registered PF_INET protocol family
Jan 30 13:14:38.911633 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:14:38.911644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:14:38.911652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:14:38.911662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:14:38.911670 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:14:38.911679 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:14:38.911687 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:14:38.911705 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:14:38.911713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:14:38.911721 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:14:38.911852 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:14:38.911976 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:14:38.912094 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:14:38.912221 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:14:38.912334 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:14:38.912446 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 30 13:14:38.912577 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 30 13:14:38.912689 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 30 13:14:38.912700 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:14:38.912708 kernel: Initialise system trusted keyrings
Jan 30 13:14:38.912720 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:14:38.912729 kernel: Key type asymmetric registered
Jan 30 13:14:38.912746 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:14:38.912755 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:14:38.912763 kernel: io scheduler mq-deadline registered
Jan 30 13:14:38.912771 kernel: io scheduler kyber registered
Jan 30 13:14:38.912779 kernel: io scheduler bfq registered
Jan 30 13:14:38.912787 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:14:38.912796 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:14:38.912808 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:14:38.912818 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:14:38.912826 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:14:38.912834 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:14:38.912842 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:14:38.912851 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:14:38.912861 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:14:38.912993 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:14:38.913005 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:14:38.913118 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:14:38.913248 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:14:38 UTC (1738242878)
Jan 30 13:14:38.913365 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 13:14:38.913376 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:14:38.913388 kernel: efifb: probing for efifb
Jan 30 13:14:38.913400 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 30 13:14:38.913408 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 30 13:14:38.913416 kernel: efifb: scrolling: redraw
Jan 30 13:14:38.913424 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:14:38.913432 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 13:14:38.913440 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:14:38.913448 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:14:38.913469 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:14:38.913477 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:14:38.913487 kernel: Segment Routing with IPv6
Jan 30 13:14:38.913496 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:14:38.913504 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:14:38.913512 kernel: Key type dns_resolver registered
Jan 30 13:14:38.913519 kernel: IPI shorthand broadcast: enabled
Jan 30 13:14:38.913528 kernel: sched_clock: Marking stable (594003431, 169175186)->(837546709, -74368092)
Jan 30 13:14:38.913536 kernel: registered taskstats version 1
Jan 30 13:14:38.913543 kernel: Loading compiled-in X.509 certificates
Jan 30 13:14:38.913552 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 13:14:38.913562 kernel: Key type .fscrypt registered
Jan 30 13:14:38.913570 kernel: Key type fscrypt-provisioning registered
Jan 30 13:14:38.913578 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:14:38.913586 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:14:38.913594 kernel: ima: No architecture policies found
Jan 30 13:14:38.913602 kernel: clk: Disabling unused clocks
Jan 30 13:14:38.913610 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 13:14:38.913618 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 13:14:38.913628 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 13:14:38.913636 kernel: Run /init as init process
Jan 30 13:14:38.913644 kernel: with arguments:
Jan 30 13:14:38.913652 kernel: /init
Jan 30 13:14:38.913660 kernel: with environment:
Jan 30 13:14:38.913668 kernel: HOME=/
Jan 30 13:14:38.913676 kernel: TERM=linux
Jan 30 13:14:38.913684 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:14:38.913694 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:14:38.913707 systemd[1]: Detected virtualization kvm.
Jan 30 13:14:38.913716 systemd[1]: Detected architecture x86-64.
Jan 30 13:14:38.913724 systemd[1]: Running in initrd.
Jan 30 13:14:38.913732 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:14:38.913741 systemd[1]: Hostname set to <localhost>.
Jan 30 13:14:38.913750 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:14:38.913758 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:14:38.913767 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:14:38.913779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:14:38.913788 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:14:38.913797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:14:38.913805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:14:38.913814 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:14:38.913825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:14:38.913836 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:14:38.913844 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:14:38.913853 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:14:38.913862 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:14:38.913870 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:14:38.913879 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:14:38.913888 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:14:38.913896 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:14:38.913905 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:14:38.913916 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:14:38.913925 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:14:38.913933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:14:38.913942 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:14:38.913951 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:14:38.913959 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:14:38.913968 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:14:38.913976 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:14:38.913987 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:14:38.913996 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:14:38.914004 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:14:38.914013 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:14:38.914022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:14:38.914030 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:14:38.914039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:14:38.914048 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:14:38.914076 systemd-journald[194]: Collecting audit messages is disabled.
Jan 30 13:14:38.914098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:14:38.914107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:14:38.914116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:14:38.914125 systemd-journald[194]: Journal started
Jan 30 13:14:38.914153 systemd-journald[194]: Runtime Journal (/run/log/journal/12918e2f2adf4615a6c7040109eafb85) is 6.0M, max 48.2M, 42.2M free.
Jan 30 13:14:38.912767 systemd-modules-load[195]: Inserted module 'overlay'
Jan 30 13:14:38.918511 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:14:38.921242 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:14:38.928284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:14:38.932056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:14:38.938712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:14:38.941136 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:14:38.944538 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:14:38.946342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:14:38.951299 kernel: Bridge firewalling registered
Jan 30 13:14:38.951285 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 30 13:14:38.967772 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:14:38.970728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:14:38.973628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:14:38.980486 dracut-cmdline[222]: dracut-dracut-053
Jan 30 13:14:38.983543 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:14:38.986553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:14:38.997605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:14:39.029158 systemd-resolved[249]: Positive Trust Anchors:
Jan 30 13:14:39.029174 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:14:39.029204 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:14:39.040159 systemd-resolved[249]: Defaulting to hostname 'linux'.
Jan 30 13:14:39.042247 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:14:39.044435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:14:39.070494 kernel: SCSI subsystem initialized
Jan 30 13:14:39.079610 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:14:39.090489 kernel: iscsi: registered transport (tcp)
Jan 30 13:14:39.111774 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:14:39.111857 kernel: QLogic iSCSI HBA Driver
Jan 30 13:14:39.160738 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:14:39.176667 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:14:39.201326 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:14:39.201366 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:14:39.201385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:14:39.243497 kernel: raid6: avx2x4 gen() 30021 MB/s
Jan 30 13:14:39.260484 kernel: raid6: avx2x2 gen() 30459 MB/s
Jan 30 13:14:39.277620 kernel: raid6: avx2x1 gen() 25281 MB/s
Jan 30 13:14:39.277661 kernel: raid6: using algorithm avx2x2 gen() 30459 MB/s
Jan 30 13:14:39.295583 kernel: raid6: .... xor() 19439 MB/s, rmw enabled
Jan 30 13:14:39.295616 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:14:39.316482 kernel: xor: automatically using best checksumming function avx
Jan 30 13:14:39.466512 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:14:39.479051 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:14:39.490692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:14:39.502621 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 30 13:14:39.507247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:14:39.511112 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:14:39.533391 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Jan 30 13:14:39.567628 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:14:39.578672 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:14:39.644284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:14:39.653655 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:14:39.670730 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:14:39.673160 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:14:39.674509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:14:39.675879 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:14:39.685656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:14:39.690405 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:14:39.718265 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:14:39.718415 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:14:39.718428 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:14:39.718438 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:14:39.718450 kernel: GPT:9289727 != 19775487
Jan 30 13:14:39.719337 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:14:39.719359 kernel: GPT:9289727 != 19775487
Jan 30 13:14:39.719370 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:14:39.719380 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:14:39.719391 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:14:39.719402 kernel: libata version 3.00 loaded.
Jan 30 13:14:39.695880 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:14:39.695941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:14:39.700959 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:14:39.702137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:14:39.702195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:14:39.703438 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:14:39.714979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:14:39.723066 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:14:39.736441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:14:39.739303 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:14:39.760432 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:14:39.760449 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:14:39.760620 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:14:39.760766 kernel: scsi host0: ahci
Jan 30 13:14:39.760916 kernel: scsi host1: ahci
Jan 30 13:14:39.761061 kernel: scsi host2: ahci
Jan 30 13:14:39.761235 kernel: scsi host3: ahci
Jan 30 13:14:39.761394 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472)
Jan 30 13:14:39.761406 kernel: scsi host4: ahci
Jan 30 13:14:39.761568 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (469)
Jan 30 13:14:39.761580 kernel: scsi host5: ahci
Jan 30 13:14:39.761735 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 30 13:14:39.761750 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 30 13:14:39.761763 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 30 13:14:39.761776 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 30 13:14:39.761789 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 30 13:14:39.761807 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 30 13:14:39.759247 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:14:39.773965 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:14:39.778994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:14:39.782888 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:14:39.782949 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:14:39.805566 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:14:39.805637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:14:39.805688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:14:39.808882 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:14:39.811441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:14:39.826626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:14:39.829053 disk-uuid[560]: Primary Header is updated.
Jan 30 13:14:39.829053 disk-uuid[560]: Secondary Entries is updated.
Jan 30 13:14:39.829053 disk-uuid[560]: Secondary Header is updated.
Jan 30 13:14:39.832173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:14:39.835475 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:14:39.837658 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:14:39.858332 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:14:40.067506 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:14:40.067586 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:14:40.075489 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:14:40.075573 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:14:40.076487 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:14:40.077488 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:14:40.077515 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:14:40.078481 kernel: ata3.00: applying bridge limits
Jan 30 13:14:40.078493 kernel: ata3.00: configured for UDMA/100
Jan 30 13:14:40.079482 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:14:40.156775 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:14:40.169298 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:14:40.169317 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:14:40.882159 disk-uuid[563]: The operation has completed successfully.
Jan 30 13:14:40.883422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:14:40.909111 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:14:40.909242 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:14:40.933638 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:14:40.936920 sh[600]: Success
Jan 30 13:14:40.949487 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:14:40.982052 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:14:41.012928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:14:41.015739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:14:41.026889 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58
Jan 30 13:14:41.026935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:14:41.026947 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:14:41.027962 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:14:41.028698 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:14:41.033756 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:14:41.036075 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:14:41.043586 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:14:41.045683 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:14:41.054224 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:14:41.054249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:14:41.054260 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:14:41.057670 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:14:41.066113 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:14:41.067884 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:14:41.154895 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:14:41.173628 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:14:41.197511 systemd-networkd[778]: lo: Link UP Jan 30 13:14:41.197519 systemd-networkd[778]: lo: Gained carrier Jan 30 13:14:41.199040 systemd-networkd[778]: Enumeration completed Jan 30 13:14:41.199149 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:14:41.199423 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:14:41.199428 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:14:41.211963 systemd-networkd[778]: eth0: Link UP Jan 30 13:14:41.211967 systemd-networkd[778]: eth0: Gained carrier Jan 30 13:14:41.211974 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:14:41.213312 systemd[1]: Reached target network.target - Network. Jan 30 13:14:41.228511 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:14:41.335763 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:14:41.351621 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:14:41.400559 ignition[783]: Ignition 2.20.0 Jan 30 13:14:41.400574 ignition[783]: Stage: fetch-offline Jan 30 13:14:41.400620 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:14:41.400631 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:14:41.400738 ignition[783]: parsed url from cmdline: "" Jan 30 13:14:41.400742 ignition[783]: no config URL provided Jan 30 13:14:41.400748 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:14:41.400756 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:14:41.400787 ignition[783]: op(1): [started] loading QEMU firmware config module Jan 30 13:14:41.400792 ignition[783]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:14:41.408902 ignition[783]: op(1): [finished] loading QEMU firmware config module Jan 30 13:14:41.447199 ignition[783]: parsing config with SHA512: b22f8655905c1833fefd25f98bf387c89e84c201e2df03e022a399bbe8463ce5b1dd5957591aaa41e85061160fe7758ad1c6143f426a9f6111345fc5a827cddd Jan 30 13:14:41.451227 unknown[783]: fetched base config from "system" Jan 30 13:14:41.451246 unknown[783]: fetched user config from "qemu" Jan 30 13:14:41.451972 ignition[783]: fetch-offline: fetch-offline passed Jan 30 13:14:41.452218 ignition[783]: Ignition finished successfully Jan 30 13:14:41.454767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:14:41.456941 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:14:41.464633 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:14:41.478975 ignition[794]: Ignition 2.20.0 Jan 30 13:14:41.478985 ignition[794]: Stage: kargs Jan 30 13:14:41.479182 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:14:41.479196 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:14:41.482987 ignition[794]: kargs: kargs passed Jan 30 13:14:41.483036 ignition[794]: Ignition finished successfully Jan 30 13:14:41.487170 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
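On QEMU there is no metadata service, so Ignition's fetch-offline stage loads the qemu_fw_cfg module (op(1) above) and reads the user config from a firmware config key instead of the network. A sketch of both sides, assuming the standard key name:

  # host: inject a config into the guest
  qemu-system-x86_64 ... -fw_cfg name=opt/com.coreos/config,file=config.ign
  # guest: what Ignition effectively reads after the modprobe
  modprobe qemu_fw_cfg
  cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw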
Jan 30 13:14:41.502603 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:14:41.515099 ignition[803]: Ignition 2.20.0 Jan 30 13:14:41.515111 ignition[803]: Stage: disks Jan 30 13:14:41.515268 ignition[803]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:14:41.515280 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:14:41.518514 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:14:41.516052 ignition[803]: disks: disks passed Jan 30 13:14:41.520006 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:14:41.516105 ignition[803]: Ignition finished successfully Jan 30 13:14:41.521905 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:14:41.523774 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:14:41.525861 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:14:41.526944 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:14:41.541685 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:14:41.557110 systemd-fsck[813]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:14:41.564711 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:14:41.567364 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:14:41.680487 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:14:41.681433 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:14:41.682997 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:14:41.690556 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:14:41.692276 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:14:41.693754 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:14:41.693799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:14:41.702204 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (821) Jan 30 13:14:41.693827 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:14:41.707520 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:14:41.707537 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:14:41.707548 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:14:41.707558 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:14:41.699881 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:14:41.702940 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:14:41.711365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
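This block is the initrd assembling the real root: an fsck pass over the ROOT ext4 filesystem, /sysroot mounted on it, /sysroot/usr served read-only from the verity device, and the btrfs OEM partition (vda6) mounted at /sysroot/oem. The hand-run equivalent would be roughly (a sketch; generated mount units do the real work):

  fsck -a /dev/disk/by-label/ROOT
  mount /dev/disk/by-label/ROOT /sysroot
  mount -o ro /dev/mapper/usr /sysroot/usr
  mount /dev/disk/by-label/OEM /sysroot/oem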
Jan 30 13:14:41.740211 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:14:41.744152 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:14:41.748242 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:14:41.752063 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:14:41.840154 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:14:41.849665 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:14:41.851444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:14:41.858502 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:14:41.876220 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:14:41.878926 ignition[935]: INFO : Ignition 2.20.0 Jan 30 13:14:41.878926 ignition[935]: INFO : Stage: mount Jan 30 13:14:41.880664 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:14:41.880664 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:14:41.880664 ignition[935]: INFO : mount: mount passed Jan 30 13:14:41.880664 ignition[935]: INFO : Ignition finished successfully Jan 30 13:14:41.887073 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:14:41.895563 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:14:42.026409 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:14:42.039642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:14:42.047276 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (948) Jan 30 13:14:42.047305 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:14:42.047318 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:14:42.048774 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:14:42.051478 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:14:42.052728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
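The four "cut: /sysroot/etc/...: No such file or directory" lines are benign on a first boot: initrd-setup-root probes the target /etc for existing passwd, group, shadow and gshadow entries before seeding them, and on a fresh image those files do not exist yet. The probe amounts to something like this (the exact fields are an assumption, not taken from the real script):

  cut -d: -f1 /sysroot/etc/passwd   # fails harmlessly while /etc is still empty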
Jan 30 13:14:42.071898 ignition[965]: INFO : Ignition 2.20.0 Jan 30 13:14:42.071898 ignition[965]: INFO : Stage: files Jan 30 13:14:42.073639 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:14:42.073639 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:14:42.076477 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:14:42.077869 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:14:42.077869 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:14:42.081365 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:14:42.082837 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:14:42.084568 unknown[965]: wrote ssh authorized keys file for user: core Jan 30 13:14:42.085662 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:14:42.087961 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:14:42.089857 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:14:42.125813 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:14:42.203571 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:14:42.203571 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:14:42.207517 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:14:42.544413 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:14:42.659211 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:14:42.661653 ignition[965]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:14:42.661653 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:14:42.961122 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:14:43.244605 systemd-networkd[778]: eth0: Gained IPv6LL Jan 30 13:14:43.464742 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:14:43.464742 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 13:14:43.469630 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:14:43.491231 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:14:43.498294 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:14:43.499902 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:14:43.499902 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:14:43.499902 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:14:43.499902 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:14:43.499902 
ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:14:43.499902 ignition[965]: INFO : files: files passed Jan 30 13:14:43.499902 ignition[965]: INFO : Ignition finished successfully Jan 30 13:14:43.509662 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:14:43.518605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:14:43.520442 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:14:43.522292 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:14:43.522397 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:14:43.529599 initrd-setup-root-after-ignition[994]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:14:43.532239 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:14:43.532239 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:14:43.535589 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:14:43.538304 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:14:43.538572 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:14:43.558651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:14:43.582806 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:14:43.582928 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:14:43.585576 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:14:43.586843 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:14:43.589990 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:14:43.591174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:14:43.621003 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:14:43.628625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:14:43.637717 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:14:43.639033 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:14:43.641717 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:14:43.644150 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:14:43.644259 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:14:43.646879 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:14:43.648943 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:14:43.651103 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:14:43.653233 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:14:43.655361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:14:43.657706 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
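Every files-stage operation logged above (the fetched tarballs, written units, preset flips, and the sysext symlink) was declared in the Ignition config fetched earlier. A minimal sketch of a config that would produce operations of this shape, assuming Ignition spec 3.x and abbreviating contents:

  {
    "ignition": { "version": "3.3.0" },
    "storage": {
      "files": [
        { "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" } }
      ],
      "links": [
        { "path": "/etc/extensions/kubernetes.raw",
          "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" }
      ]
    },
    "systemd": {
      "units": [
        { "name": "prepare-helm.service", "enabled": true },
        { "name": "coreos-metadata.service", "enabled": false }
      ]
    }
  }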
Jan 30 13:14:43.659898 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:14:43.662296 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:14:43.664513 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:14:43.666906 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:14:43.668796 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:14:43.668933 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:14:43.671506 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:14:43.673063 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:14:43.675292 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:14:43.675411 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:14:43.677618 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:14:43.677765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:14:43.680222 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:14:43.680348 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:14:43.682251 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:14:43.684171 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:14:43.689530 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:14:43.691219 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:14:43.693062 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:14:43.695107 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:14:43.695210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:14:43.697612 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:14:43.697728 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:14:43.699603 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:14:43.699729 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:14:43.701785 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:14:43.701884 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:14:43.717678 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:14:43.718735 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:14:43.718859 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:14:43.722328 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:14:43.724083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:14:43.754093 ignition[1021]: INFO : Ignition 2.20.0 Jan 30 13:14:43.754093 ignition[1021]: INFO : Stage: umount Jan 30 13:14:43.754093 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:14:43.754093 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:14:43.754093 ignition[1021]: INFO : umount: umount passed Jan 30 13:14:43.754093 ignition[1021]: INFO : Ignition finished successfully Jan 30 13:14:43.724242 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:14:43.726397 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:14:43.726518 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:14:43.756119 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:14:43.756235 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:14:43.759968 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:14:43.760080 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:14:43.762053 systemd[1]: Stopped target network.target - Network. Jan 30 13:14:43.763519 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:14:43.763586 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:14:43.765893 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:14:43.765941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:14:43.767031 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:14:43.767076 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:14:43.769113 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:14:43.769157 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:14:43.771429 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:14:43.773489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:14:43.777578 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 30 13:14:43.779200 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:14:43.779339 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:14:43.782077 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:14:43.782213 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:14:43.785198 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:14:43.785258 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:14:43.798589 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:14:43.800266 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:14:43.800323 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:14:43.835601 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:14:43.835663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:14:43.838404 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:14:43.838474 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:14:43.839908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:14:43.839957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:14:43.842761 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:14:43.870426 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:14:43.875715 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:14:43.875880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:14:43.878715 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 30 13:14:43.878769 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:14:43.880281 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:14:43.880326 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:14:43.882867 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:14:43.882921 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:14:43.884146 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:14:43.884194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:14:43.886554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:14:43.886613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:14:43.897634 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:14:43.898819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:14:43.898886 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:14:43.901187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:14:43.901246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:14:43.904003 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:14:43.904159 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:14:43.905933 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:14:43.906044 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:14:44.520934 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:14:44.521117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:14:44.534021 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:14:44.535795 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:14:44.535848 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:14:44.544595 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:14:44.552862 systemd[1]: Switching root. Jan 30 13:14:44.588044 systemd-journald[194]: Journal stopped Jan 30 13:14:46.333002 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 30 13:14:46.333084 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:14:46.333106 kernel: SELinux: policy capability open_perms=1 Jan 30 13:14:46.333122 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:14:46.333142 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:14:46.333157 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:14:46.333173 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:14:46.333195 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:14:46.333210 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:14:46.333226 kernel: audit: type=1403 audit(1738242885.573:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:14:46.333246 systemd[1]: Successfully loaded SELinux policy in 39.038ms. Jan 30 13:14:46.333270 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.745ms. 
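"Switching root" and "Journal stopped" mark the pivot out of the initramfs: PID 1 serializes its state, the initrd journal is closed (and flushed into /var/log/journal later), and systemd re-executes itself from the real root. The switch is roughly what initrd-switch-root.service runs:

  systemctl --no-block switch-root /sysroot

The SELinux policy lines that follow come from the re-executed systemd loading policy in the real root, after which the journal picks up again under the new PID 1.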
Jan 30 13:14:46.333294 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:14:46.333311 systemd[1]: Detected virtualization kvm. Jan 30 13:14:46.333327 systemd[1]: Detected architecture x86-64. Jan 30 13:14:46.333343 systemd[1]: Detected first boot. Jan 30 13:14:46.333359 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:14:46.333376 zram_generator::config[1065]: No configuration found. Jan 30 13:14:46.333393 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:14:46.333415 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:14:46.333433 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:14:46.333449 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:14:46.333483 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:14:46.333501 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:14:46.333517 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:14:46.333533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:14:46.333550 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:14:46.333570 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:14:46.333589 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:14:46.333611 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:14:46.333628 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:14:46.333644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:14:46.333660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:14:46.333677 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:14:46.333693 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:14:46.333710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:14:46.333730 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:14:46.333746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:14:46.333763 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:14:46.333779 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:14:46.333795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:14:46.333811 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:14:46.333828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:14:46.333844 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:14:46.333863 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:14:46.333880 systemd[1]: Reached target swap.target - Swaps. 
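"Initializing machine ID from VM UUID" means /etc/machine-id was derived from the DMI product UUID that KVM exposes, rather than freshly generated. Observable from userspace (illustrative commands):

  systemd-detect-virt                   # prints "kvm" for this guest
  cat /sys/class/dmi/id/product_uuid    # the UUID the machine ID is derived from
  cat /etc/machine-id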
Jan 30 13:14:46.333896 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:14:46.333912 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:14:46.333928 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:14:46.333955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:14:46.333971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:14:46.333988 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:14:46.334003 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:14:46.334028 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:14:46.334045 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:14:46.334061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:46.334077 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:14:46.334093 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:14:46.334109 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:14:46.334126 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:14:46.334142 systemd[1]: Reached target machines.target - Containers. Jan 30 13:14:46.334162 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:14:46.334177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:14:46.334193 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:14:46.334209 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:14:46.334225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:14:46.334242 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:14:46.334258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:14:46.334275 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:14:46.334291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:14:46.334314 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:14:46.334332 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:14:46.334348 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:14:46.334364 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:14:46.334380 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:14:46.334397 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:14:46.334412 kernel: loop: module loaded Jan 30 13:14:46.334428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:14:46.334444 kernel: fuse: init (API version 7.39) Jan 30 13:14:46.334500 systemd-journald[1128]: Collecting audit messages is disabled. 
Jan 30 13:14:46.334534 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:14:46.334553 systemd-journald[1128]: Journal started Jan 30 13:14:46.334584 systemd-journald[1128]: Runtime Journal (/run/log/journal/12918e2f2adf4615a6c7040109eafb85) is 6.0M, max 48.2M, 42.2M free. Jan 30 13:14:46.090594 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:14:46.109579 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:14:46.110017 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:14:46.339857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:14:46.343563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:14:46.345868 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:14:46.345902 systemd[1]: Stopped verity-setup.service. Jan 30 13:14:46.366766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:46.370316 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:14:46.372178 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:14:46.373493 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:14:46.374752 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:14:46.375871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:14:46.377317 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:14:46.378918 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:14:46.380551 kernel: ACPI: bus type drm_connector registered Jan 30 13:14:46.380902 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:14:46.382707 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:14:46.382880 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:14:46.384482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:14:46.384687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:14:46.386122 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:14:46.386328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:14:46.387755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:14:46.387948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:14:46.389728 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:14:46.389918 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:14:46.391292 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:14:46.391576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:14:46.392937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:14:46.394427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:14:46.396017 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:14:46.410813 systemd[1]: Reached target network-pre.target - Preparation for Network. 
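The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are instances of the modprobe@.service template, which lets kernel module loading be ordered and tracked like any other unit. Each instance reduces to approximately:

  systemctl start modprobe@loop.service   # the "loop" instance
  # whose ExecStart is roughly: modprobe -abq loop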
Jan 30 13:14:46.417567 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:14:46.420177 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:14:46.421569 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:14:46.421617 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:14:46.424195 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:14:46.426865 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:14:46.429606 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:14:46.431061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:14:46.433603 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:14:46.436509 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:14:46.438016 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:14:46.441589 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:14:46.442814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:14:46.445067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:14:46.456124 systemd-journald[1128]: Time spent on flushing to /var/log/journal/12918e2f2adf4615a6c7040109eafb85 is 27.088ms for 1044 entries. Jan 30 13:14:46.456124 systemd-journald[1128]: System Journal (/var/log/journal/12918e2f2adf4615a6c7040109eafb85) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:14:46.829323 systemd-journald[1128]: Received client request to flush runtime journal. Jan 30 13:14:46.829377 kernel: loop0: detected capacity change from 0 to 138184 Jan 30 13:14:46.829393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:14:46.829406 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 13:14:46.829419 kernel: loop2: detected capacity change from 0 to 141000 Jan 30 13:14:46.829431 kernel: loop3: detected capacity change from 0 to 138184 Jan 30 13:14:46.829444 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 13:14:46.829489 kernel: loop5: detected capacity change from 0 to 141000 Jan 30 13:14:46.448595 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:14:46.451351 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:14:46.455865 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:14:46.458924 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:14:46.478689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:14:46.491597 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:14:46.500655 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:14:46.509535 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 30 13:14:46.758284 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:14:46.758884 (sd-merge)[1184]: Merged extensions into '/usr'. Jan 30 13:14:46.763167 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:14:46.763177 systemd[1]: Reloading... Jan 30 13:14:46.852936 zram_generator::config[1225]: No configuration found. Jan 30 13:14:46.931631 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:14:46.972984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:14:47.023258 systemd[1]: Reloading finished in 259 ms. Jan 30 13:14:47.055541 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:14:47.057198 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:14:47.058793 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:14:47.060434 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:14:47.061956 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:14:47.069378 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:14:47.080672 systemd[1]: Starting ensure-sysext.service... Jan 30 13:14:47.083150 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:14:47.087394 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:14:47.091310 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:14:47.091326 systemd[1]: Reloading... Jan 30 13:14:47.146476 zram_generator::config[1290]: No configuration found. Jan 30 13:14:47.623939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:14:47.673892 systemd[1]: Reloading finished in 582 ms. Jan 30 13:14:47.700055 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:14:47.709387 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:14:47.712157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:14:47.715170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:47.715344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:14:47.716547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:14:47.723523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:14:47.729036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:14:47.731127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
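The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why systemd immediately reloads: the merged tree brings new unit files with it. The merge can be inspected or redone by hand:

  systemd-sysext status    # which extensions are merged into /usr
  systemd-sysext refresh   # re-merge images from /etc/extensions, /var/lib/extensions, ...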
Jan 30 13:14:47.731349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:47.732388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:14:47.732582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:14:47.734786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:14:47.734970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:14:47.736890 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:14:47.737114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:14:47.741999 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jan 30 13:14:47.742016 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jan 30 13:14:47.743339 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:47.743616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:14:47.748732 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:14:47.749045 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:14:47.749782 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:14:47.751098 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:14:47.751370 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 30 13:14:47.751448 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 30 13:14:47.752224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:14:47.754395 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:14:47.755695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:14:47.755929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:47.756821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:14:47.758643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:14:47.758824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:14:47.760032 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:14:47.760046 systemd-tmpfiles[1328]: Skipping /boot Jan 30 13:14:47.760679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:14:47.760873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:14:47.762666 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:14:47.762827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:14:47.769621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:47.769858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
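The "Duplicate line for path" messages mean two tmpfiles.d fragments declare the same path ("/root", "/var/log/journal", "/var/lib/systemd"); systemd-tmpfiles keeps the first definition it parses and ignores the rest, so these are warnings, not failures. The colliding fragments can be located with (illustrative):

  grep -rn '/var/log/journal' /usr/lib/tmpfiles.d/ /etc/tmpfiles.d/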
Jan 30 13:14:47.776362 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:14:47.776378 systemd-tmpfiles[1328]: Skipping /boot Jan 30 13:14:47.782707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:14:47.785449 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:14:47.788022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:14:47.792484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:14:47.800447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:14:47.800766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:14:47.801917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:14:47.802347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:14:47.804196 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:14:47.804444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:14:47.809665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:14:47.812443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:14:47.813420 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:14:47.815160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:14:47.815371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:14:47.817135 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:14:47.817342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:14:47.821169 systemd[1]: Finished ensure-sysext.service. Jan 30 13:14:47.833665 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:14:47.836321 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:14:47.840122 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:14:47.841388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:14:47.841492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:14:47.843985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:14:47.854661 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:14:47.865646 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:14:47.867656 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:14:47.874434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:14:47.877802 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:14:47.879432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:14:47.888658 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 30 13:14:47.890253 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:14:47.896249 augenrules[1385]: No rules Jan 30 13:14:47.897417 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:14:47.897720 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:14:47.909662 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:14:47.914015 systemd-udevd[1378]: Using default interface naming scheme 'v255'. Jan 30 13:14:47.918510 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:14:47.920473 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:14:47.929408 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:14:47.948682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:14:47.957785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:14:47.999426 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:14:48.045487 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1411) Jan 30 13:14:48.095433 systemd-networkd[1405]: lo: Link UP Jan 30 13:14:48.095442 systemd-networkd[1405]: lo: Gained carrier Jan 30 13:14:48.097064 systemd-networkd[1405]: Enumeration completed Jan 30 13:14:48.097562 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:14:48.100618 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:14:48.100671 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:14:48.101469 systemd-networkd[1405]: eth0: Link UP Jan 30 13:14:48.101542 systemd-networkd[1405]: eth0: Gained carrier Jan 30 13:14:48.101607 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:14:48.165733 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:14:48.171566 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:14:48.183985 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:14:48.188405 systemd-resolved[1359]: Positive Trust Anchors: Jan 30 13:14:48.197648 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:14:48.188439 systemd-resolved[1359]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:14:48.188516 systemd-resolved[1359]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:14:48.194662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:14:48.196155 systemd-resolved[1359]: Defaulting to hostname 'linux'. Jan 30 13:14:48.197746 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:14:49.081697 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:14:49.079607 systemd-timesyncd[1365]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:14:49.086718 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:14:49.086928 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:14:49.087085 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:14:49.087356 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:14:49.079667 systemd-timesyncd[1365]: Initial clock synchronization to Thu 2025-01-30 13:14:49.079471 UTC. Jan 30 13:14:49.082000 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:14:49.086533 systemd-resolved[1359]: Clock change detected. Flushing caches. Jan 30 13:14:49.089674 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:14:49.096044 systemd[1]: Reached target network.target - Network. Jan 30 13:14:49.097545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:14:49.099042 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:14:49.114871 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:14:49.113721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:14:49.115519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:14:49.197283 kernel: kvm_amd: TSC scaling supported Jan 30 13:14:49.197340 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:14:49.197385 kernel: kvm_amd: Nested Paging enabled Jan 30 13:14:49.198184 kernel: kvm_amd: LBR virtualization supported Jan 30 13:14:49.198215 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:14:49.199126 kernel: kvm_amd: Virtual GIF supported Jan 30 13:14:49.219699 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:14:49.234511 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:14:49.256869 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:14:49.269894 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:14:49.279941 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:14:49.316313 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 30 13:14:49.317940 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:14:49.319078 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:14:49.320309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:14:49.321578 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:14:49.323046 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:14:49.324267 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:14:49.325530 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:14:49.326773 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:14:49.326804 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:14:49.327744 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:14:49.329467 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:14:49.332245 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:14:49.346009 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:14:49.348449 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:14:49.350308 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:14:49.351648 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:14:49.352845 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:14:49.353971 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:14:49.353994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:14:49.372782 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:14:49.374929 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:14:49.376813 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:14:49.377356 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:14:49.379792 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:14:49.380870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:14:49.382545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:14:49.386739 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:14:49.392848 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:14:49.396136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:14:49.396465 jq[1452]: false Jan 30 13:14:49.401904 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:14:49.403374 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 30 13:14:49.403730 extend-filesystems[1453]: Found loop3 Jan 30 13:14:49.404819 extend-filesystems[1453]: Found loop4 Jan 30 13:14:49.404819 extend-filesystems[1453]: Found loop5 Jan 30 13:14:49.404819 extend-filesystems[1453]: Found sr0 Jan 30 13:14:49.404819 extend-filesystems[1453]: Found vda Jan 30 13:14:49.404819 extend-filesystems[1453]: Found vda1 Jan 30 13:14:49.403866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:14:49.409404 extend-filesystems[1453]: Found vda2 Jan 30 13:14:49.409404 extend-filesystems[1453]: Found vda3 Jan 30 13:14:49.409404 extend-filesystems[1453]: Found usr Jan 30 13:14:49.409404 extend-filesystems[1453]: Found vda4 Jan 30 13:14:49.409404 extend-filesystems[1453]: Found vda6 Jan 30 13:14:49.409404 extend-filesystems[1453]: Found vda7 Jan 30 13:14:49.409404 extend-filesystems[1453]: Found vda9 Jan 30 13:14:49.409404 extend-filesystems[1453]: Checking size of /dev/vda9 Jan 30 13:14:49.406452 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:14:49.417777 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:14:49.420305 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:14:49.424165 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:14:49.424405 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:14:49.425279 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:14:49.425481 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:14:49.425793 jq[1466]: true Jan 30 13:14:49.430096 dbus-daemon[1451]: [system] SELinux support is enabled Jan 30 13:14:49.430600 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:14:49.441730 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1407) Jan 30 13:14:49.444405 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:14:49.449150 extend-filesystems[1453]: Resized partition /dev/vda9 Jan 30 13:14:49.457032 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:14:49.454247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:14:49.461292 jq[1475]: true Jan 30 13:14:49.475821 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:14:49.454272 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:14:49.475928 update_engine[1462]: I20250130 13:14:49.472645 1462 main.cc:92] Flatcar Update Engine starting Jan 30 13:14:49.475928 update_engine[1462]: I20250130 13:14:49.474484 1462 update_check_scheduler.cc:74] Next update check in 5m47s Jan 30 13:14:49.457845 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 30 13:14:49.482916 tar[1473]: linux-amd64/LICENSE Jan 30 13:14:49.482916 tar[1473]: linux-amd64/helm Jan 30 13:14:49.457862 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:14:49.469244 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:14:49.469497 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:14:49.484327 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:14:49.509055 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:14:49.523746 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:14:49.539772 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 13:14:49.539811 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:14:49.540377 systemd-logind[1459]: New seat seat0. Jan 30 13:14:49.545087 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:14:49.550701 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:14:49.574052 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:14:49.582964 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:14:49.582964 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:14:49.582964 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:14:49.587868 extend-filesystems[1453]: Resized filesystem in /dev/vda9 Jan 30 13:14:49.586704 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:14:49.588848 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:14:49.589042 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:14:49.592410 bash[1506]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:14:49.603167 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:14:49.604844 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:14:49.608754 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:14:49.609559 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:14:49.609820 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:14:49.614398 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:14:49.666447 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:14:49.676025 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:14:49.680922 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:14:49.683884 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:14:49.811013 containerd[1476]: time="2025-01-30T13:14:49.810928028Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:14:49.833528 containerd[1476]: time="2025-01-30T13:14:49.833390343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.835495 containerd[1476]: time="2025-01-30T13:14:49.835438454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:14:49.835495 containerd[1476]: time="2025-01-30T13:14:49.835470945Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:14:49.835495 containerd[1476]: time="2025-01-30T13:14:49.835496743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:14:49.835760 containerd[1476]: time="2025-01-30T13:14:49.835729990Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:14:49.835825 containerd[1476]: time="2025-01-30T13:14:49.835758464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.835866 containerd[1476]: time="2025-01-30T13:14:49.835846028Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:14:49.835887 containerd[1476]: time="2025-01-30T13:14:49.835866777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836135 containerd[1476]: time="2025-01-30T13:14:49.836108460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836157 containerd[1476]: time="2025-01-30T13:14:49.836132575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836157 containerd[1476]: time="2025-01-30T13:14:49.836150569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836199 containerd[1476]: time="2025-01-30T13:14:49.836164515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836299 containerd[1476]: time="2025-01-30T13:14:49.836276916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836584 containerd[1476]: time="2025-01-30T13:14:49.836560919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836759 containerd[1476]: time="2025-01-30T13:14:49.836734885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:14:49.836785 containerd[1476]: time="2025-01-30T13:14:49.836756936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:14:49.836906 containerd[1476]: time="2025-01-30T13:14:49.836883253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 13:14:49.836973 containerd[1476]: time="2025-01-30T13:14:49.836957983Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:14:49.843393 containerd[1476]: time="2025-01-30T13:14:49.843362113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:14:49.843431 containerd[1476]: time="2025-01-30T13:14:49.843404884Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:14:49.843481 containerd[1476]: time="2025-01-30T13:14:49.843445380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:14:49.843517 containerd[1476]: time="2025-01-30T13:14:49.843491947Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:14:49.843537 containerd[1476]: time="2025-01-30T13:14:49.843514149Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:14:49.843714 containerd[1476]: time="2025-01-30T13:14:49.843687604Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:14:49.844006 containerd[1476]: time="2025-01-30T13:14:49.843969703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:14:49.844154 containerd[1476]: time="2025-01-30T13:14:49.844125705Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:14:49.844154 containerd[1476]: time="2025-01-30T13:14:49.844148628Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:14:49.844237 containerd[1476]: time="2025-01-30T13:14:49.844167834Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:14:49.844237 containerd[1476]: time="2025-01-30T13:14:49.844186379Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844237 containerd[1476]: time="2025-01-30T13:14:49.844204493Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844237 containerd[1476]: time="2025-01-30T13:14:49.844221184Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844341 containerd[1476]: time="2025-01-30T13:14:49.844238327Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844341 containerd[1476]: time="2025-01-30T13:14:49.844256891Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844341 containerd[1476]: time="2025-01-30T13:14:49.844274414Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844341 containerd[1476]: time="2025-01-30T13:14:49.844291035Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:14:49.844341 containerd[1476]: time="2025-01-30T13:14:49.844305813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 30 13:14:49.844341 containerd[1476]: time="2025-01-30T13:14:49.844330399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844348694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844365215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844381866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844398287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844414858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844431820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844449302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844470783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844490 containerd[1476]: time="2025-01-30T13:14:49.844491021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844506139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844522730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844538349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844556493Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844581630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844599183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844614452Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844690314Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844712065Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844725420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844742332Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:14:49.844755 containerd[1476]: time="2025-01-30T13:14:49.844754465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.845049 containerd[1476]: time="2025-01-30T13:14:49.844769423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:14:49.845049 containerd[1476]: time="2025-01-30T13:14:49.844782427Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:14:49.845049 containerd[1476]: time="2025-01-30T13:14:49.844802785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:14:49.845291 containerd[1476]: time="2025-01-30T13:14:49.845158502Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:14:49.845291 containerd[1476]: time="2025-01-30T13:14:49.845220048Z" level=info msg="Connect containerd service" Jan 30 13:14:49.845291 containerd[1476]: time="2025-01-30T13:14:49.845262327Z" level=info msg="using legacy CRI server" Jan 30 13:14:49.845291 containerd[1476]: time="2025-01-30T13:14:49.845271494Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:14:49.845649 containerd[1476]: time="2025-01-30T13:14:49.845395767Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:14:49.846087 containerd[1476]: time="2025-01-30T13:14:49.846051457Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:14:49.846322 containerd[1476]: time="2025-01-30T13:14:49.846244138Z" level=info msg="Start subscribing containerd event" Jan 30 13:14:49.847217 containerd[1476]: time="2025-01-30T13:14:49.847173491Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:14:49.847273 containerd[1476]: time="2025-01-30T13:14:49.847243422Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:14:49.856012 containerd[1476]: time="2025-01-30T13:14:49.855973504Z" level=info msg="Start recovering state" Jan 30 13:14:49.856127 containerd[1476]: time="2025-01-30T13:14:49.856105532Z" level=info msg="Start event monitor" Jan 30 13:14:49.856160 containerd[1476]: time="2025-01-30T13:14:49.856143393Z" level=info msg="Start snapshots syncer" Jan 30 13:14:49.856196 containerd[1476]: time="2025-01-30T13:14:49.856157720Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:14:49.856196 containerd[1476]: time="2025-01-30T13:14:49.856172968Z" level=info msg="Start streaming server" Jan 30 13:14:49.856369 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:14:49.858114 containerd[1476]: time="2025-01-30T13:14:49.858077881Z" level=info msg="containerd successfully booted in 0.048731s" Jan 30 13:14:49.996290 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:14:50.002965 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:58378.service - OpenSSH per-connection server daemon (10.0.0.1:58378). Jan 30 13:14:50.063573 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 58378 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:14:50.067160 sshd-session[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:50.077406 systemd-logind[1459]: New session 1 of user core. Jan 30 13:14:50.078917 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:14:50.124096 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:14:50.143674 tar[1473]: linux-amd64/README.md Jan 30 13:14:50.160415 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:14:50.165482 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:14:50.167180 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 30 13:14:50.173772 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:14:50.300551 systemd[1546]: Queued start job for default target default.target. Jan 30 13:14:50.313205 systemd[1546]: Created slice app.slice - User Application Slice. Jan 30 13:14:50.313234 systemd[1546]: Reached target paths.target - Paths. Jan 30 13:14:50.313249 systemd[1546]: Reached target timers.target - Timers. Jan 30 13:14:50.314926 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:14:50.327811 systemd-networkd[1405]: eth0: Gained IPv6LL Jan 30 13:14:50.329157 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:14:50.329350 systemd[1546]: Reached target sockets.target - Sockets. Jan 30 13:14:50.329371 systemd[1546]: Reached target basic.target - Basic System. Jan 30 13:14:50.329407 systemd[1546]: Reached target default.target - Main User Target. Jan 30 13:14:50.329441 systemd[1546]: Startup finished in 148ms. Jan 30 13:14:50.330238 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:14:50.332185 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:14:50.349956 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:14:50.351221 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:14:50.353812 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:14:50.356392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:14:50.358593 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:14:50.381203 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:14:50.381457 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:14:50.383208 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:14:50.385568 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:14:50.436983 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:45396.service - OpenSSH per-connection server daemon (10.0.0.1:45396). Jan 30 13:14:50.478540 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:14:50.480464 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:50.484649 systemd-logind[1459]: New session 2 of user core. Jan 30 13:14:50.495784 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:14:50.562590 sshd[1577]: Connection closed by 10.0.0.1 port 45396 Jan 30 13:14:50.562944 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:50.580503 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:45396.service: Deactivated successfully. Jan 30 13:14:50.582145 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:14:50.583376 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:14:50.584541 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:45402.service - OpenSSH per-connection server daemon (10.0.0.1:45402). Jan 30 13:14:50.586603 systemd-logind[1459]: Removed session 2. 
Jan 30 13:14:50.623908 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 45402 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:14:50.625443 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:50.629516 systemd-logind[1459]: New session 3 of user core. Jan 30 13:14:50.643817 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:14:50.701836 sshd[1584]: Connection closed by 10.0.0.1 port 45402 Jan 30 13:14:50.702228 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:50.705424 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:45402.service: Deactivated successfully. Jan 30 13:14:50.707437 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:14:50.708932 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:14:50.709868 systemd-logind[1459]: Removed session 3. Jan 30 13:14:51.549359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:14:51.551146 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:14:51.552512 systemd[1]: Startup finished in 739ms (kernel) + 6.858s (initrd) + 5.142s (userspace) = 12.739s. Jan 30 13:14:51.557910 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:14:51.570265 agetty[1535]: failed to open credentials directory Jan 30 13:14:51.573563 agetty[1534]: failed to open credentials directory Jan 30 13:14:52.200584 kubelet[1593]: E0130 13:14:52.200506 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:14:52.204929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:14:52.205155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:14:52.205476 systemd[1]: kubelet.service: Consumed 1.697s CPU time. Jan 30 13:15:00.713450 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756). Jan 30 13:15:00.754152 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:00.756159 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:00.761235 systemd-logind[1459]: New session 4 of user core. Jan 30 13:15:00.775974 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:15:00.830188 sshd[1608]: Connection closed by 10.0.0.1 port 51756 Jan 30 13:15:00.830610 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:00.843075 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:51756.service: Deactivated successfully. Jan 30 13:15:00.845183 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:15:00.846887 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:15:00.858106 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:51768.service - OpenSSH per-connection server daemon (10.0.0.1:51768). Jan 30 13:15:00.859190 systemd-logind[1459]: Removed session 4. 
Jan 30 13:15:00.892430 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 51768 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:00.894153 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:00.898286 systemd-logind[1459]: New session 5 of user core. Jan 30 13:15:00.913914 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:15:00.965447 sshd[1615]: Connection closed by 10.0.0.1 port 51768 Jan 30 13:15:00.965868 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:00.977719 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:51768.service: Deactivated successfully. Jan 30 13:15:00.979421 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:15:00.981125 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:15:00.987947 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:51784.service - OpenSSH per-connection server daemon (10.0.0.1:51784). Jan 30 13:15:00.988983 systemd-logind[1459]: Removed session 5. Jan 30 13:15:01.020210 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 51784 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:01.021590 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:01.025577 systemd-logind[1459]: New session 6 of user core. Jan 30 13:15:01.034828 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:15:01.089176 sshd[1622]: Connection closed by 10.0.0.1 port 51784 Jan 30 13:15:01.089534 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:01.104550 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:51784.service: Deactivated successfully. Jan 30 13:15:01.106360 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:15:01.107744 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:15:01.118895 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:51788.service - OpenSSH per-connection server daemon (10.0.0.1:51788). Jan 30 13:15:01.119708 systemd-logind[1459]: Removed session 6. Jan 30 13:15:01.150359 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 51788 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:01.151833 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:01.155425 systemd-logind[1459]: New session 7 of user core. Jan 30 13:15:01.164778 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:15:01.221628 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:15:01.221994 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:01.246645 sudo[1630]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:01.248228 sshd[1629]: Connection closed by 10.0.0.1 port 51788 Jan 30 13:15:01.248623 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:01.267235 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:51788.service: Deactivated successfully. Jan 30 13:15:01.269185 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:15:01.270723 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:15:01.284949 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:51792.service - OpenSSH per-connection server daemon (10.0.0.1:51792). 
Jan 30 13:15:01.285849 systemd-logind[1459]: Removed session 7. Jan 30 13:15:01.317555 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 51792 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:01.319209 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:01.323664 systemd-logind[1459]: New session 8 of user core. Jan 30 13:15:01.336819 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:15:01.390915 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:15:01.391251 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:01.394588 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:01.400079 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:15:01.400404 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:01.419038 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:15:01.449737 augenrules[1661]: No rules Jan 30 13:15:01.451423 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:15:01.451648 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:15:01.453215 sudo[1638]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:01.454775 sshd[1637]: Connection closed by 10.0.0.1 port 51792 Jan 30 13:15:01.455144 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:01.470605 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:51792.service: Deactivated successfully. Jan 30 13:15:01.472218 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:15:01.473857 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:15:01.482951 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:51794.service - OpenSSH per-connection server daemon (10.0.0.1:51794). Jan 30 13:15:01.483848 systemd-logind[1459]: Removed session 8. Jan 30 13:15:01.515625 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 51794 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:01.517053 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:01.520979 systemd-logind[1459]: New session 9 of user core. Jan 30 13:15:01.530796 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:15:01.584044 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:15:01.584365 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:01.977930 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:15:01.978050 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:15:02.226316 dockerd[1693]: time="2025-01-30T13:15:02.226022452Z" level=info msg="Starting up" Jan 30 13:15:02.230367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:15:02.238884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:02.480184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:15:02.485737 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:15:02.762953 kubelet[1725]: E0130 13:15:02.762282 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:15:02.768490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:15:02.768704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:15:02.785139 dockerd[1693]: time="2025-01-30T13:15:02.785099921Z" level=info msg="Loading containers: start." Jan 30 13:15:02.960677 kernel: Initializing XFRM netlink socket Jan 30 13:15:03.039471 systemd-networkd[1405]: docker0: Link UP Jan 30 13:15:03.222144 dockerd[1693]: time="2025-01-30T13:15:03.222103078Z" level=info msg="Loading containers: done." Jan 30 13:15:03.236065 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1992348374-merged.mount: Deactivated successfully. Jan 30 13:15:03.239896 dockerd[1693]: time="2025-01-30T13:15:03.239845572Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:15:03.240172 dockerd[1693]: time="2025-01-30T13:15:03.239986907Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:15:03.240172 dockerd[1693]: time="2025-01-30T13:15:03.240120588Z" level=info msg="Daemon has completed initialization" Jan 30 13:15:03.282930 dockerd[1693]: time="2025-01-30T13:15:03.282843328Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:15:03.283085 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:15:03.816946 containerd[1476]: time="2025-01-30T13:15:03.816897891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:15:04.367957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122654864.mount: Deactivated successfully. 
Jan 30 13:15:05.262803 containerd[1476]: time="2025-01-30T13:15:05.262737346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:05.263379 containerd[1476]: time="2025-01-30T13:15:05.263318406Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:15:05.264525 containerd[1476]: time="2025-01-30T13:15:05.264493650Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:05.267300 containerd[1476]: time="2025-01-30T13:15:05.267253095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:05.268357 containerd[1476]: time="2025-01-30T13:15:05.268323081Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.451377962s" Jan 30 13:15:05.268405 containerd[1476]: time="2025-01-30T13:15:05.268359229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:15:05.269050 containerd[1476]: time="2025-01-30T13:15:05.269023936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:15:06.635310 containerd[1476]: time="2025-01-30T13:15:06.635247212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:06.636164 containerd[1476]: time="2025-01-30T13:15:06.636124407Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:15:06.637266 containerd[1476]: time="2025-01-30T13:15:06.637232725Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:06.640147 containerd[1476]: time="2025-01-30T13:15:06.640093621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:06.641306 containerd[1476]: time="2025-01-30T13:15:06.641249669Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.372194054s" Jan 30 13:15:06.641306 containerd[1476]: time="2025-01-30T13:15:06.641287570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:15:06.641824 
containerd[1476]: time="2025-01-30T13:15:06.641787387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:15:08.711576 containerd[1476]: time="2025-01-30T13:15:08.711488626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:08.712859 containerd[1476]: time="2025-01-30T13:15:08.712806527Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:15:08.714280 containerd[1476]: time="2025-01-30T13:15:08.714252038Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:08.717985 containerd[1476]: time="2025-01-30T13:15:08.717951175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:08.719211 containerd[1476]: time="2025-01-30T13:15:08.719173227Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 2.077283609s" Jan 30 13:15:08.719211 containerd[1476]: time="2025-01-30T13:15:08.719205798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:15:08.719882 containerd[1476]: time="2025-01-30T13:15:08.719855426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:15:10.218944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396381161.mount: Deactivated successfully. 
Jan 30 13:15:11.855297 containerd[1476]: time="2025-01-30T13:15:11.855188525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:11.856819 containerd[1476]: time="2025-01-30T13:15:11.856768789Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:15:11.859116 containerd[1476]: time="2025-01-30T13:15:11.859076607Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:11.862565 containerd[1476]: time="2025-01-30T13:15:11.862523401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:11.863450 containerd[1476]: time="2025-01-30T13:15:11.863386670Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 3.143491308s" Jan 30 13:15:11.863450 containerd[1476]: time="2025-01-30T13:15:11.863442324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:15:11.864064 containerd[1476]: time="2025-01-30T13:15:11.864015559Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:15:12.399409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount474802953.mount: Deactivated successfully. Jan 30 13:15:13.019333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:15:13.047818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:13.227992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:13.234869 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:15:13.306609 kubelet[1999]: E0130 13:15:13.306392 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:15:13.311318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:15:13.311536 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:15:15.081076 containerd[1476]: time="2025-01-30T13:15:15.081012036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:15.082442 containerd[1476]: time="2025-01-30T13:15:15.082400349Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:15:15.083672 containerd[1476]: time="2025-01-30T13:15:15.083631117Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:15.087502 containerd[1476]: time="2025-01-30T13:15:15.087460178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:15.092434 containerd[1476]: time="2025-01-30T13:15:15.092395083Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.228336222s" Jan 30 13:15:15.092434 containerd[1476]: time="2025-01-30T13:15:15.092431281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:15:15.092988 containerd[1476]: time="2025-01-30T13:15:15.092920859Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:15:15.552495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941762572.mount: Deactivated successfully. 
Jan 30 13:15:15.559487 containerd[1476]: time="2025-01-30T13:15:15.559438666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:15.560457 containerd[1476]: time="2025-01-30T13:15:15.560386484Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:15:15.561639 containerd[1476]: time="2025-01-30T13:15:15.561597876Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:15.563647 containerd[1476]: time="2025-01-30T13:15:15.563608717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:15.564319 containerd[1476]: time="2025-01-30T13:15:15.564286979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 471.335251ms" Jan 30 13:15:15.564319 containerd[1476]: time="2025-01-30T13:15:15.564315572Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:15:15.564804 containerd[1476]: time="2025-01-30T13:15:15.564766398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:15:16.065942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount425534474.mount: Deactivated successfully. Jan 30 13:15:19.200331 containerd[1476]: time="2025-01-30T13:15:19.200220542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:19.203161 containerd[1476]: time="2025-01-30T13:15:19.203096816Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:15:19.220503 containerd[1476]: time="2025-01-30T13:15:19.220341195Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:19.224613 containerd[1476]: time="2025-01-30T13:15:19.224561921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:19.225876 containerd[1476]: time="2025-01-30T13:15:19.225820020Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.661025159s" Jan 30 13:15:19.225876 containerd[1476]: time="2025-01-30T13:15:19.225857420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:15:21.126714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
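Editor's note on the pull timings logged so far: dividing each image's reported size by its wall-clock pull duration gives the effective throughput per image. A quick sketch using the exact sizes and durations from the log:

```python
# Arithmetic on the "Pulled image ... size ... in ..." records above.
pulls = {
    "registry.k8s.io/kube-proxy:v1.32.1":      (30908485, 3.143491308),
    "registry.k8s.io/coredns/coredns:v1.11.3": (18562039, 3.228336222),
    "registry.k8s.io/pause:3.10":              (320368,   0.471335251),
    "registry.k8s.io/etcd:3.5.16-0":           (57680541, 3.661025159),
}

for image, (size_bytes, seconds) in pulls.items():
    mb_per_s = size_bytes / seconds / 1e6
    print(f"{image:45s} {mb_per_s:6.2f} MB/s")

# Tiny images (pause) are dominated by registry round-trips, so their
# effective rate (~0.7 MB/s) sits far below the larger layers' ~10-16 MB/s.
```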
Jan 30 13:15:21.134882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:21.161599 systemd[1]: Reloading requested from client PID 2136 ('systemctl') (unit session-9.scope)... Jan 30 13:15:21.161612 systemd[1]: Reloading... Jan 30 13:15:21.239786 zram_generator::config[2175]: No configuration found. Jan 30 13:15:21.575311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:15:21.674255 systemd[1]: Reloading finished in 512 ms. Jan 30 13:15:21.726068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:21.728857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:21.731900 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:15:21.732256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:21.734541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:21.898256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:21.904494 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:15:21.944315 kubelet[2225]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:15:21.944315 kubelet[2225]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:15:21.944315 kubelet[2225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
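Editor's note: the docker.socket warning above is systemd rewriting a legacy ListenStream path at load time. On systemd-based distributions such as Flatcar, /var/run is a symlink to /run, which is why the rewrite is safe; a one-liner to confirm the resolution (assumption: run on such a system):

```python
# Verify the path rewrite systemd applied: /var/run/docker.sock -> /run/docker.sock.
# os.path.realpath resolves the /var/run symlink even if the socket itself
# does not exist yet.
import os

print(os.path.realpath("/var/run/docker.sock"))  # expected: /run/docker.sock
```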
Jan 30 13:15:21.944711 kubelet[2225]: I0130 13:15:21.944395 2225 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:15:22.144948 kubelet[2225]: I0130 13:15:22.144899 2225 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:15:22.144948 kubelet[2225]: I0130 13:15:22.144928 2225 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:15:22.145208 kubelet[2225]: I0130 13:15:22.145177 2225 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:15:22.165822 kubelet[2225]: E0130 13:15:22.165705 2225 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:22.167573 kubelet[2225]: I0130 13:15:22.167533 2225 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:15:22.173439 kubelet[2225]: E0130 13:15:22.173396 2225 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:15:22.173439 kubelet[2225]: I0130 13:15:22.173439 2225 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:15:22.180583 kubelet[2225]: I0130 13:15:22.180559 2225 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:15:22.180930 kubelet[2225]: I0130 13:15:22.180893 2225 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:15:22.181108 kubelet[2225]: I0130 13:15:22.180925 2225 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:15:22.181227 kubelet[2225]: I0130 13:15:22.181117 2225 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:15:22.181227 kubelet[2225]: I0130 13:15:22.181127 2225 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:15:22.181799 kubelet[2225]: I0130 13:15:22.181776 2225 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:15:22.184148 kubelet[2225]: I0130 13:15:22.184123 2225 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:15:22.184148 kubelet[2225]: I0130 13:15:22.184144 2225 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:15:22.184216 kubelet[2225]: I0130 13:15:22.184169 2225 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:15:22.184216 kubelet[2225]: I0130 13:15:22.184185 2225 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:15:22.187048 kubelet[2225]: W0130 13:15:22.186993 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:22.187090 kubelet[2225]: E0130 13:15:22.187057 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:22.187849 kubelet[2225]: I0130 13:15:22.187808 2225 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:15:22.188238 kubelet[2225]: I0130 13:15:22.188213 2225 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:15:22.188238 kubelet[2225]: W0130 13:15:22.188213 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:22.188238 kubelet[2225]: W0130 13:15:22.188293 2225 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:15:22.188238 kubelet[2225]: E0130 13:15:22.188310 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:22.190535 kubelet[2225]: I0130 13:15:22.190504 2225 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:15:22.190578 kubelet[2225]: I0130 13:15:22.190546 2225 server.go:1287] "Started kubelet" Jan 30 13:15:22.191479 kubelet[2225]: I0130 13:15:22.190701 2225 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:15:22.191479 kubelet[2225]: I0130 13:15:22.190861 2225 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:15:22.191479 kubelet[2225]: I0130 13:15:22.191226 2225 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:15:22.192001 kubelet[2225]: I0130 13:15:22.191972 2225 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:15:22.193184 kubelet[2225]: I0130 13:15:22.192688 2225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:15:22.193184 kubelet[2225]: I0130 13:15:22.192989 2225 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:15:22.194067 kubelet[2225]: E0130 13:15:22.194037 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:22.194105 kubelet[2225]: I0130 13:15:22.194078 2225 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:15:22.194315 kubelet[2225]: I0130 13:15:22.194288 2225 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:15:22.194380 kubelet[2225]: I0130 13:15:22.194372 2225 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:15:22.195066 kubelet[2225]: W0130 13:15:22.194711 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:22.195066 kubelet[2225]: E0130 13:15:22.194763 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: 
connection refused" logger="UnhandledError" Jan 30 13:15:22.195230 kubelet[2225]: E0130 13:15:22.194148 2225 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7aba6680cbb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:15:22.190523316 +0000 UTC m=+0.280731381,LastTimestamp:2025-01-30 13:15:22.190523316 +0000 UTC m=+0.280731381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:15:22.195682 kubelet[2225]: E0130 13:15:22.195219 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Jan 30 13:15:22.196039 kubelet[2225]: I0130 13:15:22.196018 2225 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:15:22.196117 kubelet[2225]: I0130 13:15:22.196102 2225 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:15:22.197043 kubelet[2225]: E0130 13:15:22.197009 2225 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:15:22.198099 kubelet[2225]: I0130 13:15:22.197626 2225 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:15:22.211142 kubelet[2225]: I0130 13:15:22.211067 2225 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:15:22.211142 kubelet[2225]: I0130 13:15:22.211118 2225 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:15:22.211142 kubelet[2225]: I0130 13:15:22.211142 2225 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:15:22.211790 kubelet[2225]: I0130 13:15:22.211681 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:15:22.213668 kubelet[2225]: I0130 13:15:22.213096 2225 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:15:22.213668 kubelet[2225]: I0130 13:15:22.213146 2225 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:15:22.213668 kubelet[2225]: I0130 13:15:22.213172 2225 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:15:22.213668 kubelet[2225]: I0130 13:15:22.213182 2225 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:15:22.213668 kubelet[2225]: E0130 13:15:22.213237 2225 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:15:22.213668 kubelet[2225]: W0130 13:15:22.213603 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:22.213668 kubelet[2225]: E0130 13:15:22.213630 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:22.295165 kubelet[2225]: E0130 13:15:22.295100 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:22.314307 kubelet[2225]: E0130 13:15:22.314265 2225 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:15:22.395695 kubelet[2225]: E0130 13:15:22.395643 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:22.397218 kubelet[2225]: E0130 13:15:22.397174 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Jan 30 13:15:22.495847 kubelet[2225]: E0130 13:15:22.495716 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:22.515042 kubelet[2225]: E0130 13:15:22.514977 2225 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:15:22.596480 kubelet[2225]: E0130 13:15:22.596430 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:22.624213 kubelet[2225]: I0130 13:15:22.624168 2225 policy_none.go:49] "None policy: Start" Jan 30 13:15:22.624283 kubelet[2225]: I0130 13:15:22.624224 2225 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:15:22.624283 kubelet[2225]: I0130 13:15:22.624246 2225 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:15:22.631047 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:15:22.645387 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:15:22.648454 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:15:22.662353 kubelet[2225]: I0130 13:15:22.662177 2225 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:15:22.662500 kubelet[2225]: I0130 13:15:22.662467 2225 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:15:22.662548 kubelet[2225]: I0130 13:15:22.662490 2225 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:15:22.662839 kubelet[2225]: I0130 13:15:22.662802 2225 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:15:22.663694 kubelet[2225]: E0130 13:15:22.663669 2225 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:15:22.663758 kubelet[2225]: E0130 13:15:22.663733 2225 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:15:22.764404 kubelet[2225]: I0130 13:15:22.764275 2225 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:15:22.764814 kubelet[2225]: E0130 13:15:22.764785 2225 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 30 13:15:22.798919 kubelet[2225]: E0130 13:15:22.798856 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Jan 30 13:15:22.924636 systemd[1]: Created slice kubepods-burstable-poda864e30205ec1d5cab941e48b2931a5d.slice - libcontainer container kubepods-burstable-poda864e30205ec1d5cab941e48b2931a5d.slice. Jan 30 13:15:22.940789 kubelet[2225]: E0130 13:15:22.940745 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:22.943701 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 30 13:15:22.953007 kubelet[2225]: E0130 13:15:22.952953 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:22.955389 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. 
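Editor's note: the "Failed to ensure lease exists, will retry" interval doubles on each failure: 200ms, then 400ms, now 800ms (and 1.6s further down). A sketch of that retry schedule, assuming plain doubling with the endpoints observed in the log:

```python
# Reproduce the doubling retry intervals reported by controller.go above.
def retry_intervals(start_ms: float = 200.0, factor: float = 2.0, attempts: int = 4):
    interval = start_ms
    for _ in range(attempts):
        yield interval
        interval *= factor

for ms in retry_intervals():
    print(f"retry in {ms / 1000:g}s")   # 0.2s, 0.4s, 0.8s, 1.6s
```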
Jan 30 13:15:22.957117 kubelet[2225]: E0130 13:15:22.957077 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:22.966476 kubelet[2225]: I0130 13:15:22.966436 2225 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:15:22.966847 kubelet[2225]: E0130 13:15:22.966816 2225 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 30 13:15:23.000462 kubelet[2225]: I0130 13:15:23.000401 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:23.000462 kubelet[2225]: I0130 13:15:23.000452 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:23.000462 kubelet[2225]: I0130 13:15:23.000475 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a864e30205ec1d5cab941e48b2931a5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a864e30205ec1d5cab941e48b2931a5d\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:23.000646 kubelet[2225]: I0130 13:15:23.000490 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a864e30205ec1d5cab941e48b2931a5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a864e30205ec1d5cab941e48b2931a5d\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:23.000646 kubelet[2225]: I0130 13:15:23.000507 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:23.000646 kubelet[2225]: I0130 13:15:23.000520 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:23.000646 kubelet[2225]: I0130 13:15:23.000538 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:23.000646 kubelet[2225]: I0130 13:15:23.000619 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:23.000785 kubelet[2225]: I0130 13:15:23.000686 2225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a864e30205ec1d5cab941e48b2931a5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a864e30205ec1d5cab941e48b2931a5d\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:23.241958 kubelet[2225]: E0130 13:15:23.241830 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:23.242840 containerd[1476]: time="2025-01-30T13:15:23.242778298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a864e30205ec1d5cab941e48b2931a5d,Namespace:kube-system,Attempt:0,}" Jan 30 13:15:23.253936 kubelet[2225]: E0130 13:15:23.253910 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:23.254555 containerd[1476]: time="2025-01-30T13:15:23.254418524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 13:15:23.257922 kubelet[2225]: E0130 13:15:23.257893 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:23.258436 containerd[1476]: time="2025-01-30T13:15:23.258386211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 13:15:23.276393 kubelet[2225]: W0130 13:15:23.276321 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:23.276462 kubelet[2225]: E0130 13:15:23.276403 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:23.279117 kubelet[2225]: E0130 13:15:23.278982 2225 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7aba6680cbb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:15:22.190523316 +0000 UTC m=+0.280731381,LastTimestamp:2025-01-30 13:15:22.190523316 +0000 UTC 
m=+0.280731381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:15:23.327117 kubelet[2225]: W0130 13:15:23.327032 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:23.327117 kubelet[2225]: E0130 13:15:23.327120 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:23.368616 kubelet[2225]: I0130 13:15:23.368561 2225 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:15:23.368928 kubelet[2225]: E0130 13:15:23.368903 2225 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Jan 30 13:15:23.536888 kubelet[2225]: W0130 13:15:23.536711 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:23.536888 kubelet[2225]: E0130 13:15:23.536796 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:23.599844 kubelet[2225]: E0130 13:15:23.599798 2225 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Jan 30 13:15:23.654497 kubelet[2225]: W0130 13:15:23.654458 2225 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Jan 30 13:15:23.654613 kubelet[2225]: E0130 13:15:23.654509 2225 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:24.061731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482976390.mount: Deactivated successfully. 
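Editor's note: the recurring dns.go "Nameserver limits exceeded" errors reflect the kubelet's cap of three nameservers when building pod resolv.conf; the applied line keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A sketch of that truncation (not kubelet's code):

```python
# Mimic the kubelet's three-nameserver cap seen in the dns.go errors above.
MAX_NAMESERVERS = 3  # kubelet's limit for pod resolv.conf

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = [line.split()[1]
               for line in resolv_conf.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS]

conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 9.9.9.9\n")
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```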
Jan 30 13:15:24.066682 containerd[1476]: time="2025-01-30T13:15:24.066629451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:15:24.069678 containerd[1476]: time="2025-01-30T13:15:24.069622366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:15:24.070605 containerd[1476]: time="2025-01-30T13:15:24.070567240Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:15:24.072490 containerd[1476]: time="2025-01-30T13:15:24.072458850Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:15:24.073256 containerd[1476]: time="2025-01-30T13:15:24.073209561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:15:24.074217 containerd[1476]: time="2025-01-30T13:15:24.074160877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:15:24.075065 containerd[1476]: time="2025-01-30T13:15:24.075020607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:15:24.075995 containerd[1476]: time="2025-01-30T13:15:24.075961613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:15:24.076882 containerd[1476]: time="2025-01-30T13:15:24.076850869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 833.939074ms" Jan 30 13:15:24.079001 containerd[1476]: time="2025-01-30T13:15:24.078976730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 824.446081ms" Jan 30 13:15:24.080968 containerd[1476]: time="2025-01-30T13:15:24.080943435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 822.485355ms" Jan 30 13:15:24.170909 kubelet[2225]: I0130 13:15:24.170882 2225 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:15:24.172414 kubelet[2225]: E0130 13:15:24.172388 2225 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 
10.0.0.150:6443: connect: connection refused" node="localhost" Jan 30 13:15:24.230588 containerd[1476]: time="2025-01-30T13:15:24.230505864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:15:24.230785 containerd[1476]: time="2025-01-30T13:15:24.230560960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:15:24.230785 containerd[1476]: time="2025-01-30T13:15:24.230618461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:24.230885 containerd[1476]: time="2025-01-30T13:15:24.230841439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:24.231711 containerd[1476]: time="2025-01-30T13:15:24.229611567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:15:24.231711 containerd[1476]: time="2025-01-30T13:15:24.231688765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:15:24.231711 containerd[1476]: time="2025-01-30T13:15:24.231705998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:24.232803 containerd[1476]: time="2025-01-30T13:15:24.231869482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:24.232869 containerd[1476]: time="2025-01-30T13:15:24.232775380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:15:24.232869 containerd[1476]: time="2025-01-30T13:15:24.232846687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:15:24.233138 containerd[1476]: time="2025-01-30T13:15:24.232894990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:24.233534 containerd[1476]: time="2025-01-30T13:15:24.233503828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:24.255791 systemd[1]: Started cri-containerd-00e8224fdcb4e1e9831e475bd7059bb55de3260598e10c5f617710223315085c.scope - libcontainer container 00e8224fdcb4e1e9831e475bd7059bb55de3260598e10c5f617710223315085c. Jan 30 13:15:24.260475 systemd[1]: Started cri-containerd-994b1686cac3454388e8a4416f3bbdcfd2db1566b4deae770052293750a3a4ee.scope - libcontainer container 994b1686cac3454388e8a4416f3bbdcfd2db1566b4deae770052293750a3a4ee. Jan 30 13:15:24.262274 systemd[1]: Started cri-containerd-ac57e6f493ae2ab234926d71ecba725c085f8ac3f492ff1945958e7190e06561.scope - libcontainer container ac57e6f493ae2ab234926d71ecba725c085f8ac3f492ff1945958e7190e06561. 
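Editor's note: each sandbox started above gets a transient systemd scope named after its full sandbox ID. A sketch of the naming convention observed in these records (cri-containerd-&lt;id&gt;.scope), handy for mapping systemctl output back to the RunPodSandbox lines:

```python
# Build the transient scope unit name systemd reports for a CRI sandbox,
# following the "Started cri-containerd-....scope" records above.
def scope_unit(sandbox_id: str) -> str:
    return f"cri-containerd-{sandbox_id}.scope"

# Sandbox ID taken from the kube-apiserver-localhost sandbox in the log:
print(scope_unit("00e8224fdcb4e1e9831e475bd7059bb55de3260598e10c5f617710223315085c"))
```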
Jan 30 13:15:24.292747 containerd[1476]: time="2025-01-30T13:15:24.292698312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a864e30205ec1d5cab941e48b2931a5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"00e8224fdcb4e1e9831e475bd7059bb55de3260598e10c5f617710223315085c\"" Jan 30 13:15:24.295845 kubelet[2225]: E0130 13:15:24.295780 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:24.299449 containerd[1476]: time="2025-01-30T13:15:24.299408441Z" level=info msg="CreateContainer within sandbox \"00e8224fdcb4e1e9831e475bd7059bb55de3260598e10c5f617710223315085c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:15:24.305175 containerd[1476]: time="2025-01-30T13:15:24.305131846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"994b1686cac3454388e8a4416f3bbdcfd2db1566b4deae770052293750a3a4ee\"" Jan 30 13:15:24.305341 containerd[1476]: time="2025-01-30T13:15:24.305282746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac57e6f493ae2ab234926d71ecba725c085f8ac3f492ff1945958e7190e06561\"" Jan 30 13:15:24.305904 kubelet[2225]: E0130 13:15:24.305870 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:24.305980 kubelet[2225]: E0130 13:15:24.305918 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:24.307957 containerd[1476]: time="2025-01-30T13:15:24.307934435Z" level=info msg="CreateContainer within sandbox \"ac57e6f493ae2ab234926d71ecba725c085f8ac3f492ff1945958e7190e06561\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:15:24.308679 containerd[1476]: time="2025-01-30T13:15:24.308623217Z" level=info msg="CreateContainer within sandbox \"994b1686cac3454388e8a4416f3bbdcfd2db1566b4deae770052293750a3a4ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:15:24.322195 containerd[1476]: time="2025-01-30T13:15:24.322114652Z" level=info msg="CreateContainer within sandbox \"00e8224fdcb4e1e9831e475bd7059bb55de3260598e10c5f617710223315085c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71f90070dc2548af0f52fd988b9aa38ab28eaa2476d0eaf12cd1cbf98a1a279d\"" Jan 30 13:15:24.322830 containerd[1476]: time="2025-01-30T13:15:24.322798534Z" level=info msg="StartContainer for \"71f90070dc2548af0f52fd988b9aa38ab28eaa2476d0eaf12cd1cbf98a1a279d\"" Jan 30 13:15:24.335143 containerd[1476]: time="2025-01-30T13:15:24.335038908Z" level=info msg="CreateContainer within sandbox \"ac57e6f493ae2ab234926d71ecba725c085f8ac3f492ff1945958e7190e06561\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc37fbe4c8156aadce6c2a8e3714d5245869594bc0a429b0c149197217ee172c\"" Jan 30 13:15:24.335601 containerd[1476]: time="2025-01-30T13:15:24.335574626Z" level=info msg="StartContainer for \"fc37fbe4c8156aadce6c2a8e3714d5245869594bc0a429b0c149197217ee172c\"" Jan 30 13:15:24.340031 
containerd[1476]: time="2025-01-30T13:15:24.339963029Z" level=info msg="CreateContainer within sandbox \"994b1686cac3454388e8a4416f3bbdcfd2db1566b4deae770052293750a3a4ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f2a62c203afc39bd194876e3259de69bd23a2cfff611043c0f4bc672384f452\"" Jan 30 13:15:24.340770 containerd[1476]: time="2025-01-30T13:15:24.340746893Z" level=info msg="StartContainer for \"3f2a62c203afc39bd194876e3259de69bd23a2cfff611043c0f4bc672384f452\"" Jan 30 13:15:24.350084 systemd[1]: Started cri-containerd-71f90070dc2548af0f52fd988b9aa38ab28eaa2476d0eaf12cd1cbf98a1a279d.scope - libcontainer container 71f90070dc2548af0f52fd988b9aa38ab28eaa2476d0eaf12cd1cbf98a1a279d. Jan 30 13:15:24.362839 systemd[1]: Started cri-containerd-fc37fbe4c8156aadce6c2a8e3714d5245869594bc0a429b0c149197217ee172c.scope - libcontainer container fc37fbe4c8156aadce6c2a8e3714d5245869594bc0a429b0c149197217ee172c. Jan 30 13:15:24.366633 kubelet[2225]: E0130 13:15:24.366514 2225 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.150:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:15:24.375835 systemd[1]: Started cri-containerd-3f2a62c203afc39bd194876e3259de69bd23a2cfff611043c0f4bc672384f452.scope - libcontainer container 3f2a62c203afc39bd194876e3259de69bd23a2cfff611043c0f4bc672384f452. Jan 30 13:15:24.398032 containerd[1476]: time="2025-01-30T13:15:24.397983599Z" level=info msg="StartContainer for \"71f90070dc2548af0f52fd988b9aa38ab28eaa2476d0eaf12cd1cbf98a1a279d\" returns successfully" Jan 30 13:15:24.422367 containerd[1476]: time="2025-01-30T13:15:24.422155693Z" level=info msg="StartContainer for \"fc37fbe4c8156aadce6c2a8e3714d5245869594bc0a429b0c149197217ee172c\" returns successfully" Jan 30 13:15:24.422367 containerd[1476]: time="2025-01-30T13:15:24.422275152Z" level=info msg="StartContainer for \"3f2a62c203afc39bd194876e3259de69bd23a2cfff611043c0f4bc672384f452\" returns successfully" Jan 30 13:15:25.225568 kubelet[2225]: E0130 13:15:25.224952 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:25.225568 kubelet[2225]: E0130 13:15:25.225061 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:25.226070 kubelet[2225]: E0130 13:15:25.225904 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:25.226070 kubelet[2225]: E0130 13:15:25.225979 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:25.230223 kubelet[2225]: E0130 13:15:25.229607 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:25.230223 kubelet[2225]: E0130 13:15:25.229724 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:25.425113 kubelet[2225]: E0130 13:15:25.425072 2225 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:15:25.774285 kubelet[2225]: I0130 13:15:25.774235 2225 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:15:25.879905 kubelet[2225]: I0130 13:15:25.879852 2225 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:15:25.879905 kubelet[2225]: E0130 13:15:25.879902 2225 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:15:25.882828 kubelet[2225]: E0130 13:15:25.882808 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:25.983511 kubelet[2225]: E0130 13:15:25.983466 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:26.084230 kubelet[2225]: E0130 13:15:26.084105 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:26.185028 kubelet[2225]: E0130 13:15:26.184984 2225 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:26.228883 kubelet[2225]: E0130 13:15:26.228851 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:26.229242 kubelet[2225]: E0130 13:15:26.229001 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:26.229457 kubelet[2225]: E0130 13:15:26.229443 2225 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:15:26.229872 kubelet[2225]: E0130 13:15:26.229859 2225 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:26.295523 kubelet[2225]: I0130 13:15:26.295480 2225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:26.300338 kubelet[2225]: E0130 13:15:26.300265 2225 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:26.300338 kubelet[2225]: I0130 13:15:26.300301 2225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:26.301959 kubelet[2225]: E0130 13:15:26.301921 2225 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:26.301959 kubelet[2225]: I0130 13:15:26.301942 2225 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:26.303023 kubelet[2225]: E0130 13:15:26.302957 2225 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:27.190346 kubelet[2225]: I0130 13:15:27.190296 2225 apiserver.go:52] "Watching apiserver" Jan 30 13:15:27.195296 kubelet[2225]: I0130 13:15:27.195244 2225 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:15:27.839896 systemd[1]: Reloading requested from client PID 2506 ('systemctl') (unit session-9.scope)... Jan 30 13:15:27.839910 systemd[1]: Reloading... Jan 30 13:15:27.908949 zram_generator::config[2546]: No configuration found. Jan 30 13:15:28.016808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:15:28.106801 systemd[1]: Reloading finished in 266 ms. Jan 30 13:15:28.156305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:28.172231 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:15:28.172537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:28.187320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:28.340964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:28.345501 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:15:28.383127 kubelet[2590]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:15:28.383127 kubelet[2590]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:15:28.383127 kubelet[2590]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:15:28.383480 kubelet[2590]: I0130 13:15:28.383112 2590 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:15:28.389033 kubelet[2590]: I0130 13:15:28.389001 2590 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:15:28.389033 kubelet[2590]: I0130 13:15:28.389024 2590 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:15:28.389273 kubelet[2590]: I0130 13:15:28.389253 2590 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:15:28.390546 kubelet[2590]: I0130 13:15:28.390528 2590 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
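Editor's note: "Client rotation is on" together with the cert/key pair loaded from kubelet-client-current.pem (a symlink that rotation repoints at the newest kubelet-client-*.pem) can be inspected with openssl. A sketch, assuming the openssl CLI is available on the node:

```python
# Inspect the rotating kubelet client certificate referenced in the log above.
import subprocess

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"
out = subprocess.run(
    ["openssl", "x509", "-in", PEM, "-noout", "-subject", "-enddate"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)  # rotation repoints the symlink before this notAfter date
```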
Jan 30 13:15:28.392553 kubelet[2590]: I0130 13:15:28.392528 2590 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:15:28.395568 kubelet[2590]: E0130 13:15:28.395539 2590 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:15:28.395568 kubelet[2590]: I0130 13:15:28.395564 2590 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:15:28.399799 kubelet[2590]: I0130 13:15:28.399780 2590 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:15:28.400044 kubelet[2590]: I0130 13:15:28.400010 2590 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:15:28.400197 kubelet[2590]: I0130 13:15:28.400037 2590 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:15:28.400280 kubelet[2590]: I0130 13:15:28.400197 2590 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:15:28.400280 kubelet[2590]: I0130 13:15:28.400206 2590 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:15:28.400280 kubelet[2590]: I0130 13:15:28.400242 2590 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:15:28.400414 kubelet[2590]: I0130 13:15:28.400398 2590 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:15:28.400414 kubelet[2590]: I0130 13:15:28.400412 2590 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:15:28.400461 kubelet[2590]: I0130 13:15:28.400427 2590 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:15:28.400461 kubelet[2590]: I0130 13:15:28.400435 2590 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 30 13:15:28.401321 kubelet[2590]: I0130 13:15:28.401303 2590 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:15:28.401620 kubelet[2590]: I0130 13:15:28.401599 2590 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:15:28.401983 kubelet[2590]: I0130 13:15:28.401971 2590 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:15:28.402031 kubelet[2590]: I0130 13:15:28.401993 2590 server.go:1287] "Started kubelet" Jan 30 13:15:28.402906 kubelet[2590]: I0130 13:15:28.402854 2590 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:15:28.406161 kubelet[2590]: I0130 13:15:28.405952 2590 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:15:28.406161 kubelet[2590]: I0130 13:15:28.403460 2590 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:15:28.409047 kubelet[2590]: I0130 13:15:28.403000 2590 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:15:28.409533 kubelet[2590]: I0130 13:15:28.403836 2590 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:15:28.410330 kubelet[2590]: I0130 13:15:28.410313 2590 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:15:28.410778 kubelet[2590]: I0130 13:15:28.410719 2590 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:15:28.412366 kubelet[2590]: I0130 13:15:28.411592 2590 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:15:28.413102 kubelet[2590]: E0130 13:15:28.413066 2590 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:15:28.413521 kubelet[2590]: I0130 13:15:28.413504 2590 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:15:28.415876 kubelet[2590]: E0130 13:15:28.415853 2590 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:15:28.416236 kubelet[2590]: I0130 13:15:28.416218 2590 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:15:28.416412 kubelet[2590]: I0130 13:15:28.416390 2590 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:15:28.418015 kubelet[2590]: I0130 13:15:28.417993 2590 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:15:28.421023 kubelet[2590]: I0130 13:15:28.420981 2590 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:15:28.422229 kubelet[2590]: I0130 13:15:28.422215 2590 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:15:28.422314 kubelet[2590]: I0130 13:15:28.422305 2590 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:15:28.422383 kubelet[2590]: I0130 13:15:28.422373 2590 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
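The Unimplemented error above is the kubelet probing the CRI RuntimeConfig RPC (used by the KubeletCgroupDriverFromCRI feature gate to learn the cgroup driver from the runtime); this containerd v1.7.23 does not serve it, so the kubelet falls back to the cgroupDriver from its own config, matching "CgroupDriver":"systemd" in the node config dump above. A minimal sketch of that probe-and-fall-back pattern against a hypothetical client interface (the real generated stubs live in k8s.io/cri-api/pkg/apis/runtime/v1 and take richer request types):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Hypothetical stand-in for the CRI runtime client.
    type runtimeClient interface {
        RuntimeConfig(ctx context.Context) (cgroupDriver string, err error)
    }

    // oldContainerd simulates a runtime without the RuntimeConfig RPC,
    // reproducing the Unimplemented error seen in the log.
    type oldContainerd struct{}

    func (oldContainerd) RuntimeConfig(context.Context) (string, error) {
        return "", status.Error(codes.Unimplemented, "unknown method RuntimeConfig")
    }

    // cgroupDriverFor prefers the runtime-reported driver and falls back
    // to the kubelet-config value when the RPC is not implemented.
    func cgroupDriverFor(ctx context.Context, rc runtimeClient, fromConfig string) string {
        driver, err := rc.RuntimeConfig(ctx)
        if status.Code(err) == codes.Unimplemented {
            fmt.Println("CRI RuntimeConfig unimplemented, using kubelet config value")
            return fromConfig
        }
        if err != nil {
            return fromConfig
        }
        return driver
    }

    func main() {
        fmt.Println(cgroupDriverFor(context.Background(), oldContainerd{}, "systemd")) // systemd
    }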
Jan 30 13:15:28.422511 kubelet[2590]: I0130 13:15:28.422501 2590 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:15:28.422805 kubelet[2590]: E0130 13:15:28.422598 2590 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:15:28.451720 kubelet[2590]: I0130 13:15:28.451692 2590 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:15:28.451720 kubelet[2590]: I0130 13:15:28.451710 2590 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:15:28.451720 kubelet[2590]: I0130 13:15:28.451729 2590 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:15:28.451892 kubelet[2590]: I0130 13:15:28.451856 2590 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:15:28.451892 kubelet[2590]: I0130 13:15:28.451866 2590 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:15:28.451892 kubelet[2590]: I0130 13:15:28.451882 2590 policy_none.go:49] "None policy: Start" Jan 30 13:15:28.451892 kubelet[2590]: I0130 13:15:28.451891 2590 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:15:28.451975 kubelet[2590]: I0130 13:15:28.451901 2590 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:15:28.451996 kubelet[2590]: I0130 13:15:28.451985 2590 state_mem.go:75] "Updated machine memory state" Jan 30 13:15:28.455635 kubelet[2590]: I0130 13:15:28.455601 2590 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:15:28.456024 kubelet[2590]: I0130 13:15:28.455803 2590 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:15:28.456024 kubelet[2590]: I0130 13:15:28.455818 2590 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:15:28.456024 kubelet[2590]: I0130 13:15:28.456010 2590 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:15:28.457144 kubelet[2590]: E0130 13:15:28.457116 2590 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:15:28.524102 kubelet[2590]: I0130 13:15:28.524055 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:28.524102 kubelet[2590]: I0130 13:15:28.524079 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:28.524254 kubelet[2590]: I0130 13:15:28.524057 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:28.560310 kubelet[2590]: I0130 13:15:28.560261 2590 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:15:28.565267 kubelet[2590]: I0130 13:15:28.565239 2590 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 13:15:28.565362 kubelet[2590]: I0130 13:15:28.565324 2590 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:15:28.612715 kubelet[2590]: I0130 13:15:28.612637 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a864e30205ec1d5cab941e48b2931a5d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a864e30205ec1d5cab941e48b2931a5d\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:28.612715 kubelet[2590]: I0130 13:15:28.612708 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a864e30205ec1d5cab941e48b2931a5d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a864e30205ec1d5cab941e48b2931a5d\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:28.612908 kubelet[2590]: I0130 13:15:28.612740 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:28.612908 kubelet[2590]: I0130 13:15:28.612812 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:28.612908 kubelet[2590]: I0130 13:15:28.612854 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:28.612908 kubelet[2590]: I0130 13:15:28.612873 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:28.612908 kubelet[2590]: I0130 13:15:28.612895 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:28.613062 kubelet[2590]: I0130 13:15:28.612925 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:28.613062 kubelet[2590]: I0130 13:15:28.612944 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a864e30205ec1d5cab941e48b2931a5d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a864e30205ec1d5cab941e48b2931a5d\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:28.830215 kubelet[2590]: E0130 13:15:28.830113 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:28.830215 kubelet[2590]: E0130 13:15:28.830192 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:28.832010 kubelet[2590]: E0130 13:15:28.831966 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:28.841923 sudo[2626]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:15:28.842365 sudo[2626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:15:29.293636 sudo[2626]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:29.401751 kubelet[2590]: I0130 13:15:29.401714 2590 apiserver.go:52] "Watching apiserver" Jan 30 13:15:29.411038 kubelet[2590]: I0130 13:15:29.411000 2590 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:15:29.435754 kubelet[2590]: I0130 13:15:29.435493 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:29.436086 kubelet[2590]: I0130 13:15:29.436073 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:29.436362 kubelet[2590]: I0130 13:15:29.436285 2590 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:29.545153 kubelet[2590]: E0130 13:15:29.544937 2590 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:15:29.546158 kubelet[2590]: E0130 13:15:29.545554 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:29.546158 kubelet[2590]: E0130 13:15:29.545746 2590 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:15:29.546158 kubelet[2590]: 
E0130 13:15:29.545893 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:29.546158 kubelet[2590]: E0130 13:15:29.545997 2590 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:15:29.546158 kubelet[2590]: E0130 13:15:29.546106 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:30.105841 kubelet[2590]: I0130 13:15:30.105764 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.105703226 podStartE2EDuration="2.105703226s" podCreationTimestamp="2025-01-30 13:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:15:30.105691193 +0000 UTC m=+1.756207302" watchObservedRunningTime="2025-01-30 13:15:30.105703226 +0000 UTC m=+1.756219335" Jan 30 13:15:30.112853 kubelet[2590]: I0130 13:15:30.112730 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.11270738 podStartE2EDuration="2.11270738s" podCreationTimestamp="2025-01-30 13:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:15:30.112617719 +0000 UTC m=+1.763133828" watchObservedRunningTime="2025-01-30 13:15:30.11270738 +0000 UTC m=+1.763223499" Jan 30 13:15:30.128355 kubelet[2590]: I0130 13:15:30.128248 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.128227479 podStartE2EDuration="2.128227479s" podCreationTimestamp="2025-01-30 13:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:15:30.119593019 +0000 UTC m=+1.770109128" watchObservedRunningTime="2025-01-30 13:15:30.128227479 +0000 UTC m=+1.778743608" Jan 30 13:15:30.437420 kubelet[2590]: E0130 13:15:30.437277 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:30.437420 kubelet[2590]: E0130 13:15:30.437406 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:30.437884 kubelet[2590]: E0130 13:15:30.437501 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:30.616177 sudo[1672]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:30.617539 sshd[1671]: Connection closed by 10.0.0.1 port 51794 Jan 30 13:15:30.618054 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:30.620957 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:51794.service: Deactivated successfully. Jan 30 13:15:30.623870 systemd[1]: session-9.scope: Deactivated successfully. 
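The recurring "Nameserver limits exceeded" errors above come from the kubelet's resolv.conf handling: the glibc resolver only honors the first three nameserver entries, so the kubelet trims the host list to three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive) and logs the rest as omitted. A minimal stdlib sketch of that trimming, assuming the conventional cap of three:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("omitting %d nameserver(s): %v\n",
                len(servers)-maxNameservers, servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }

The error repeats per pod because each pod's resolv.conf is derived from the host's, so every sandbox setup re-triggers the same warning.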
Jan 30 13:15:30.624056 systemd[1]: session-9.scope: Consumed 3.996s CPU time, 155.2M memory peak, 0B memory swap peak. Jan 30 13:15:30.625466 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:15:30.626319 systemd-logind[1459]: Removed session 9. Jan 30 13:15:31.439492 kubelet[2590]: E0130 13:15:31.439462 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:33.008419 kubelet[2590]: E0130 13:15:33.008378 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:33.381894 kubelet[2590]: I0130 13:15:33.381785 2590 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:15:33.382316 containerd[1476]: time="2025-01-30T13:15:33.382271491Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:15:33.382734 kubelet[2590]: I0130 13:15:33.382416 2590 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:15:34.147321 kubelet[2590]: I0130 13:15:34.147278 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94035250-82ce-461b-9571-a1258e9c40ee-lib-modules\") pod \"kube-proxy-bqppc\" (UID: \"94035250-82ce-461b-9571-a1258e9c40ee\") " pod="kube-system/kube-proxy-bqppc" Jan 30 13:15:34.147321 kubelet[2590]: I0130 13:15:34.147318 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-config-path\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.147812 kubelet[2590]: I0130 13:15:34.147342 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xfsn\" (UniqueName: \"kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-kube-api-access-8xfsn\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.147812 kubelet[2590]: I0130 13:15:34.147359 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94035250-82ce-461b-9571-a1258e9c40ee-xtables-lock\") pod \"kube-proxy-bqppc\" (UID: \"94035250-82ce-461b-9571-a1258e9c40ee\") " pod="kube-system/kube-proxy-bqppc" Jan 30 13:15:34.147812 kubelet[2590]: I0130 13:15:34.147374 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-kernel\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.147812 kubelet[2590]: I0130 13:15:34.147391 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-bpf-maps\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 
13:15:34.147812 kubelet[2590]: I0130 13:15:34.147405 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-xtables-lock\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.147812 kubelet[2590]: I0130 13:15:34.147418 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94035250-82ce-461b-9571-a1258e9c40ee-kube-proxy\") pod \"kube-proxy-bqppc\" (UID: \"94035250-82ce-461b-9571-a1258e9c40ee\") " pod="kube-system/kube-proxy-bqppc" Jan 30 13:15:34.148011 kubelet[2590]: I0130 13:15:34.147479 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s725x\" (UniqueName: \"kubernetes.io/projected/94035250-82ce-461b-9571-a1258e9c40ee-kube-api-access-s725x\") pod \"kube-proxy-bqppc\" (UID: \"94035250-82ce-461b-9571-a1258e9c40ee\") " pod="kube-system/kube-proxy-bqppc" Jan 30 13:15:34.148011 kubelet[2590]: I0130 13:15:34.147517 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-etc-cni-netd\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148011 kubelet[2590]: I0130 13:15:34.147534 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-net\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148011 kubelet[2590]: I0130 13:15:34.147550 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-run\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148011 kubelet[2590]: I0130 13:15:34.147564 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-cgroup\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148172 kubelet[2590]: I0130 13:15:34.147577 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1241a703-f12e-45a9-a38d-0fddcc34b1d3-clustermesh-secrets\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148172 kubelet[2590]: I0130 13:15:34.147592 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hostproc\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148172 kubelet[2590]: I0130 13:15:34.147607 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cni-path\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148172 kubelet[2590]: I0130 13:15:34.147620 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-lib-modules\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148172 kubelet[2590]: I0130 13:15:34.147635 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hubble-tls\") pod \"cilium-2mwbx\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") " pod="kube-system/cilium-2mwbx" Jan 30 13:15:34.148487 systemd[1]: Created slice kubepods-besteffort-pod94035250_82ce_461b_9571_a1258e9c40ee.slice - libcontainer container kubepods-besteffort-pod94035250_82ce_461b_9571_a1258e9c40ee.slice. Jan 30 13:15:34.160104 systemd[1]: Created slice kubepods-burstable-pod1241a703_f12e_45a9_a38d_0fddcc34b1d3.slice - libcontainer container kubepods-burstable-pod1241a703_f12e_45a9_a38d_0fddcc34b1d3.slice. Jan 30 13:15:34.400717 systemd[1]: Created slice kubepods-besteffort-pod1549af23_943a_4e88_a923_20425b4cdf74.slice - libcontainer container kubepods-besteffort-pod1549af23_943a_4e88_a923_20425b4cdf74.slice. Jan 30 13:15:34.451077 kubelet[2590]: I0130 13:15:34.451028 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1549af23-943a-4e88-a923-20425b4cdf74-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-q2njb\" (UID: \"1549af23-943a-4e88-a923-20425b4cdf74\") " pod="kube-system/cilium-operator-6c4d7847fc-q2njb" Jan 30 13:15:34.451077 kubelet[2590]: I0130 13:15:34.451063 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2hv\" (UniqueName: \"kubernetes.io/projected/1549af23-943a-4e88-a923-20425b4cdf74-kube-api-access-9z2hv\") pod \"cilium-operator-6c4d7847fc-q2njb\" (UID: \"1549af23-943a-4e88-a923-20425b4cdf74\") " pod="kube-system/cilium-operator-6c4d7847fc-q2njb" Jan 30 13:15:34.458211 kubelet[2590]: E0130 13:15:34.458184 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:34.459180 containerd[1476]: time="2025-01-30T13:15:34.458742475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqppc,Uid:94035250-82ce-461b-9571-a1258e9c40ee,Namespace:kube-system,Attempt:0,}" Jan 30 13:15:34.462510 kubelet[2590]: E0130 13:15:34.462477 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:34.462993 containerd[1476]: time="2025-01-30T13:15:34.462963018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mwbx,Uid:1241a703-f12e-45a9-a38d-0fddcc34b1d3,Namespace:kube-system,Attempt:0,}" Jan 30 13:15:34.678806 containerd[1476]: time="2025-01-30T13:15:34.678593726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:15:34.678806 containerd[1476]: time="2025-01-30T13:15:34.678648960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:15:34.678806 containerd[1476]: time="2025-01-30T13:15:34.678678406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:34.678806 containerd[1476]: time="2025-01-30T13:15:34.678755141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:34.681680 containerd[1476]: time="2025-01-30T13:15:34.681579163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:15:34.681762 containerd[1476]: time="2025-01-30T13:15:34.681703379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:15:34.681762 containerd[1476]: time="2025-01-30T13:15:34.681723377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:34.683980 containerd[1476]: time="2025-01-30T13:15:34.682614389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:34.704601 kubelet[2590]: E0130 13:15:34.704537 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:34.704994 containerd[1476]: time="2025-01-30T13:15:34.704953477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q2njb,Uid:1549af23-943a-4e88-a923-20425b4cdf74,Namespace:kube-system,Attempt:0,}" Jan 30 13:15:34.705788 systemd[1]: Started cri-containerd-05d5be6533b5673197c5be25e386d108b9da61324d058cb88888bdb77cbce6f7.scope - libcontainer container 05d5be6533b5673197c5be25e386d108b9da61324d058cb88888bdb77cbce6f7. Jan 30 13:15:34.709459 systemd[1]: Started cri-containerd-9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e.scope - libcontainer container 9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e. Jan 30 13:15:34.737693 containerd[1476]: time="2025-01-30T13:15:34.737373084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqppc,Uid:94035250-82ce-461b-9571-a1258e9c40ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d5be6533b5673197c5be25e386d108b9da61324d058cb88888bdb77cbce6f7\"" Jan 30 13:15:34.738091 kubelet[2590]: E0130 13:15:34.738067 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:34.742190 containerd[1476]: time="2025-01-30T13:15:34.742015728Z" level=info msg="CreateContainer within sandbox \"05d5be6533b5673197c5be25e386d108b9da61324d058cb88888bdb77cbce6f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:15:34.744418 containerd[1476]: time="2025-01-30T13:15:34.744086299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:15:34.744418 containerd[1476]: time="2025-01-30T13:15:34.744136293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:15:34.744418 containerd[1476]: time="2025-01-30T13:15:34.744146472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:34.744418 containerd[1476]: time="2025-01-30T13:15:34.744213470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:15:34.745091 containerd[1476]: time="2025-01-30T13:15:34.744627666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mwbx,Uid:1241a703-f12e-45a9-a38d-0fddcc34b1d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\"" Jan 30 13:15:34.746056 kubelet[2590]: E0130 13:15:34.746022 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:34.747033 containerd[1476]: time="2025-01-30T13:15:34.746978519Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:15:34.767939 containerd[1476]: time="2025-01-30T13:15:34.767886752Z" level=info msg="CreateContainer within sandbox \"05d5be6533b5673197c5be25e386d108b9da61324d058cb88888bdb77cbce6f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94d5a613f6575e74170e069bc57a0ca526ed70ae2d3bc96c5026007b531c6eaf\"" Jan 30 13:15:34.768552 containerd[1476]: time="2025-01-30T13:15:34.768361092Z" level=info msg="StartContainer for \"94d5a613f6575e74170e069bc57a0ca526ed70ae2d3bc96c5026007b531c6eaf\"" Jan 30 13:15:34.768815 systemd[1]: Started cri-containerd-fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d.scope - libcontainer container fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d. Jan 30 13:15:34.809796 systemd[1]: Started cri-containerd-94d5a613f6575e74170e069bc57a0ca526ed70ae2d3bc96c5026007b531c6eaf.scope - libcontainer container 94d5a613f6575e74170e069bc57a0ca526ed70ae2d3bc96c5026007b531c6eaf. Jan 30 13:15:34.813839 update_engine[1462]: I20250130 13:15:34.813704 1462 update_attempter.cc:509] Updating boot flags... 
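The kube-proxy and cilium entries above trace the standard CRI lifecycle: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it. A schematic sketch of the same sequence against a hypothetical, trimmed-down interface (the real RPCs in k8s.io/cri-api take full metadata structs rather than plain strings):

    package main

    import (
        "context"
        "fmt"
    )

    // Hypothetical minimal shape of the three CRI calls visible in the log.
    type cri interface {
        RunPodSandbox(ctx context.Context, pod string) (string, error)
        CreateContainer(ctx context.Context, sandboxID, name string) (string, error)
        StartContainer(ctx context.Context, containerID string) error
    }

    // startPod mirrors the logged sequence: one sandbox per pod, then each
    // container is created inside it and started individually.
    func startPod(ctx context.Context, c cri, pod string, containers []string) error {
        sb, err := c.RunPodSandbox(ctx, pod)
        if err != nil {
            return err
        }
        for _, name := range containers {
            id, err := c.CreateContainer(ctx, sb, name)
            if err != nil {
                return err
            }
            if err := c.StartContainer(ctx, id); err != nil {
                return err
            }
        }
        return nil
    }

    type fake struct{ n int }

    func (f *fake) RunPodSandbox(_ context.Context, pod string) (string, error) {
        return "sandbox-" + pod, nil
    }
    func (f *fake) CreateContainer(_ context.Context, sb, name string) (string, error) {
        f.n++
        return fmt.Sprintf("%s/ctr-%d-%s", sb, f.n, name), nil
    }
    func (f *fake) StartContainer(_ context.Context, id string) error {
        fmt.Println("started", id)
        return nil
    }

    func main() {
        _ = startPod(context.Background(), &fake{}, "kube-proxy-bqppc", []string{"kube-proxy"})
    }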
Jan 30 13:15:34.822065 containerd[1476]: time="2025-01-30T13:15:34.822020711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q2njb,Uid:1549af23-943a-4e88-a923-20425b4cdf74,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d\"" Jan 30 13:15:34.824468 kubelet[2590]: E0130 13:15:34.823994 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:34.846266 containerd[1476]: time="2025-01-30T13:15:34.846213710Z" level=info msg="StartContainer for \"94d5a613f6575e74170e069bc57a0ca526ed70ae2d3bc96c5026007b531c6eaf\" returns successfully" Jan 30 13:15:34.850907 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2831) Jan 30 13:15:34.925040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2831) Jan 30 13:15:35.450227 kubelet[2590]: E0130 13:15:35.450202 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:35.458708 kubelet[2590]: I0130 13:15:35.458611 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqppc" podStartSLOduration=1.4585889650000001 podStartE2EDuration="1.458588965s" podCreationTimestamp="2025-01-30 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:15:35.457987404 +0000 UTC m=+7.108503533" watchObservedRunningTime="2025-01-30 13:15:35.458588965 +0000 UTC m=+7.109105075" Jan 30 13:15:35.509831 kubelet[2590]: E0130 13:15:35.509778 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:36.451999 kubelet[2590]: E0130 13:15:36.451966 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:39.795192 kubelet[2590]: E0130 13:15:39.795105 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:41.658567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20479848.mount: Deactivated successfully. 
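The pod_startup_latency_tracker line above encodes simple arithmetic: podStartSLOduration is the observed running time minus podCreationTimestamp, minus image-pull time (zero here, since firstStartedPulling and lastFinishedPulling are the zero time). A quick stdlib check of the kube-proxy numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching time.Time's default String() format, which is
        // what the kubelet prints in these log lines.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-30 13:15:34 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-01-30 13:15:35.458588965 +0000 UTC")
        // No image pull happened (zero-valued pull timestamps), so the SLO
        // duration is just observed - created.
        fmt.Println(observed.Sub(created)) // 1.458588965s
    }

The trailing "...650000001" in the logged value is a float64 rounding artifact of the same 1.458588965s figure.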
Jan 30 13:15:43.013812 kubelet[2590]: E0130 13:15:43.013774 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:43.465575 kubelet[2590]: E0130 13:15:43.465535 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:44.895479 containerd[1476]: time="2025-01-30T13:15:44.895415392Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:44.896305 containerd[1476]: time="2025-01-30T13:15:44.896254466Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:15:44.897455 containerd[1476]: time="2025-01-30T13:15:44.897418924Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:44.899244 containerd[1476]: time="2025-01-30T13:15:44.899203943Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.152178325s" Jan 30 13:15:44.899244 containerd[1476]: time="2025-01-30T13:15:44.899240061Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:15:44.906058 containerd[1476]: time="2025-01-30T13:15:44.906017122Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:15:44.907377 containerd[1476]: time="2025-01-30T13:15:44.907336963Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:15:44.926081 containerd[1476]: time="2025-01-30T13:15:44.926032521Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476\"" Jan 30 13:15:44.929148 containerd[1476]: time="2025-01-30T13:15:44.929117012Z" level=info msg="StartContainer for \"7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476\"" Jan 30 13:15:44.965806 systemd[1]: Started cri-containerd-7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476.scope - libcontainer container 7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476. 
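The pull above references the Cilium image by tag plus sha256 digest; the digest is what pins the content, because a registry digest is simply the SHA-256 of the image manifest bytes. A minimal stdlib sketch of verifying such a digest over raw manifest bytes:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "strings"
    )

    // verifyDigest checks manifest bytes against a "sha256:<hex>" reference,
    // which is how quay.io/cilium/cilium@sha256:06ce2b... stays immutable
    // even if the v1.12.5 tag were later repointed.
    func verifyDigest(manifest []byte, want string) bool {
        sum := sha256.Sum256(manifest)
        return strings.TrimPrefix(want, "sha256:") == fmt.Sprintf("%x", sum)
    }

    func main() {
        m := []byte(`{"schemaVersion":2}`) // placeholder manifest body
        fmt.Println(verifyDigest(m, fmt.Sprintf("sha256:%x", sha256.Sum256(m))))
    }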
Jan 30 13:15:44.995036 containerd[1476]: time="2025-01-30T13:15:44.994974675Z" level=info msg="StartContainer for \"7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476\" returns successfully" Jan 30 13:15:45.005438 systemd[1]: cri-containerd-7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476.scope: Deactivated successfully. Jan 30 13:15:45.049424 containerd[1476]: time="2025-01-30T13:15:45.049352003Z" level=info msg="shim disconnected" id=7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476 namespace=k8s.io Jan 30 13:15:45.049424 containerd[1476]: time="2025-01-30T13:15:45.049422256Z" level=warning msg="cleaning up after shim disconnected" id=7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476 namespace=k8s.io Jan 30 13:15:45.049424 containerd[1476]: time="2025-01-30T13:15:45.049432325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:15:45.472578 kubelet[2590]: E0130 13:15:45.472546 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:45.474951 containerd[1476]: time="2025-01-30T13:15:45.474906786Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:15:45.496272 containerd[1476]: time="2025-01-30T13:15:45.496208867Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50\"" Jan 30 13:15:45.496726 containerd[1476]: time="2025-01-30T13:15:45.496697088Z" level=info msg="StartContainer for \"fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50\"" Jan 30 13:15:45.525808 systemd[1]: Started cri-containerd-fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50.scope - libcontainer container fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50. Jan 30 13:15:45.598576 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:15:45.598989 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:15:45.599147 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:15:45.604928 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:15:45.605116 systemd[1]: cri-containerd-fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50.scope: Deactivated successfully. Jan 30 13:15:45.620434 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
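The apply-sysctl-overwrites init container above adjusts kernel parameters for the datapath, after which systemd-sysctl.service is restarted to re-apply the configured kernel variables. A sysctl write is nothing more than a write under /proc/sys; a minimal sketch, with rp_filter chosen only as an illustrative key since the exact parameters Cilium touches are version-specific:

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // setSysctl writes a value under /proc/sys, e.g.
    // setSysctl("net.ipv4.conf.all.rp_filter", "0").
    // Requires root and a writable /proc/sys.
    func setSysctl(key, value string) error {
        path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
            panic(err)
        }
    }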
Jan 30 13:15:45.637450 containerd[1476]: time="2025-01-30T13:15:45.637399871Z" level=info msg="StartContainer for \"fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50\" returns successfully" Jan 30 13:15:45.776732 containerd[1476]: time="2025-01-30T13:15:45.776558401Z" level=info msg="shim disconnected" id=fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50 namespace=k8s.io Jan 30 13:15:45.776732 containerd[1476]: time="2025-01-30T13:15:45.776624957Z" level=warning msg="cleaning up after shim disconnected" id=fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50 namespace=k8s.io Jan 30 13:15:45.776732 containerd[1476]: time="2025-01-30T13:15:45.776635386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:15:45.918949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476-rootfs.mount: Deactivated successfully. Jan 30 13:15:46.448689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218507149.mount: Deactivated successfully. Jan 30 13:15:46.476208 kubelet[2590]: E0130 13:15:46.476173 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:46.479201 containerd[1476]: time="2025-01-30T13:15:46.479134901Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:15:46.500751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333211261.mount: Deactivated successfully. Jan 30 13:15:46.505271 containerd[1476]: time="2025-01-30T13:15:46.505237810Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7\"" Jan 30 13:15:46.505945 containerd[1476]: time="2025-01-30T13:15:46.505921350Z" level=info msg="StartContainer for \"f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7\"" Jan 30 13:15:46.540813 systemd[1]: Started cri-containerd-f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7.scope - libcontainer container f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7. Jan 30 13:15:46.577305 systemd[1]: cri-containerd-f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7.scope: Deactivated successfully. 
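The mount-bpf-fs step that runs next in this chain exists to mount the BPF filesystem at /sys/fs/bpf, so that pinned maps and programs survive agent restarts. The operation reduces to a single mount(2) call; a minimal sketch (needs CAP_SYS_ADMIN):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent of `mount bpffs /sys/fs/bpf -t bpf`, which is what a
        // mount-bpf-fs style init step boils down to.
        if err := os.MkdirAll("/sys/fs/bpf", 0o755); err != nil {
            panic(err)
        }
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            panic(err)
        }
        fmt.Println("bpffs mounted; pinned maps survive agent restarts")
    }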
Jan 30 13:15:46.578544 containerd[1476]: time="2025-01-30T13:15:46.578489493Z" level=info msg="StartContainer for \"f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7\" returns successfully" Jan 30 13:15:46.692492 containerd[1476]: time="2025-01-30T13:15:46.692426606Z" level=info msg="shim disconnected" id=f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7 namespace=k8s.io Jan 30 13:15:46.692492 containerd[1476]: time="2025-01-30T13:15:46.692483825Z" level=warning msg="cleaning up after shim disconnected" id=f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7 namespace=k8s.io Jan 30 13:15:46.692492 containerd[1476]: time="2025-01-30T13:15:46.692492952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:15:46.710806 containerd[1476]: time="2025-01-30T13:15:46.710357965Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:15:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:15:47.381365 containerd[1476]: time="2025-01-30T13:15:47.381310595Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:47.382106 containerd[1476]: time="2025-01-30T13:15:47.382071249Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:15:47.384225 containerd[1476]: time="2025-01-30T13:15:47.384190034Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:15:47.385604 containerd[1476]: time="2025-01-30T13:15:47.385571799Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.47952521s" Jan 30 13:15:47.385686 containerd[1476]: time="2025-01-30T13:15:47.385604661Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:15:47.387289 containerd[1476]: time="2025-01-30T13:15:47.387242459Z" level=info msg="CreateContainer within sandbox \"fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:15:47.402036 containerd[1476]: time="2025-01-30T13:15:47.401997276Z" level=info msg="CreateContainer within sandbox \"fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\"" Jan 30 13:15:47.402467 containerd[1476]: time="2025-01-30T13:15:47.402445752Z" level=info msg="StartContainer for \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\"" Jan 30 13:15:47.433794 systemd[1]: Started 
cri-containerd-945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60.scope - libcontainer container 945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60. Jan 30 13:15:47.459083 containerd[1476]: time="2025-01-30T13:15:47.459028278Z" level=info msg="StartContainer for \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\" returns successfully" Jan 30 13:15:47.479791 kubelet[2590]: E0130 13:15:47.479747 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:47.482976 kubelet[2590]: E0130 13:15:47.482934 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:47.484784 containerd[1476]: time="2025-01-30T13:15:47.483921049Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:15:47.503030 containerd[1476]: time="2025-01-30T13:15:47.502976195Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810\"" Jan 30 13:15:47.503697 containerd[1476]: time="2025-01-30T13:15:47.503448444Z" level=info msg="StartContainer for \"dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810\"" Jan 30 13:15:47.538912 systemd[1]: Started cri-containerd-dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810.scope - libcontainer container dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810. Jan 30 13:15:47.573543 systemd[1]: cri-containerd-dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810.scope: Deactivated successfully. 
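With the systemd cgroup driver in use here, each container runs as a transient scope unit named cri-containerd-<container-id>.scope (visible in the "Started cri-containerd-...scope" lines), so ordinary systemd tooling can inspect its state and resource usage. A small sketch shelling out to systemctl, using the cilium-operator container id from the log above; the property names are standard systemd accounting fields, though which ones are populated depends on the unit's accounting settings:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        id := "945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60"
        unit := fmt.Sprintf("cri-containerd-%s.scope", id)
        out, err := exec.Command("systemctl", "show", unit,
            "--property=ActiveState,CPUUsageNSec,MemoryCurrent").CombinedOutput()
        if err != nil {
            fmt.Println("systemctl failed:", err)
        }
        fmt.Print(string(out))
    }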
Jan 30 13:15:47.578053 containerd[1476]: time="2025-01-30T13:15:47.578002944Z" level=info msg="StartContainer for \"dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810\" returns successfully" Jan 30 13:15:47.820285 containerd[1476]: time="2025-01-30T13:15:47.820124431Z" level=info msg="shim disconnected" id=dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810 namespace=k8s.io Jan 30 13:15:47.820285 containerd[1476]: time="2025-01-30T13:15:47.820216364Z" level=warning msg="cleaning up after shim disconnected" id=dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810 namespace=k8s.io Jan 30 13:15:47.820285 containerd[1476]: time="2025-01-30T13:15:47.820230271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:15:48.486558 kubelet[2590]: E0130 13:15:48.486528 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:48.487252 kubelet[2590]: E0130 13:15:48.486587 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:48.488885 containerd[1476]: time="2025-01-30T13:15:48.488840804Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:15:48.501347 kubelet[2590]: I0130 13:15:48.501264 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-q2njb" podStartSLOduration=1.940436338 podStartE2EDuration="14.501236836s" podCreationTimestamp="2025-01-30 13:15:34 +0000 UTC" firstStartedPulling="2025-01-30 13:15:34.825411578 +0000 UTC m=+6.475927677" lastFinishedPulling="2025-01-30 13:15:47.386212066 +0000 UTC m=+19.036728175" observedRunningTime="2025-01-30 13:15:47.507268136 +0000 UTC m=+19.157784245" watchObservedRunningTime="2025-01-30 13:15:48.501236836 +0000 UTC m=+20.151752966" Jan 30 13:15:48.509511 containerd[1476]: time="2025-01-30T13:15:48.509471937Z" level=info msg="CreateContainer within sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\"" Jan 30 13:15:48.509980 containerd[1476]: time="2025-01-30T13:15:48.509912779Z" level=info msg="StartContainer for \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\"" Jan 30 13:15:48.593766 systemd[1]: Started cri-containerd-76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6.scope - libcontainer container 76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6. Jan 30 13:15:48.624587 containerd[1476]: time="2025-01-30T13:15:48.624538212Z" level=info msg="StartContainer for \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\" returns successfully" Jan 30 13:15:48.765425 kubelet[2590]: I0130 13:15:48.764972 2590 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:15:48.807469 systemd[1]: Created slice kubepods-burstable-pod6eeb6726_e4bd_4270_b243_c8e85d8972a2.slice - libcontainer container kubepods-burstable-pod6eeb6726_e4bd_4270_b243_c8e85d8972a2.slice. 
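The kubepods slice names that follow (for example kubepods-burstable-pod6eeb6726_e4bd_4270_b243_c8e85d8972a2.slice) encode the pod's QoS class plus its UID, with the dashes in the UID escaped to underscores to satisfy systemd unit-name rules; guaranteed pods sit directly under kubepods without a QoS segment. A small sketch reproducing the naming seen in this log (scheme inferred from the log itself and the systemd driver's conventions):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceFor reproduces the kubepods slice naming visible in the log.
    func sliceFor(qos, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        if qos == "guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
    }

    func main() {
        fmt.Println(sliceFor("burstable", "6eeb6726-e4bd-4270-b243-c8e85d8972a2"))
        // kubepods-burstable-pod6eeb6726_e4bd_4270_b243_c8e85d8972a2.slice
    }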
Jan 30 13:15:48.814512 systemd[1]: Created slice kubepods-burstable-pod8b7b4391_b9f3_4244_b6a7_d00f5e5f48fb.slice - libcontainer container kubepods-burstable-pod8b7b4391_b9f3_4244_b6a7_d00f5e5f48fb.slice. Jan 30 13:15:48.855412 kubelet[2590]: I0130 13:15:48.855370 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx825\" (UniqueName: \"kubernetes.io/projected/8b7b4391-b9f3-4244-b6a7-d00f5e5f48fb-kube-api-access-lx825\") pod \"coredns-668d6bf9bc-zwknq\" (UID: \"8b7b4391-b9f3-4244-b6a7-d00f5e5f48fb\") " pod="kube-system/coredns-668d6bf9bc-zwknq" Jan 30 13:15:48.855412 kubelet[2590]: I0130 13:15:48.855406 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6k94\" (UniqueName: \"kubernetes.io/projected/6eeb6726-e4bd-4270-b243-c8e85d8972a2-kube-api-access-k6k94\") pod \"coredns-668d6bf9bc-8st7m\" (UID: \"6eeb6726-e4bd-4270-b243-c8e85d8972a2\") " pod="kube-system/coredns-668d6bf9bc-8st7m" Jan 30 13:15:48.855571 kubelet[2590]: I0130 13:15:48.855422 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeb6726-e4bd-4270-b243-c8e85d8972a2-config-volume\") pod \"coredns-668d6bf9bc-8st7m\" (UID: \"6eeb6726-e4bd-4270-b243-c8e85d8972a2\") " pod="kube-system/coredns-668d6bf9bc-8st7m" Jan 30 13:15:48.855571 kubelet[2590]: I0130 13:15:48.855437 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b7b4391-b9f3-4244-b6a7-d00f5e5f48fb-config-volume\") pod \"coredns-668d6bf9bc-zwknq\" (UID: \"8b7b4391-b9f3-4244-b6a7-d00f5e5f48fb\") " pod="kube-system/coredns-668d6bf9bc-zwknq" Jan 30 13:15:48.919053 systemd[1]: run-containerd-runc-k8s.io-76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6-runc.1ULN6h.mount: Deactivated successfully. 
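The reconciler_common and desired_state_of_world_populator lines throughout this section follow the kubelet volume manager's pattern: a populator fills a "desired state of world" from pod specs, and a reconciler diffs it against the actual state, attaching and mounting whatever is missing. A deliberately schematic sketch of that diff loop (the types here are illustrative, not the kubelet's):

    package main

    import "fmt"

    // Illustrative desired/actual-state reconcile step, after the pattern
    // in kubelet's volumemanager; not the real types.
    func reconcile(desired, actual map[string]bool, mount func(string)) {
        for vol := range desired {
            if !actual[vol] {
                mount(vol) // e.g. VerifyControllerAttachedVolume, then mount
                actual[vol] = true
            }
        }
    }

    func main() {
        desired := map[string]bool{"config-volume": true, "kube-api-access-lx825": true}
        actual := map[string]bool{}
        reconcile(desired, actual, func(v string) { fmt.Println("mounting", v) })
    }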
Jan 30 13:15:49.114318 kubelet[2590]: E0130 13:15:49.114215 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:49.115158 containerd[1476]: time="2025-01-30T13:15:49.115110809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8st7m,Uid:6eeb6726-e4bd-4270-b243-c8e85d8972a2,Namespace:kube-system,Attempt:0,}"
Jan 30 13:15:49.118974 kubelet[2590]: E0130 13:15:49.118935 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:49.120535 containerd[1476]: time="2025-01-30T13:15:49.120495885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwknq,Uid:8b7b4391-b9f3-4244-b6a7-d00f5e5f48fb,Namespace:kube-system,Attempt:0,}"
Jan 30 13:15:49.503623 kubelet[2590]: E0130 13:15:49.503584 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:49.522609 kubelet[2590]: I0130 13:15:49.522193 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2mwbx" podStartSLOduration=5.362959683 podStartE2EDuration="15.52217269s" podCreationTimestamp="2025-01-30 13:15:34 +0000 UTC" firstStartedPulling="2025-01-30 13:15:34.746613586 +0000 UTC m=+6.397129695" lastFinishedPulling="2025-01-30 13:15:44.905826593 +0000 UTC m=+16.556342702" observedRunningTime="2025-01-30 13:15:49.521940553 +0000 UTC m=+21.172456662" watchObservedRunningTime="2025-01-30 13:15:49.52217269 +0000 UTC m=+21.172688799"
Jan 30 13:15:50.500910 kubelet[2590]: E0130 13:15:50.500867 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:50.811114 systemd-networkd[1405]: cilium_host: Link UP
Jan 30 13:15:50.811856 systemd-networkd[1405]: cilium_net: Link UP
Jan 30 13:15:50.812755 systemd-networkd[1405]: cilium_net: Gained carrier
Jan 30 13:15:50.813008 systemd-networkd[1405]: cilium_host: Gained carrier
Jan 30 13:15:50.813192 systemd-networkd[1405]: cilium_net: Gained IPv6LL
Jan 30 13:15:50.813394 systemd-networkd[1405]: cilium_host: Gained IPv6LL
Jan 30 13:15:50.909059 systemd-networkd[1405]: cilium_vxlan: Link UP
Jan 30 13:15:50.909072 systemd-networkd[1405]: cilium_vxlan: Gained carrier
Jan 30 13:15:51.129678 kernel: NET: Registered PF_ALG protocol family
Jan 30 13:15:51.502704 kubelet[2590]: E0130 13:15:51.502573 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:51.814120 systemd-networkd[1405]: lxc_health: Link UP
Jan 30 13:15:51.822690 systemd-networkd[1405]: lxc_health: Gained carrier
Jan 30 13:15:52.171137 systemd-networkd[1405]: lxc08d80f0cac4e: Link UP
Jan 30 13:15:52.178680 kernel: eth0: renamed from tmp58fb3
Jan 30 13:15:52.186141 systemd-networkd[1405]: lxc08d80f0cac4e: Gained carrier
Jan 30 13:15:52.190710 systemd-networkd[1405]: lxcc7e1cb762be5: Link UP
Jan 30 13:15:52.199738 kernel: eth0: renamed from tmp1994f
Jan 30 13:15:52.207644 systemd-networkd[1405]: lxcc7e1cb762be5: Gained carrier
Jan 30 13:15:52.505440 kubelet[2590]: E0130 13:15:52.504784 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:52.791811 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL
Jan 30 13:15:52.919789 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Jan 30 13:15:53.367784 systemd-networkd[1405]: lxcc7e1cb762be5: Gained IPv6LL
Jan 30 13:15:53.505975 kubelet[2590]: E0130 13:15:53.505940 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:53.687826 systemd-networkd[1405]: lxc08d80f0cac4e: Gained IPv6LL
Jan 30 13:15:54.507092 kubelet[2590]: E0130 13:15:54.507060 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:55.537216 containerd[1476]: time="2025-01-30T13:15:55.536552985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:15:55.537216 containerd[1476]: time="2025-01-30T13:15:55.537172019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:15:55.537216 containerd[1476]: time="2025-01-30T13:15:55.537188159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:55.538094 containerd[1476]: time="2025-01-30T13:15:55.537268331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:55.539041 containerd[1476]: time="2025-01-30T13:15:55.538856087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:15:55.539041 containerd[1476]: time="2025-01-30T13:15:55.538893098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:15:55.539041 containerd[1476]: time="2025-01-30T13:15:55.538902475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:55.539041 containerd[1476]: time="2025-01-30T13:15:55.538970423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:55.557275 systemd[1]: run-containerd-runc-k8s.io-58fb3a50e984d63c2119fea353e7be0c45f7fc9bc9d3f84157e71e42031d8809-runc.SGQCcf.mount: Deactivated successfully.
Jan 30 13:15:55.566801 systemd[1]: Started cri-containerd-1994f572556c1a7b258ee28c12747edc1be941930634a4ffefbfb37273396ee8.scope - libcontainer container 1994f572556c1a7b258ee28c12747edc1be941930634a4ffefbfb37273396ee8.
Jan 30 13:15:55.568359 systemd[1]: Started cri-containerd-58fb3a50e984d63c2119fea353e7be0c45f7fc9bc9d3f84157e71e42031d8809.scope - libcontainer container 58fb3a50e984d63c2119fea353e7be0c45f7fc9bc9d3f84157e71e42031d8809.
Jan 30 13:15:55.579830 systemd-resolved[1359]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:15:55.581446 systemd-resolved[1359]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:15:55.605392 containerd[1476]: time="2025-01-30T13:15:55.605345846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zwknq,Uid:8b7b4391-b9f3-4244-b6a7-d00f5e5f48fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1994f572556c1a7b258ee28c12747edc1be941930634a4ffefbfb37273396ee8\"" Jan 30 13:15:55.605957 kubelet[2590]: E0130 13:15:55.605926 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:55.609091 containerd[1476]: time="2025-01-30T13:15:55.609062908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8st7m,Uid:6eeb6726-e4bd-4270-b243-c8e85d8972a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"58fb3a50e984d63c2119fea353e7be0c45f7fc9bc9d3f84157e71e42031d8809\"" Jan 30 13:15:55.610202 kubelet[2590]: E0130 13:15:55.610087 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:55.610257 containerd[1476]: time="2025-01-30T13:15:55.610133542Z" level=info msg="CreateContainer within sandbox \"1994f572556c1a7b258ee28c12747edc1be941930634a4ffefbfb37273396ee8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:15:55.612279 containerd[1476]: time="2025-01-30T13:15:55.612257889Z" level=info msg="CreateContainer within sandbox \"58fb3a50e984d63c2119fea353e7be0c45f7fc9bc9d3f84157e71e42031d8809\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:15:55.785235 containerd[1476]: time="2025-01-30T13:15:55.785186911Z" level=info msg="CreateContainer within sandbox \"1994f572556c1a7b258ee28c12747edc1be941930634a4ffefbfb37273396ee8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42d0d8e57f6f053cf7979e7bf91d8d1c4af0f6477b269c6c0f1250035a06e57f\"" Jan 30 13:15:55.785706 containerd[1476]: time="2025-01-30T13:15:55.785682894Z" level=info msg="StartContainer for \"42d0d8e57f6f053cf7979e7bf91d8d1c4af0f6477b269c6c0f1250035a06e57f\"" Jan 30 13:15:55.810298 containerd[1476]: time="2025-01-30T13:15:55.810200850Z" level=info msg="CreateContainer within sandbox \"58fb3a50e984d63c2119fea353e7be0c45f7fc9bc9d3f84157e71e42031d8809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e9f47840a4d77fee0a6e70afac47c02c4861f02921de2e3bbbf077b47160364\"" Jan 30 13:15:55.811008 containerd[1476]: time="2025-01-30T13:15:55.810711019Z" level=info msg="StartContainer for \"0e9f47840a4d77fee0a6e70afac47c02c4861f02921de2e3bbbf077b47160364\"" Jan 30 13:15:55.814017 systemd[1]: Started cri-containerd-42d0d8e57f6f053cf7979e7bf91d8d1c4af0f6477b269c6c0f1250035a06e57f.scope - libcontainer container 42d0d8e57f6f053cf7979e7bf91d8d1c4af0f6477b269c6c0f1250035a06e57f. Jan 30 13:15:55.837475 systemd[1]: Started cri-containerd-0e9f47840a4d77fee0a6e70afac47c02c4861f02921de2e3bbbf077b47160364.scope - libcontainer container 0e9f47840a4d77fee0a6e70afac47c02c4861f02921de2e3bbbf077b47160364. 
Jan 30 13:15:55.845990 containerd[1476]: time="2025-01-30T13:15:55.845941489Z" level=info msg="StartContainer for \"42d0d8e57f6f053cf7979e7bf91d8d1c4af0f6477b269c6c0f1250035a06e57f\" returns successfully" Jan 30 13:15:55.867026 containerd[1476]: time="2025-01-30T13:15:55.866950182Z" level=info msg="StartContainer for \"0e9f47840a4d77fee0a6e70afac47c02c4861f02921de2e3bbbf077b47160364\" returns successfully" Jan 30 13:15:56.517174 kubelet[2590]: E0130 13:15:56.516962 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:56.519602 kubelet[2590]: E0130 13:15:56.519557 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:56.823829 kubelet[2590]: I0130 13:15:56.823617 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zwknq" podStartSLOduration=22.823590339 podStartE2EDuration="22.823590339s" podCreationTimestamp="2025-01-30 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:15:56.66363325 +0000 UTC m=+28.314149359" watchObservedRunningTime="2025-01-30 13:15:56.823590339 +0000 UTC m=+28.474106448" Jan 30 13:15:56.837535 kubelet[2590]: I0130 13:15:56.837474 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8st7m" podStartSLOduration=22.837453033 podStartE2EDuration="22.837453033s" podCreationTimestamp="2025-01-30 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:15:56.83659558 +0000 UTC m=+28.487111689" watchObservedRunningTime="2025-01-30 13:15:56.837453033 +0000 UTC m=+28.487969142" Jan 30 13:15:57.358894 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:37436.service - OpenSSH per-connection server daemon (10.0.0.1:37436). Jan 30 13:15:57.401074 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 37436 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:57.402669 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:57.406604 systemd-logind[1459]: New session 10 of user core. Jan 30 13:15:57.414935 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:15:57.520572 kubelet[2590]: E0130 13:15:57.520528 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:57.520814 kubelet[2590]: E0130 13:15:57.520786 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:57.640100 sshd[3997]: Connection closed by 10.0.0.1 port 37436 Jan 30 13:15:57.640406 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:57.643962 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:37436.service: Deactivated successfully. Jan 30 13:15:57.646002 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:15:57.646726 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. 
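The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration appears to be the end-to-end startup time minus the time spent pulling images. For cilium-2mwbx earlier, 15.52217269 - (16.556342702 - 6.397129695) = 5.362959683; for the two coredns pods, the pull timestamps are the zero time (nothing was pulled), so SLO and E2E durations coincide at ~22.8 s. A quick check with the logged values (the relation is inferred from these entries, not quoted from kubelet):

    # m=+<seconds> monotonic offsets from the cilium-2mwbx entry above
    e2e = 15.52217269           # podStartE2EDuration (s)
    first_pull = 6.397129695    # firstStartedPulling, m=+ offset (s)
    last_pull = 16.556342702    # lastFinishedPulling, m=+ offset (s)
    print(f"{e2e - (last_pull - first_pull):.9f}")  # -> 5.362959683, as logged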
Jan 30 13:15:57.647542 systemd-logind[1459]: Removed session 10. Jan 30 13:15:58.521966 kubelet[2590]: E0130 13:15:58.521915 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:58.522359 kubelet[2590]: E0130 13:15:58.522030 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:02.651560 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Jan 30 13:16:02.687967 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:02.689279 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:02.692987 systemd-logind[1459]: New session 11 of user core. Jan 30 13:16:02.702785 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:16:02.805488 sshd[4017]: Connection closed by 10.0.0.1 port 37452 Jan 30 13:16:02.805873 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:02.809603 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:37452.service: Deactivated successfully. Jan 30 13:16:02.811436 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:16:02.812083 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:16:02.813008 systemd-logind[1459]: Removed session 11. Jan 30 13:16:07.820634 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:52014.service - OpenSSH per-connection server daemon (10.0.0.1:52014). Jan 30 13:16:07.856355 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 52014 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:07.857630 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:07.861121 systemd-logind[1459]: New session 12 of user core. Jan 30 13:16:07.871786 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:16:07.973907 sshd[4035]: Connection closed by 10.0.0.1 port 52014 Jan 30 13:16:07.974254 sshd-session[4033]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:07.977681 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:52014.service: Deactivated successfully. Jan 30 13:16:07.979437 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:16:07.980025 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:16:07.980852 systemd-logind[1459]: Removed session 12. Jan 30 13:16:12.985640 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:52018.service - OpenSSH per-connection server daemon (10.0.0.1:52018). Jan 30 13:16:13.022490 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 52018 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:13.024044 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:13.027900 systemd-logind[1459]: New session 13 of user core. Jan 30 13:16:13.040828 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 13:16:13.149105 sshd[4050]: Connection closed by 10.0.0.1 port 52018 Jan 30 13:16:13.149545 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:13.172083 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:52018.service: Deactivated successfully. Jan 30 13:16:13.174180 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:16:13.176380 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:16:13.190883 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:52022.service - OpenSSH per-connection server daemon (10.0.0.1:52022). Jan 30 13:16:13.191996 systemd-logind[1459]: Removed session 13. Jan 30 13:16:13.223302 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 52022 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:13.224718 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:13.228897 systemd-logind[1459]: New session 14 of user core. Jan 30 13:16:13.238787 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:16:13.380500 sshd[4065]: Connection closed by 10.0.0.1 port 52022 Jan 30 13:16:13.381491 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:13.392166 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:52022.service: Deactivated successfully. Jan 30 13:16:13.397774 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:16:13.401811 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:16:13.410497 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:52026.service - OpenSSH per-connection server daemon (10.0.0.1:52026). Jan 30 13:16:13.412329 systemd-logind[1459]: Removed session 14. Jan 30 13:16:13.457392 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 52026 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:13.459077 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:13.463056 systemd-logind[1459]: New session 15 of user core. Jan 30 13:16:13.470780 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:16:13.583474 sshd[4077]: Connection closed by 10.0.0.1 port 52026 Jan 30 13:16:13.583795 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:13.587604 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:52026.service: Deactivated successfully. Jan 30 13:16:13.589783 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:16:13.590505 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:16:13.591462 systemd-logind[1459]: Removed session 15. Jan 30 13:16:18.596755 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:35854.service - OpenSSH per-connection server daemon (10.0.0.1:35854). Jan 30 13:16:18.632145 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 35854 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:18.633446 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:18.637161 systemd-logind[1459]: New session 16 of user core. Jan 30 13:16:18.646820 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 30 13:16:18.752822 sshd[4092]: Connection closed by 10.0.0.1 port 35854 Jan 30 13:16:18.753196 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:18.756829 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:35854.service: Deactivated successfully. Jan 30 13:16:18.759051 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:16:18.759743 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:16:18.760640 systemd-logind[1459]: Removed session 16. Jan 30 13:16:23.764379 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:35870.service - OpenSSH per-connection server daemon (10.0.0.1:35870). Jan 30 13:16:23.800090 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 35870 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:23.801749 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:23.805486 systemd-logind[1459]: New session 17 of user core. Jan 30 13:16:23.816778 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:16:23.920247 sshd[4107]: Connection closed by 10.0.0.1 port 35870 Jan 30 13:16:23.920591 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:23.934594 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:35870.service: Deactivated successfully. Jan 30 13:16:23.936587 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:16:23.938176 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:16:23.943924 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:35876.service - OpenSSH per-connection server daemon (10.0.0.1:35876). Jan 30 13:16:23.944941 systemd-logind[1459]: Removed session 17. Jan 30 13:16:23.975329 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 35876 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:23.976612 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:23.980341 systemd-logind[1459]: New session 18 of user core. Jan 30 13:16:23.984762 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:16:24.140911 sshd[4122]: Connection closed by 10.0.0.1 port 35876 Jan 30 13:16:24.141258 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:24.162317 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:35876.service: Deactivated successfully. Jan 30 13:16:24.163976 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:16:24.165464 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:16:24.166667 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:35892.service - OpenSSH per-connection server daemon (10.0.0.1:35892). Jan 30 13:16:24.167595 systemd-logind[1459]: Removed session 18. Jan 30 13:16:24.206908 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 35892 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:24.208153 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:24.211835 systemd-logind[1459]: New session 19 of user core. Jan 30 13:16:24.221759 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 13:16:24.929422 sshd[4135]: Connection closed by 10.0.0.1 port 35892 Jan 30 13:16:24.930473 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:24.940135 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:35892.service: Deactivated successfully. Jan 30 13:16:24.943292 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:16:24.946043 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:16:24.958461 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:35902.service - OpenSSH per-connection server daemon (10.0.0.1:35902). Jan 30 13:16:24.959516 systemd-logind[1459]: Removed session 19. Jan 30 13:16:24.990864 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 35902 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:24.992293 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:24.995889 systemd-logind[1459]: New session 20 of user core. Jan 30 13:16:25.005776 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:16:25.241393 sshd[4154]: Connection closed by 10.0.0.1 port 35902 Jan 30 13:16:25.242432 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:25.249958 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:35902.service: Deactivated successfully. Jan 30 13:16:25.251695 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:16:25.253046 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:16:25.254251 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:35908.service - OpenSSH per-connection server daemon (10.0.0.1:35908). Jan 30 13:16:25.254924 systemd-logind[1459]: Removed session 20. Jan 30 13:16:25.290057 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 35908 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:25.291442 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:25.295001 systemd-logind[1459]: New session 21 of user core. Jan 30 13:16:25.302771 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:16:25.405545 sshd[4167]: Connection closed by 10.0.0.1 port 35908 Jan 30 13:16:25.405876 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:25.409220 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:35908.service: Deactivated successfully. Jan 30 13:16:25.411049 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:16:25.411625 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:16:25.412546 systemd-logind[1459]: Removed session 21. Jan 30 13:16:30.417332 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:49548.service - OpenSSH per-connection server daemon (10.0.0.1:49548). Jan 30 13:16:30.453185 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 49548 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:30.454451 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:30.458086 systemd-logind[1459]: New session 22 of user core. Jan 30 13:16:30.467779 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 30 13:16:30.567997 sshd[4184]: Connection closed by 10.0.0.1 port 49548 Jan 30 13:16:30.568343 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:30.571911 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:49548.service: Deactivated successfully. Jan 30 13:16:30.573954 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:16:30.574516 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:16:30.575407 systemd-logind[1459]: Removed session 22. Jan 30 13:16:35.581717 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:49552.service - OpenSSH per-connection server daemon (10.0.0.1:49552). Jan 30 13:16:35.623093 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 49552 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:35.624981 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:35.629168 systemd-logind[1459]: New session 23 of user core. Jan 30 13:16:35.641814 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:16:35.745570 sshd[4203]: Connection closed by 10.0.0.1 port 49552 Jan 30 13:16:35.745987 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:35.750354 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:49552.service: Deactivated successfully. Jan 30 13:16:35.752307 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:16:35.753124 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:16:35.754168 systemd-logind[1459]: Removed session 23. Jan 30 13:16:40.757485 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:36132.service - OpenSSH per-connection server daemon (10.0.0.1:36132). Jan 30 13:16:40.793938 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 36132 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:40.795263 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:40.798908 systemd-logind[1459]: New session 24 of user core. Jan 30 13:16:40.814767 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:16:40.918890 sshd[4217]: Connection closed by 10.0.0.1 port 36132 Jan 30 13:16:40.919214 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:40.923012 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:36132.service: Deactivated successfully. Jan 30 13:16:40.925162 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:16:40.925776 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:16:40.926529 systemd-logind[1459]: Removed session 24. Jan 30 13:16:45.932866 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:36148.service - OpenSSH per-connection server daemon (10.0.0.1:36148). Jan 30 13:16:45.968775 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 36148 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:45.970176 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:45.973972 systemd-logind[1459]: New session 25 of user core. Jan 30 13:16:45.985807 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 30 13:16:46.089786 sshd[4232]: Connection closed by 10.0.0.1 port 36148 Jan 30 13:16:46.090189 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Jan 30 13:16:46.100498 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:36148.service: Deactivated successfully. Jan 30 13:16:46.102368 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:16:46.104240 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:16:46.105882 systemd[1]: Started sshd@25-10.0.0.150:22-10.0.0.1:36162.service - OpenSSH per-connection server daemon (10.0.0.1:36162). Jan 30 13:16:46.107427 systemd-logind[1459]: Removed session 25. Jan 30 13:16:46.142254 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 36162 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:16:46.143550 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:16:46.147045 systemd-logind[1459]: New session 26 of user core. Jan 30 13:16:46.156770 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:16:47.497139 containerd[1476]: time="2025-01-30T13:16:47.496949947Z" level=info msg="StopContainer for \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\" with timeout 30 (s)" Jan 30 13:16:47.498081 containerd[1476]: time="2025-01-30T13:16:47.498020632Z" level=info msg="Stop container \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\" with signal terminated" Jan 30 13:16:47.503459 containerd[1476]: time="2025-01-30T13:16:47.503402593Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:16:47.506845 containerd[1476]: time="2025-01-30T13:16:47.506807708Z" level=info msg="StopContainer for \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\" with timeout 2 (s)" Jan 30 13:16:47.507017 containerd[1476]: time="2025-01-30T13:16:47.506997110Z" level=info msg="Stop container \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\" with signal terminated" Jan 30 13:16:47.508908 systemd[1]: cri-containerd-945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60.scope: Deactivated successfully. Jan 30 13:16:47.514773 systemd-networkd[1405]: lxc_health: Link DOWN Jan 30 13:16:47.514780 systemd-networkd[1405]: lxc_health: Lost carrier Jan 30 13:16:47.530469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60-rootfs.mount: Deactivated successfully. Jan 30 13:16:47.540303 containerd[1476]: time="2025-01-30T13:16:47.540234535Z" level=info msg="shim disconnected" id=945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60 namespace=k8s.io Jan 30 13:16:47.540478 containerd[1476]: time="2025-01-30T13:16:47.540304379Z" level=warning msg="cleaning up after shim disconnected" id=945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60 namespace=k8s.io Jan 30 13:16:47.540478 containerd[1476]: time="2025-01-30T13:16:47.540314849Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:47.543268 systemd[1]: cri-containerd-76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6.scope: Deactivated successfully. 
Jan 30 13:16:47.543538 systemd[1]: cri-containerd-76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6.scope: Consumed 6.690s CPU time. Jan 30 13:16:47.557911 containerd[1476]: time="2025-01-30T13:16:47.557862993Z" level=info msg="StopContainer for \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\" returns successfully" Jan 30 13:16:47.562576 containerd[1476]: time="2025-01-30T13:16:47.562464122Z" level=info msg="StopPodSandbox for \"fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d\"" Jan 30 13:16:47.563370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6-rootfs.mount: Deactivated successfully. Jan 30 13:16:47.566167 containerd[1476]: time="2025-01-30T13:16:47.562521912Z" level=info msg="Container to stop \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.567336 containerd[1476]: time="2025-01-30T13:16:47.567289139Z" level=info msg="shim disconnected" id=76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6 namespace=k8s.io Jan 30 13:16:47.567477 containerd[1476]: time="2025-01-30T13:16:47.567408327Z" level=warning msg="cleaning up after shim disconnected" id=76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6 namespace=k8s.io Jan 30 13:16:47.567477 containerd[1476]: time="2025-01-30T13:16:47.567422775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:47.567856 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d-shm.mount: Deactivated successfully. Jan 30 13:16:47.572783 systemd[1]: cri-containerd-fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d.scope: Deactivated successfully. 
Jan 30 13:16:47.583833 containerd[1476]: time="2025-01-30T13:16:47.583780994Z" level=info msg="StopContainer for \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\" returns successfully" Jan 30 13:16:47.584236 containerd[1476]: time="2025-01-30T13:16:47.584214183Z" level=info msg="StopPodSandbox for \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\"" Jan 30 13:16:47.585726 containerd[1476]: time="2025-01-30T13:16:47.584377845Z" level=info msg="Container to stop \"7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.585726 containerd[1476]: time="2025-01-30T13:16:47.584414936Z" level=info msg="Container to stop \"f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.585726 containerd[1476]: time="2025-01-30T13:16:47.584423823Z" level=info msg="Container to stop \"dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.585726 containerd[1476]: time="2025-01-30T13:16:47.584432379Z" level=info msg="Container to stop \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.585726 containerd[1476]: time="2025-01-30T13:16:47.584441437Z" level=info msg="Container to stop \"fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.586169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e-shm.mount: Deactivated successfully. Jan 30 13:16:47.592619 systemd[1]: cri-containerd-9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e.scope: Deactivated successfully. 
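The stop sequence above follows the usual CRI shape: the runtime delivers SIGTERM first ("Stop container ... with signal terminated") and escalates to SIGKILL if the container outlives the timeout (30 s and 2 s in the two StopContainer requests above); containers in the sandbox that have already exited only produce the "must be in running or unknown state, current state CONTAINER_EXITED" info lines. A rough sketch of that terminate-then-kill pattern with plain subprocesses (illustrative, not containerd's implementation):

    import signal
    import subprocess

    def stop(proc: subprocess.Popen, timeout: float = 30.0) -> None:
        proc.send_signal(signal.SIGTERM)   # graceful stop request
        try:
            proc.wait(timeout=timeout)     # wait out the stop timeout
        except subprocess.TimeoutExpired:
            proc.kill()                    # escalate to SIGKILL
            proc.wait()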
Jan 30 13:16:47.605997 containerd[1476]: time="2025-01-30T13:16:47.605945117Z" level=info msg="shim disconnected" id=fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d namespace=k8s.io
Jan 30 13:16:47.606440 containerd[1476]: time="2025-01-30T13:16:47.606272421Z" level=warning msg="cleaning up after shim disconnected" id=fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d namespace=k8s.io
Jan 30 13:16:47.606440 containerd[1476]: time="2025-01-30T13:16:47.606288182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:16:47.613990 containerd[1476]: time="2025-01-30T13:16:47.613920081Z" level=info msg="shim disconnected" id=9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e namespace=k8s.io
Jan 30 13:16:47.613990 containerd[1476]: time="2025-01-30T13:16:47.613971189Z" level=warning msg="cleaning up after shim disconnected" id=9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e namespace=k8s.io
Jan 30 13:16:47.613990 containerd[1476]: time="2025-01-30T13:16:47.613979675Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:16:47.621266 containerd[1476]: time="2025-01-30T13:16:47.621216129Z" level=info msg="TearDown network for sandbox \"fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d\" successfully"
Jan 30 13:16:47.621266 containerd[1476]: time="2025-01-30T13:16:47.62125228Z" level=info msg="StopPodSandbox for \"fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d\" returns successfully"
Jan 30 13:16:47.628798 containerd[1476]: time="2025-01-30T13:16:47.628622789Z" level=info msg="TearDown network for sandbox \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" successfully"
Jan 30 13:16:47.628798 containerd[1476]: time="2025-01-30T13:16:47.628646885Z" level=info msg="StopPodSandbox for \"9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e\" returns successfully"
Jan 30 13:16:47.703585 kubelet[2590]: I0130 13:16:47.703527 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xfsn\" (UniqueName: \"kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-kube-api-access-8xfsn\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704067 kubelet[2590]: I0130 13:16:47.703598 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1549af23-943a-4e88-a923-20425b4cdf74-cilium-config-path\") pod \"1549af23-943a-4e88-a923-20425b4cdf74\" (UID: \"1549af23-943a-4e88-a923-20425b4cdf74\") "
Jan 30 13:16:47.704067 kubelet[2590]: I0130 13:16:47.703617 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2hv\" (UniqueName: \"kubernetes.io/projected/1549af23-943a-4e88-a923-20425b4cdf74-kube-api-access-9z2hv\") pod \"1549af23-943a-4e88-a923-20425b4cdf74\" (UID: \"1549af23-943a-4e88-a923-20425b4cdf74\") "
Jan 30 13:16:47.704067 kubelet[2590]: I0130 13:16:47.703634 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-config-path\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704067 kubelet[2590]: I0130 13:16:47.703648 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-bpf-maps\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704067 kubelet[2590]: I0130 13:16:47.703680 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-net\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704067 kubelet[2590]: I0130 13:16:47.703694 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-run\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704217 kubelet[2590]: I0130 13:16:47.703707 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-kernel\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704217 kubelet[2590]: I0130 13:16:47.703722 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-lib-modules\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704217 kubelet[2590]: I0130 13:16:47.703738 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hubble-tls\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704217 kubelet[2590]: I0130 13:16:47.703756 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1241a703-f12e-45a9-a38d-0fddcc34b1d3-clustermesh-secrets\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704217 kubelet[2590]: I0130 13:16:47.703771 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cni-path\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704217 kubelet[2590]: I0130 13:16:47.703788 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-etc-cni-netd\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704357 kubelet[2590]: I0130 13:16:47.703800 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-cgroup\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704357 kubelet[2590]: I0130 13:16:47.703816 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hostproc\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704357 kubelet[2590]: I0130 13:16:47.703831 2590 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-xtables-lock\") pod \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\" (UID: \"1241a703-f12e-45a9-a38d-0fddcc34b1d3\") "
Jan 30 13:16:47.704357 kubelet[2590]: I0130 13:16:47.703907 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.704357 kubelet[2590]: I0130 13:16:47.704115 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.707480 kubelet[2590]: I0130 13:16:47.707445 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1549af23-943a-4e88-a923-20425b4cdf74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1549af23-943a-4e88-a923-20425b4cdf74" (UID: "1549af23-943a-4e88-a923-20425b4cdf74"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 13:16:47.707528 kubelet[2590]: I0130 13:16:47.707492 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.707528 kubelet[2590]: I0130 13:16:47.707509 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.707528 kubelet[2590]: I0130 13:16:47.707523 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.707606 kubelet[2590]: I0130 13:16:47.707539 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.707606 kubelet[2590]: I0130 13:16:47.707564 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cni-path" (OuterVolumeSpecName: "cni-path") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.708249 kubelet[2590]: I0130 13:16:47.707836 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 13:16:47.708249 kubelet[2590]: I0130 13:16:47.707881 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.708249 kubelet[2590]: I0130 13:16:47.707898 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.708249 kubelet[2590]: I0130 13:16:47.708007 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hostproc" (OuterVolumeSpecName: "hostproc") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:16:47.709672 kubelet[2590]: I0130 13:16:47.709463 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-kube-api-access-8xfsn" (OuterVolumeSpecName: "kube-api-access-8xfsn") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "kube-api-access-8xfsn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:16:47.710107 kubelet[2590]: I0130 13:16:47.710080 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:16:47.710898 kubelet[2590]: I0130 13:16:47.710865 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1549af23-943a-4e88-a923-20425b4cdf74-kube-api-access-9z2hv" (OuterVolumeSpecName: "kube-api-access-9z2hv") pod "1549af23-943a-4e88-a923-20425b4cdf74" (UID: "1549af23-943a-4e88-a923-20425b4cdf74"). InnerVolumeSpecName "kube-api-access-9z2hv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:16:47.710936 kubelet[2590]: I0130 13:16:47.710910 2590 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1241a703-f12e-45a9-a38d-0fddcc34b1d3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1241a703-f12e-45a9-a38d-0fddcc34b1d3" (UID: "1241a703-f12e-45a9-a38d-0fddcc34b1d3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804207 2590 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z2hv\" (UniqueName: \"kubernetes.io/projected/1549af23-943a-4e88-a923-20425b4cdf74-kube-api-access-9z2hv\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804227 2590 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804236 2590 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804245 2590 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804253 2590 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804261 2590 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804270 2590 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804278 kubelet[2590]: I0130 13:16:47.804278 2590 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804286 2590 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1241a703-f12e-45a9-a38d-0fddcc34b1d3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804295 2590 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804303 2590 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804311 2590 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804319 2590 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804328 2590 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1241a703-f12e-45a9-a38d-0fddcc34b1d3-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804335 2590 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xfsn\" (UniqueName: \"kubernetes.io/projected/1241a703-f12e-45a9-a38d-0fddcc34b1d3-kube-api-access-8xfsn\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:47.804487 kubelet[2590]: I0130 13:16:47.804344 2590 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1549af23-943a-4e88-a923-20425b4cdf74-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:16:48.424155 kubelet[2590]: E0130 13:16:48.424103 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:48.432535 systemd[1]: Removed slice kubepods-burstable-pod1241a703_f12e_45a9_a38d_0fddcc34b1d3.slice - libcontainer container kubepods-burstable-pod1241a703_f12e_45a9_a38d_0fddcc34b1d3.slice.
Jan 30 13:16:48.432803 systemd[1]: kubepods-burstable-pod1241a703_f12e_45a9_a38d_0fddcc34b1d3.slice: Consumed 6.795s CPU time.
Jan 30 13:16:48.434015 systemd[1]: Removed slice kubepods-besteffort-pod1549af23_943a_4e88_a923_20425b4cdf74.slice - libcontainer container kubepods-besteffort-pod1549af23_943a_4e88_a923_20425b4cdf74.slice.
Jan 30 13:16:48.473576 kubelet[2590]: E0130 13:16:48.473539 2590 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:16:48.481034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa521c7b85b19d54a0a7e51e7e0efdbabcb50cbad12bf3f8ffac87a1b4346a5d-rootfs.mount: Deactivated successfully.
Jan 30 13:16:48.481158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ff0bc27b7832515a2ae8fe913df3002c8d291dd172d01498cb448e4a6d1639e-rootfs.mount: Deactivated successfully.
Jan 30 13:16:48.481244 systemd[1]: var-lib-kubelet-pods-1549af23\x2d943a\x2d4e88\x2da923\x2d20425b4cdf74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9z2hv.mount: Deactivated successfully.
Jan 30 13:16:48.481327 systemd[1]: var-lib-kubelet-pods-1241a703\x2df12e\x2d45a9\x2da38d\x2d0fddcc34b1d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xfsn.mount: Deactivated successfully.
Jan 30 13:16:48.481414 systemd[1]: var-lib-kubelet-pods-1241a703\x2df12e\x2d45a9\x2da38d\x2d0fddcc34b1d3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 13:16:48.481504 systemd[1]: var-lib-kubelet-pods-1241a703\x2df12e\x2d45a9\x2da38d\x2d0fddcc34b1d3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
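The "\x2d" and "\x7e" runs in the mount-unit names above are systemd unit-name escaping: "/" maps to "-", so a literal "-" inside a path component is escaped as \x2d and "~" as \x7e (systemd-escape --path performs the canonical transformation). A simplified sketch of the path-to-unit-name direction, enough to reproduce the names above (the real rules in systemd.unit(5) escape more characters):

    def escape_path(path: str) -> str:
        # "/" separators become "-"; literal "-" and "~" are hex-escaped.
        comps = path.strip("/").split("/")
        return "-".join(c.replace("-", r"\x2d").replace("~", r"\x7e")
                        for c in comps)

    print(escape_path("/var/lib/kubelet/pods/1241a703-f12e-45a9-a38d-0fddcc34b1d3")
          + ".mount")
    # -> var-lib-kubelet-pods-1241a703\x2df12e\x2d45a9\x2da38d\x2d0fddcc34b1d3.mount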
Jan 30 13:16:48.613110 kubelet[2590]: I0130 13:16:48.612927 2590 scope.go:117] "RemoveContainer" containerID="945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60"
Jan 30 13:16:48.619972 containerd[1476]: time="2025-01-30T13:16:48.619926311Z" level=info msg="RemoveContainer for \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\""
Jan 30 13:16:48.623722 containerd[1476]: time="2025-01-30T13:16:48.623621867Z" level=info msg="RemoveContainer for \"945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60\" returns successfully"
Jan 30 13:16:48.624223 kubelet[2590]: I0130 13:16:48.624178 2590 scope.go:117] "RemoveContainer" containerID="76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6"
Jan 30 13:16:48.625811 containerd[1476]: time="2025-01-30T13:16:48.625784657Z" level=info msg="RemoveContainer for \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\""
Jan 30 13:16:48.629263 containerd[1476]: time="2025-01-30T13:16:48.629233332Z" level=info msg="RemoveContainer for \"76fb16ccc4d591e9e6e1f51506aa143b0c73b35c27d612ffb98c96bacfd7fbb6\" returns successfully"
Jan 30 13:16:48.629432 kubelet[2590]: I0130 13:16:48.629394 2590 scope.go:117] "RemoveContainer" containerID="dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810"
Jan 30 13:16:48.630436 containerd[1476]: time="2025-01-30T13:16:48.630417043Z" level=info msg="RemoveContainer for \"dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810\""
Jan 30 13:16:48.636891 containerd[1476]: time="2025-01-30T13:16:48.636868100Z" level=info msg="RemoveContainer for \"dd919996b8881b4e14f36c6416229fb1332d0d1789769f66b9b1f4e4b5279810\" returns successfully"
Jan 30 13:16:48.637055 kubelet[2590]: I0130 13:16:48.637022 2590 scope.go:117] "RemoveContainer" containerID="f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7"
Jan 30 13:16:48.637843 containerd[1476]: time="2025-01-30T13:16:48.637824517Z" level=info msg="RemoveContainer for \"f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7\""
Jan 30 13:16:48.641614 containerd[1476]: time="2025-01-30T13:16:48.641588665Z" level=info msg="RemoveContainer for \"f03e959ae59bab74a435b484bb175561f79505187c09b0df117932b5da0b2db7\" returns successfully"
Jan 30 13:16:48.641794 kubelet[2590]: I0130 13:16:48.641774 2590 scope.go:117] "RemoveContainer" containerID="fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50"
Jan 30 13:16:48.642560 containerd[1476]: time="2025-01-30T13:16:48.642540993Z" level=info msg="RemoveContainer for \"fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50\""
Jan 30 13:16:48.645916 containerd[1476]: time="2025-01-30T13:16:48.645892363Z" level=info msg="RemoveContainer for \"fc2ed19c450c1fb39ae4f094b7e7f5f4cb45e4b5f93e221b580fbb91d98bfc50\" returns successfully"
Jan 30 13:16:48.646084 kubelet[2590]: I0130 13:16:48.646050 2590 scope.go:117] "RemoveContainer" containerID="7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476"
Jan 30 13:16:48.647037 containerd[1476]: time="2025-01-30T13:16:48.647013854Z" level=info msg="RemoveContainer for \"7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476\""
Jan 30 13:16:48.649895 containerd[1476]: time="2025-01-30T13:16:48.649870499Z" level=info msg="RemoveContainer for \"7f2bf5c9cef6a018709e5fd4b51fb3f1cc0ad72cf79d8edcb0d584823c9f7476\" returns successfully"
Jan 30 13:16:49.448317 sshd[4246]: Connection closed by 10.0.0.1 port 36162
Jan 30 13:16:49.448849 sshd-session[4244]: pam_unix(sshd:session): session closed for user core
Jan 30 13:16:49.465422 systemd[1]: sshd@25-10.0.0.150:22-10.0.0.1:36162.service: Deactivated successfully.
Jan 30 13:16:49.468179 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:16:49.470310 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:16:49.479062 systemd[1]: Started sshd@26-10.0.0.150:22-10.0.0.1:52452.service - OpenSSH per-connection server daemon (10.0.0.1:52452).
Jan 30 13:16:49.480192 systemd-logind[1459]: Removed session 26.
Jan 30 13:16:49.515102 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 52452 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY
Jan 30 13:16:49.516728 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:16:49.521264 systemd-logind[1459]: New session 27 of user core.
Jan 30 13:16:49.531848 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:16:50.079688 sshd[4408]: Connection closed by 10.0.0.1 port 52452
Jan 30 13:16:50.079979 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Jan 30 13:16:50.095003 kubelet[2590]: I0130 13:16:50.091451 2590 memory_manager.go:355] "RemoveStaleState removing state" podUID="1241a703-f12e-45a9-a38d-0fddcc34b1d3" containerName="cilium-agent"
Jan 30 13:16:50.095003 kubelet[2590]: I0130 13:16:50.091484 2590 memory_manager.go:355] "RemoveStaleState removing state" podUID="1549af23-943a-4e88-a923-20425b4cdf74" containerName="cilium-operator"
Jan 30 13:16:50.093924 systemd[1]: sshd@26-10.0.0.150:22-10.0.0.1:52452.service: Deactivated successfully.
Jan 30 13:16:50.096189 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:16:50.100088 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:16:50.113007 systemd[1]: Started sshd@27-10.0.0.150:22-10.0.0.1:52460.service - OpenSSH per-connection server daemon (10.0.0.1:52460).
Jan 30 13:16:50.117014 systemd-logind[1459]: Removed session 27.
Jan 30 13:16:50.117363 systemd[1]: Created slice kubepods-burstable-pod89d42eba_647f_4c1a_aa92_17383e071254.slice - libcontainer container kubepods-burstable-pod89d42eba_647f_4c1a_aa92_17383e071254.slice.
Jan 30 13:16:50.129430 kubelet[2590]: I0130 13:16:50.129368 2590 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:16:50Z","lastTransitionTime":"2025-01-30T13:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 13:16:50.162098 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 52460 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY
Jan 30 13:16:50.163383 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:16:50.167175 systemd-logind[1459]: New session 28 of user core.
Jan 30 13:16:50.180787 systemd[1]: Started session-28.scope - Session 28 of User core.
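The six RemoveContainer round-trips above are kubelet's container garbage collection asking containerd to delete the dead cilium containers. A rough equivalent using containerd's Go client, assuming the stock socket path and the "k8s.io" namespace that CRI-managed containers live in (the ID is the first one removed above); this is a sketch of the effect, not the kubelet's actual code path:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        id := "945922408922e4850781e8861233299b958e4a5616a13bc4eeb7ab7a86d37c60"
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        // Delete the exited task first, if one is still recorded, then the
        // container itself together with its snapshot.
        if task, err := container.Task(ctx, nil); err == nil {
            if _, err := task.Delete(ctx); err != nil {
                log.Fatal(err)
            }
        }
        if err := container.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
            log.Fatal(err)
        }
    }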
Jan 30 13:16:50.220625 kubelet[2590]: I0130 13:16:50.220581 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89d42eba-647f-4c1a-aa92-17383e071254-hubble-tls\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220625 kubelet[2590]: I0130 13:16:50.220625 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-hostproc\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220748 kubelet[2590]: I0130 13:16:50.220641 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-etc-cni-netd\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220748 kubelet[2590]: I0130 13:16:50.220678 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89d42eba-647f-4c1a-aa92-17383e071254-cilium-ipsec-secrets\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220748 kubelet[2590]: I0130 13:16:50.220704 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-xtables-lock\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220843 kubelet[2590]: I0130 13:16:50.220796 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89d42eba-647f-4c1a-aa92-17383e071254-cilium-config-path\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220868 kubelet[2590]: I0130 13:16:50.220842 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-host-proc-sys-kernel\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220900 kubelet[2590]: I0130 13:16:50.220868 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcb8c\" (UniqueName: \"kubernetes.io/projected/89d42eba-647f-4c1a-aa92-17383e071254-kube-api-access-kcb8c\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220900 kubelet[2590]: I0130 13:16:50.220893 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-cilium-run\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220942 kubelet[2590]: I0130 13:16:50.220909 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-cni-path\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220942 kubelet[2590]: I0130 13:16:50.220926 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-bpf-maps\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220991 kubelet[2590]: I0130 13:16:50.220942 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89d42eba-647f-4c1a-aa92-17383e071254-clustermesh-secrets\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220991 kubelet[2590]: I0130 13:16:50.220957 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-host-proc-sys-net\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.220991 kubelet[2590]: I0130 13:16:50.220981 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-cilium-cgroup\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.221056 kubelet[2590]: I0130 13:16:50.220996 2590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89d42eba-647f-4c1a-aa92-17383e071254-lib-modules\") pod \"cilium-zmjb8\" (UID: \"89d42eba-647f-4c1a-aa92-17383e071254\") " pod="kube-system/cilium-zmjb8"
Jan 30 13:16:50.229848 sshd[4420]: Connection closed by 10.0.0.1 port 52460
Jan 30 13:16:50.230197 sshd-session[4418]: pam_unix(sshd:session): session closed for user core
Jan 30 13:16:50.243586 systemd[1]: sshd@27-10.0.0.150:22-10.0.0.1:52460.service: Deactivated successfully.
Jan 30 13:16:50.245524 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:16:50.247233 systemd-logind[1459]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:16:50.255880 systemd[1]: Started sshd@28-10.0.0.150:22-10.0.0.1:52470.service - OpenSSH per-connection server daemon (10.0.0.1:52470).
Jan 30 13:16:50.256726 systemd-logind[1459]: Removed session 28.
Jan 30 13:16:50.287449 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 52470 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY
Jan 30 13:16:50.288813 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:16:50.292632 systemd-logind[1459]: New session 29 of user core.
Jan 30 13:16:50.304845 systemd[1]: Started session-29.scope - Session 29 of User core.
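Each VerifyControllerAttachedVolume record above names one volume from the new cilium-zmjb8 pod spec. The same list can be read back from the API server; a sketch with client-go, assuming it runs in-cluster (outside the cluster, build the config from a kubeconfig instead):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := clientset.CoreV1().Pods("kube-system").Get(
            context.TODO(), "cilium-zmjb8", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, v := range pod.Spec.Volumes {
            // HostPath covers hostproc, cni-path, bpf-maps, ...; Secret covers
            // clustermesh-secrets and cilium-ipsec-secrets; the rest are
            // projected (service-account token, hubble-tls) or a ConfigMap
            // (cilium-config-path).
            switch {
            case v.HostPath != nil:
                fmt.Printf("%-25s hostPath %s\n", v.Name, v.HostPath.Path)
            case v.Secret != nil:
                fmt.Printf("%-25s secret   %s\n", v.Name, v.Secret.SecretName)
            default:
                fmt.Printf("%-25s other\n", v.Name)
            }
        }
    }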
Jan 30 13:16:50.425533 kubelet[2590]: I0130 13:16:50.425484 2590 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1241a703-f12e-45a9-a38d-0fddcc34b1d3" path="/var/lib/kubelet/pods/1241a703-f12e-45a9-a38d-0fddcc34b1d3/volumes"
Jan 30 13:16:50.426336 kubelet[2590]: I0130 13:16:50.426308 2590 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1549af23-943a-4e88-a923-20425b4cdf74" path="/var/lib/kubelet/pods/1549af23-943a-4e88-a923-20425b4cdf74/volumes"
Jan 30 13:16:50.426809 kubelet[2590]: E0130 13:16:50.426786 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:50.427233 containerd[1476]: time="2025-01-30T13:16:50.427181923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmjb8,Uid:89d42eba-647f-4c1a-aa92-17383e071254,Namespace:kube-system,Attempt:0,}"
Jan 30 13:16:50.447715 containerd[1476]: time="2025-01-30T13:16:50.447000043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:16:50.447715 containerd[1476]: time="2025-01-30T13:16:50.447690109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:16:50.447715 containerd[1476]: time="2025-01-30T13:16:50.447713344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:16:50.447854 containerd[1476]: time="2025-01-30T13:16:50.447791733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:16:50.467790 systemd[1]: Started cri-containerd-b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814.scope - libcontainer container b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814.
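The RunPodSandbox message above is containerd's CRI service handling a gRPC call from kubelet. A stripped-down sketch of the same call via k8s.io/cri-api, assuming containerd serves CRI on its default socket; a real kubelet request carries far more sandbox config (labels, DNS, port mappings, a Linux security context) than shown here:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd exposes the CRI API on its main socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                // Metadata values copied from the log record above.
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-zmjb8",
                    Namespace: "kube-system",
                    Uid:       "89d42eba-647f-4c1a-aa92-17383e071254",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }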
Jan 30 13:16:50.488358 containerd[1476]: time="2025-01-30T13:16:50.488316246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmjb8,Uid:89d42eba-647f-4c1a-aa92-17383e071254,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\""
Jan 30 13:16:50.489101 kubelet[2590]: E0130 13:16:50.489053 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:50.491157 containerd[1476]: time="2025-01-30T13:16:50.491122239Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:16:50.505729 containerd[1476]: time="2025-01-30T13:16:50.505670795Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49\""
Jan 30 13:16:50.506127 containerd[1476]: time="2025-01-30T13:16:50.506106035Z" level=info msg="StartContainer for \"563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49\""
Jan 30 13:16:50.534785 systemd[1]: Started cri-containerd-563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49.scope - libcontainer container 563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49.
Jan 30 13:16:50.559520 containerd[1476]: time="2025-01-30T13:16:50.559477317Z" level=info msg="StartContainer for \"563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49\" returns successfully"
Jan 30 13:16:50.569162 systemd[1]: cri-containerd-563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49.scope: Deactivated successfully.
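mount-cgroup is the first of Cilium's init containers, and the pattern above (CreateContainer, StartContainer, scope deactivation as soon as the process exits) repeats below for apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent. The cycle looks roughly like this with containerd's native Go client; the container ID and image reference are illustrative, and the image is assumed to be pulled already:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Assumes the image is already in the content store; the tag is a guess.
        image, err := client.GetImage(ctx, "quay.io/cilium/cilium:latest")
        if err != nil {
            log.Fatal(err)
        }
        container, err := client.NewContainer(ctx, "mount-cgroup-demo",
            containerd.WithNewSnapshot("mount-cgroup-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        // Subscribe to the exit channel before Start to avoid racing a fast exit.
        exitC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil { // the "StartContainer" step
            log.Fatal(err)
        }
        status := <-exitC // init containers exit on their own; the scope then deactivates
        code, _, _ := status.Result()
        fmt.Println("exit code:", code)
    }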
Jan 30 13:16:50.599061 containerd[1476]: time="2025-01-30T13:16:50.599001721Z" level=info msg="shim disconnected" id=563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49 namespace=k8s.io
Jan 30 13:16:50.599061 containerd[1476]: time="2025-01-30T13:16:50.599053180Z" level=warning msg="cleaning up after shim disconnected" id=563c5758766d83af95cd3bba9aab1b832fbab1ea770100a45e1b5c91cb288c49 namespace=k8s.io
Jan 30 13:16:50.599061 containerd[1476]: time="2025-01-30T13:16:50.599063258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:16:50.627072 kubelet[2590]: E0130 13:16:50.627035 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:50.628763 containerd[1476]: time="2025-01-30T13:16:50.628726140Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:16:50.640804 containerd[1476]: time="2025-01-30T13:16:50.640758205Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98\""
Jan 30 13:16:50.641267 containerd[1476]: time="2025-01-30T13:16:50.641239853Z" level=info msg="StartContainer for \"e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98\""
Jan 30 13:16:50.668785 systemd[1]: Started cri-containerd-e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98.scope - libcontainer container e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98.
Jan 30 13:16:50.693885 containerd[1476]: time="2025-01-30T13:16:50.693768277Z" level=info msg="StartContainer for \"e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98\" returns successfully"
Jan 30 13:16:50.700162 systemd[1]: cri-containerd-e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98.scope: Deactivated successfully.
Jan 30 13:16:50.723011 containerd[1476]: time="2025-01-30T13:16:50.722949208Z" level=info msg="shim disconnected" id=e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98 namespace=k8s.io
Jan 30 13:16:50.723011 containerd[1476]: time="2025-01-30T13:16:50.723007059Z" level=warning msg="cleaning up after shim disconnected" id=e79eb76ebf8069e209a9d152b46b4dbb77b47da93b330dcd6a7bbda328afbd98 namespace=k8s.io
Jan 30 13:16:50.723011 containerd[1476]: time="2025-01-30T13:16:50.723016588Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:16:51.630291 kubelet[2590]: E0130 13:16:51.630256 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:51.632869 containerd[1476]: time="2025-01-30T13:16:51.631967803Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:16:51.650879 containerd[1476]: time="2025-01-30T13:16:51.650835203Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b\""
Jan 30 13:16:51.651372 containerd[1476]: time="2025-01-30T13:16:51.651325178Z" level=info msg="StartContainer for \"7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b\""
Jan 30 13:16:51.678889 systemd[1]: Started cri-containerd-7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b.scope - libcontainer container 7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b.
Jan 30 13:16:51.714272 containerd[1476]: time="2025-01-30T13:16:51.714223856Z" level=info msg="StartContainer for \"7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b\" returns successfully"
Jan 30 13:16:51.715954 systemd[1]: cri-containerd-7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b.scope: Deactivated successfully.
Jan 30 13:16:51.744479 containerd[1476]: time="2025-01-30T13:16:51.744400523Z" level=info msg="shim disconnected" id=7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b namespace=k8s.io
Jan 30 13:16:51.744479 containerd[1476]: time="2025-01-30T13:16:51.744480765Z" level=warning msg="cleaning up after shim disconnected" id=7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b namespace=k8s.io
Jan 30 13:16:51.744793 containerd[1476]: time="2025-01-30T13:16:51.744493360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:16:52.326639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f6290b26f1da797c7ef1515cd60b1132a8e6c8e757d9e054e90379c9100aa1b-rootfs.mount: Deactivated successfully.
Jan 30 13:16:52.634138 kubelet[2590]: E0130 13:16:52.634089 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:52.638674 containerd[1476]: time="2025-01-30T13:16:52.637696956Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:16:52.652203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100505427.mount: Deactivated successfully.
Jan 30 13:16:52.654966 containerd[1476]: time="2025-01-30T13:16:52.654918555Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472\""
Jan 30 13:16:52.655975 containerd[1476]: time="2025-01-30T13:16:52.655951073Z" level=info msg="StartContainer for \"b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472\""
Jan 30 13:16:52.688797 systemd[1]: Started cri-containerd-b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472.scope - libcontainer container b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472.
Jan 30 13:16:52.714487 systemd[1]: cri-containerd-b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472.scope: Deactivated successfully.
Jan 30 13:16:52.717208 containerd[1476]: time="2025-01-30T13:16:52.717154942Z" level=info msg="StartContainer for \"b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472\" returns successfully"
Jan 30 13:16:52.742750 containerd[1476]: time="2025-01-30T13:16:52.742644170Z" level=info msg="shim disconnected" id=b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472 namespace=k8s.io
Jan 30 13:16:52.742750 containerd[1476]: time="2025-01-30T13:16:52.742744341Z" level=warning msg="cleaning up after shim disconnected" id=b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472 namespace=k8s.io
Jan 30 13:16:52.742750 containerd[1476]: time="2025-01-30T13:16:52.742755602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:16:53.326692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2160edc93f1e78e330fbd943a867457859a6e885b37163aaef85cb74846d472-rootfs.mount: Deactivated successfully.
Jan 30 13:16:53.474816 kubelet[2590]: E0130 13:16:53.474770 2590 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:16:53.638799 kubelet[2590]: E0130 13:16:53.638769 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:53.642112 containerd[1476]: time="2025-01-30T13:16:53.642057835Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:16:53.657172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276310485.mount: Deactivated successfully.
Jan 30 13:16:53.659534 containerd[1476]: time="2025-01-30T13:16:53.659489190Z" level=info msg="CreateContainer within sandbox \"b7e2efbf68503a0762e242730fa17455056aefde50fc67fe062ae0642bfdf814\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fee51f33478134864506c73d816e50fc69c3959ce14cfe317959c95f5d3f3e97\""
Jan 30 13:16:53.660272 containerd[1476]: time="2025-01-30T13:16:53.659948596Z" level=info msg="StartContainer for \"fee51f33478134864506c73d816e50fc69c3959ce14cfe317959c95f5d3f3e97\""
Jan 30 13:16:53.694895 systemd[1]: Started cri-containerd-fee51f33478134864506c73d816e50fc69c3959ce14cfe317959c95f5d3f3e97.scope - libcontainer container fee51f33478134864506c73d816e50fc69c3959ce14cfe317959c95f5d3f3e97.
Jan 30 13:16:53.725003 containerd[1476]: time="2025-01-30T13:16:53.724960765Z" level=info msg="StartContainer for \"fee51f33478134864506c73d816e50fc69c3959ce14cfe317959c95f5d3f3e97\" returns successfully"
Jan 30 13:16:54.145686 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 13:16:54.423745 kubelet[2590]: E0130 13:16:54.423601 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:54.643675 kubelet[2590]: E0130 13:16:54.643621 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:56.423800 kubelet[2590]: E0130 13:16:56.423766 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:56.428270 kubelet[2590]: E0130 13:16:56.428231 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:57.054797 systemd-networkd[1405]: lxc_health: Link UP
Jan 30 13:16:57.064182 systemd-networkd[1405]: lxc_health: Gained carrier
Jan 30 13:16:58.136858 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Jan 30 13:16:58.428685 kubelet[2590]: E0130 13:16:58.428362 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:58.444310 kubelet[2590]: I0130 13:16:58.444254 2590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zmjb8" podStartSLOduration=8.444232635 podStartE2EDuration="8.444232635s" podCreationTimestamp="2025-01-30 13:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:16:54.656586569 +0000 UTC m=+86.307102748" watchObservedRunningTime="2025-01-30 13:16:58.444232635 +0000 UTC m=+90.094748744"
Jan 30 13:16:58.657492 kubelet[2590]: E0130 13:16:58.657224 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:16:59.659212 kubelet[2590]: E0130 13:16:59.659173 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:17:00.423400 kubelet[2590]: E0130 13:17:00.423367 2590 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:17:04.980935 sshd[4428]: Connection closed by 10.0.0.1 port 52470
Jan 30 13:17:04.981418 sshd-session[4426]: pam_unix(sshd:session): session closed for user core
Jan 30 13:17:04.984811 systemd[1]: sshd@28-10.0.0.150:22-10.0.0.1:52470.service: Deactivated successfully.
Jan 30 13:17:04.986610 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 13:17:04.987197 systemd-logind[1459]: Session 29 logged out. Waiting for processes to exit.
Jan 30 13:17:04.988007 systemd-logind[1459]: Removed session 29.
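The lxc_health link that systemd-networkd reports above is the veth interface the new cilium-agent creates for its health probing; once it gains a carrier and the CNI initializes, the node's Ready condition flips back (the podStartSLOduration of 8.444s is simply observedRunningTime minus the 13:16:50 creation timestamp). One way to confirm the link state from the node, sketched with the third-party github.com/vishvananda/netlink package (an assumption; any rtnetlink client, or plain `ip link show lxc_health`, works as well):

    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        link, err := netlink.LinkByName("lxc_health")
        if err != nil {
            log.Fatal(err) // absent until the cilium-agent container is up
        }
        attrs := link.Attrs()
        fmt.Printf("%s: type=%s state=%s mtu=%d\n",
            attrs.Name, link.Type(), attrs.OperState, attrs.MTU)
    }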