Jan 29 11:13:13.906535 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 11:13:13.906567 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:13:13.906583 kernel: BIOS-provided physical RAM map:
Jan 29 11:13:13.906592 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:13:13.906600 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:13:13.906608 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:13:13.906619 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:13:13.906628 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:13:13.906636 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 29 11:13:13.906645 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 29 11:13:13.906657 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 29 11:13:13.906666 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 29 11:13:13.906674 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 29 11:13:13.906682 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 29 11:13:13.906693 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 29 11:13:13.906702 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:13:13.906714 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 29 11:13:13.906729 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 29 11:13:13.906743 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 29 11:13:13.906757 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 29 11:13:13.906771 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 29 11:13:13.906786 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:13:13.906800 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 11:13:13.906814 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:13:13.906828 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 29 11:13:13.906845 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:13:13.906859 kernel: NX (Execute Disable) protection: active
Jan 29 11:13:13.906879 kernel: APIC: Static calls initialized
Jan 29 11:13:13.906895 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 29 11:13:13.906906 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 29 11:13:13.906915 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 29 11:13:13.906923 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 29 11:13:13.906932 kernel: extended physical RAM map:
Jan 29 11:13:13.906941 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 29 11:13:13.906950 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 29 11:13:13.906959 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 29 11:13:13.906968 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 29 11:13:13.906977 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 29 11:13:13.906989 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 29 11:13:13.906997 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 29 11:13:13.907011 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 29 11:13:13.907020 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 29 11:13:13.907029 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 29 11:13:13.907039 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 29 11:13:13.907048 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 29 11:13:13.907060 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 29 11:13:13.907070 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 29 11:13:13.907083 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 29 11:13:13.907093 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 29 11:13:13.907102 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 29 11:13:13.907111 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 29 11:13:13.907120 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 29 11:13:13.907130 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 29 11:13:13.907139 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 29 11:13:13.907153 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 29 11:13:13.907162 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 29 11:13:13.907171 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 29 11:13:13.907188 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:13:13.907198 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 29 11:13:13.907208 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:13:13.907225 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:13:13.907234 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 29 11:13:13.907244 kernel: random: crng init done
Jan 29 11:13:13.907253 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 29 11:13:13.907262 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 29 11:13:13.907275 kernel: secureboot: Secure boot disabled
Jan 29 11:13:13.907284 kernel: SMBIOS 2.8 present.
Jan 29 11:13:13.907294 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 29 11:13:13.907303 kernel: Hypervisor detected: KVM
Jan 29 11:13:13.907312 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:13:13.907322 kernel: kvm-clock: using sched offset of 2526366021 cycles
Jan 29 11:13:13.907332 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:13:13.907342 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 11:13:13.907352 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:13:13.907362 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:13:13.907371 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 29 11:13:13.907385 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 29 11:13:13.907395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:13:13.907405 kernel: Using GB pages for direct mapping
Jan 29 11:13:13.907415 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:13:13.907426 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 29 11:13:13.907459 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:13:13.907470 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:13.907480 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:13.907491 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 29 11:13:13.907505 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:13.907516 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:13.907537 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:13.907547 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:13:13.907558 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:13:13.907568 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 29 11:13:13.907578 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 29 11:13:13.907588 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 29 11:13:13.907599 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 29 11:13:13.907613 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 29 11:13:13.907623 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 29 11:13:13.907633 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 29 11:13:13.907644 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 29 11:13:13.907654 kernel: No NUMA configuration found
Jan 29 11:13:13.907664 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 29 11:13:13.907675 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 29 11:13:13.907685 kernel: Zone ranges:
Jan 29 11:13:13.907695 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:13:13.907708 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 29 11:13:13.907718 kernel: Normal empty
Jan 29 11:13:13.907728 kernel: Movable zone start for each node
Jan 29 11:13:13.907739 kernel: Early memory node ranges
Jan 29 11:13:13.907749 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 29 11:13:13.907759 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 29 11:13:13.907769 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 29 11:13:13.907780 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 29 11:13:13.907791 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 29 11:13:13.907807 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 29 11:13:13.907819 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 29 11:13:13.907830 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 29 11:13:13.907840 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 29 11:13:13.907850 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:13:13.907861 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 29 11:13:13.907882 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 29 11:13:13.907896 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:13:13.907906 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 29 11:13:13.907917 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 29 11:13:13.907928 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 29 11:13:13.907939 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 29 11:13:13.907953 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 29 11:13:13.907964 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:13:13.907975 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:13:13.907986 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:13:13.907997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:13:13.908012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:13:13.908023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:13:13.908034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:13:13.908045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:13:13.908056 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:13:13.908067 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:13:13.908078 kernel: TSC deadline timer available
Jan 29 11:13:13.908089 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:13:13.908100 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:13:13.908114 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:13:13.908125 kernel: kvm-guest: setup PV sched yield
Jan 29 11:13:13.908136 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 29 11:13:13.908147 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:13:13.908158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:13:13.908170 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:13:13.908180 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:13:13.908191 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:13:13.908202 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:13:13.908213 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:13:13.908227 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:13:13.908240 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:13:13.908251 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:13:13.908262 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:13:13.908273 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:13:13.908284 kernel: Fallback order for Node 0: 0
Jan 29 11:13:13.908295 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 29 11:13:13.908306 kernel: Policy zone: DMA32
Jan 29 11:13:13.908321 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:13:13.908333 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 175776K reserved, 0K cma-reserved)
Jan 29 11:13:13.908344 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:13:13.908355 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 11:13:13.908366 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:13:13.908377 kernel: Dynamic Preempt: voluntary
Jan 29 11:13:13.908388 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:13:13.908400 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:13:13.908411 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:13:13.908425 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:13:13.908462 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:13:13.908474 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:13:13.908485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:13:13.908496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:13:13.908507 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:13:13.908518 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:13:13.908536 kernel: Console: colour dummy device 80x25
Jan 29 11:13:13.908547 kernel: printk: console [ttyS0] enabled
Jan 29 11:13:13.908563 kernel: ACPI: Core revision 20230628
Jan 29 11:13:13.908574 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:13:13.908584 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:13:13.908595 kernel: x2apic enabled
Jan 29 11:13:13.908605 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:13:13.908616 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:13:13.908626 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:13:13.908636 kernel: kvm-guest: setup PV IPIs
Jan 29 11:13:13.908646 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:13:13.908660 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:13:13.908671 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 11:13:13.908681 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:13:13.908691 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:13:13.908701 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:13:13.908711 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:13:13.908737 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:13:13.908761 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:13:13.908779 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:13:13.908797 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:13:13.908808 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:13:13.908821 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:13:13.908831 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:13:13.908842 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:13:13.908853 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:13:13.908864 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:13:13.908878 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:13:13.908892 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:13:13.908903 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:13:13.908913 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:13:13.908925 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:13:13.908936 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:13:13.908946 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:13:13.908957 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:13:13.908968 kernel: landlock: Up and running.
Jan 29 11:13:13.908979 kernel: SELinux: Initializing.
Jan 29 11:13:13.908993 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:13:13.909005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:13:13.909016 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:13:13.909027 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:13:13.909038 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:13:13.909048 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:13:13.909059 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:13:13.909070 kernel: ... version: 0
Jan 29 11:13:13.909081 kernel: ... bit width: 48
Jan 29 11:13:13.909095 kernel: ... generic registers: 6
Jan 29 11:13:13.909106 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:13:13.909117 kernel: ... max period: 00007fffffffffff
Jan 29 11:13:13.909128 kernel: ... fixed-purpose events: 0
Jan 29 11:13:13.909138 kernel: ... event mask: 000000000000003f
Jan 29 11:13:13.909149 kernel: signal: max sigframe size: 1776
Jan 29 11:13:13.909159 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:13:13.909171 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:13:13.909181 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:13:13.909196 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:13:13.909207 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:13:13.909217 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:13:13.909228 kernel: smpboot: Max logical packages: 1
Jan 29 11:13:13.909240 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 11:13:13.909250 kernel: devtmpfs: initialized
Jan 29 11:13:13.909261 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:13:13.909273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 29 11:13:13.909284 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 29 11:13:13.909298 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 29 11:13:13.909310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 29 11:13:13.909321 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 29 11:13:13.909332 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 29 11:13:13.909343 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:13:13.909354 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:13:13.909365 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:13:13.909376 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:13:13.909387 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:13:13.909402 kernel: audit: type=2000 audit(1738149194.423:1): state=initialized audit_enabled=0 res=1
Jan 29 11:13:13.909413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:13:13.909424 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:13:13.909451 kernel: cpuidle: using governor menu
Jan 29 11:13:13.909461 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:13:13.909471 kernel: dca service started, version 1.12.1
Jan 29 11:13:13.909482 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 29 11:13:13.909492 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:13:13.909503 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:13:13.909519 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:13:13.909538 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:13:13.909549 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:13:13.909561 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:13:13.909572 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:13:13.909582 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:13:13.909593 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:13:13.909605 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:13:13.909615 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:13:13.909630 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:13:13.909641 kernel: ACPI: Interpreter enabled
Jan 29 11:13:13.909652 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:13:13.909663 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:13:13.909674 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:13:13.909685 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:13:13.909696 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:13:13.909707 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:13:13.909954 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:13:13.910173 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:13:13.910331 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:13:13.910347 kernel: PCI host bridge to bus 0000:00
Jan 29 11:13:13.910521 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:13:13.910676 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:13:13.910825 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:13:13.910969 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 29 11:13:13.911112 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 29 11:13:13.911249 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 29 11:13:13.911387 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:13:13.911585 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:13:13.911750 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:13:13.911908 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 29 11:13:13.912066 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 29 11:13:13.912219 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 29 11:13:13.912370 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 29 11:13:13.912576 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:13:13.912749 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:13:13.912907 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 29 11:13:13.913068 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 29 11:13:13.913225 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 29 11:13:13.913398 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:13:13.913586 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 29 11:13:13.913743 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 29 11:13:13.913899 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 29 11:13:13.914067 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:13:13.914228 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 29 11:13:13.914383 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 29 11:13:13.914597 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 29 11:13:13.914753 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 29 11:13:13.914923 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:13:13.915077 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:13:13.915250 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:13:13.915411 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 29 11:13:13.915593 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 29 11:13:13.915757 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:13:13.915913 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 29 11:13:13.915929 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:13:13.915940 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:13:13.915952 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:13:13.915967 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:13:13.915978 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:13:13.915989 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:13:13.916000 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:13:13.916011 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:13:13.916022 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:13:13.916032 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:13:13.916043 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:13:13.916054 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:13:13.916068 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:13:13.916079 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:13:13.916090 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:13:13.916101 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:13:13.916112 kernel: iommu: Default domain type: Translated
Jan 29 11:13:13.916123 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:13:13.916134 kernel: efivars: Registered efivars operations
Jan 29 11:13:13.916145 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:13:13.916156 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:13:13.916170 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 29 11:13:13.916180 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 29 11:13:13.916191 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 29 11:13:13.916202 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 29 11:13:13.916213 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 29 11:13:13.916224 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 29 11:13:13.916235 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 29 11:13:13.916246 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 29 11:13:13.916401 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:13:13.916592 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:13:13.916749 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:13:13.916765 kernel: vgaarb: loaded
Jan 29 11:13:13.916776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:13:13.916787 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:13:13.916798 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:13:13.916809 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:13:13.916821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:13:13.916836 kernel: pnp: PnP ACPI init
Jan 29 11:13:13.917000 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 29 11:13:13.917017 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:13:13.917028 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:13:13.917039 kernel: NET: Registered PF_INET protocol family
Jan 29 11:13:13.917076 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:13:13.917090 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:13:13.917102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:13:13.917117 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:13:13.917128 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:13:13.917139 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:13:13.917151 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:13:13.917162 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:13:13.917174 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:13:13.917185 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:13:13.917340 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 29 11:13:13.917530 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 29 11:13:13.917683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:13:13.917824 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:13:13.917963 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:13:13.918104 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 29 11:13:13.918245 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 29 11:13:13.918383 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 29 11:13:13.918398 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:13:13.918410 kernel: Initialise system trusted keyrings
Jan 29 11:13:13.918426 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:13:13.918449 kernel: Key type asymmetric registered
Jan 29 11:13:13.918461 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:13:13.918472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:13:13.918484 kernel: io scheduler mq-deadline registered
Jan 29 11:13:13.918495 kernel: io scheduler kyber registered
Jan 29 11:13:13.918506 kernel: io scheduler bfq registered
Jan 29 11:13:13.918518 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:13:13.918538 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:13:13.918554 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:13:13.918568 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:13:13.918579 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:13:13.918591 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:13:13.918603 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:13:13.918614 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:13:13.918628 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:13:13.918793 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:13:13.918809 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:13:13.918951 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:13:13.919098 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:13:13 UTC (1738149193)
Jan 29 11:13:13.919254 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:13:13.919271 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:13:13.919290 kernel: efifb: probing for efifb
Jan 29 11:13:13.919302 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 29 11:13:13.919313 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 29 11:13:13.919324 kernel: efifb: scrolling: redraw
Jan 29 11:13:13.919336 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 11:13:13.919350 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:13:13.919362 kernel: fb0: EFI VGA frame buffer device
Jan 29 11:13:13.919373 kernel: pstore: Using crash dump compression: deflate
Jan 29 11:13:13.919385 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 29 11:13:13.919396 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:13:13.919410 kernel: Segment Routing with IPv6
Jan 29 11:13:13.919421 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:13:13.919432 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:13:13.919457 kernel: Key type dns_resolver registered
Jan 29 11:13:13.919468 kernel: IPI shorthand broadcast: enabled
Jan 29 11:13:13.919480 kernel: sched_clock: Marking stable (601002740, 152983521)->(771426980, -17440719)
Jan 29 11:13:13.919492 kernel: registered taskstats version 1
Jan 29 11:13:13.919503 kernel: Loading compiled-in X.509 certificates
Jan 29 11:13:13.919514 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 11:13:13.919537 kernel: Key type .fscrypt registered
Jan 29 11:13:13.919548 kernel: Key type fscrypt-provisioning registered
Jan 29 11:13:13.919559 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:13:13.919571 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:13:13.919582 kernel: ima: No architecture policies found
Jan 29 11:13:13.919593 kernel: clk: Disabling unused clocks
Jan 29 11:13:13.919605 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 11:13:13.919616 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:13:13.919631 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 11:13:13.919642 kernel: Run /init as init process
Jan 29 11:13:13.919653 kernel: with arguments:
Jan 29 11:13:13.919665 kernel: /init
Jan 29 11:13:13.919676 kernel: with environment:
Jan 29 11:13:13.919687 kernel: HOME=/
Jan 29 11:13:13.919698 kernel: TERM=linux
Jan 29 11:13:13.919709 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:13:13.919724 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:13:13.919740 systemd[1]: Detected virtualization kvm.
Jan 29 11:13:13.919753 systemd[1]: Detected architecture x86-64.
Jan 29 11:13:13.919764 systemd[1]: Running in initrd.
Jan 29 11:13:13.919776 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:13:13.919787 systemd[1]: Hostname set to <localhost>.
Jan 29 11:13:13.919800 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:13:13.919812 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:13:13.919824 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:13:13.919839 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:13:13.919852 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:13:13.919864 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:13:13.919876 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:13:13.919888 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:13:13.919905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:13:13.919920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:13:13.919933 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:13:13.919945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:13:13.919957 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:13:13.919969 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:13:13.919981 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:13:13.919993 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:13:13.920005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:13:13.920017 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:13:13.920032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:13:13.920044 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:13:13.920056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:13:13.920068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:13:13.920080 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:13:13.920092 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:13:13.920104 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:13:13.920116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:13:13.920133 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:13:13.920145 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:13:13.920157 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:13:13.920169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:13:13.920181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:13.920193 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:13:13.920205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:13:13.920217 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:13:13.920256 systemd-journald[194]: Collecting audit messages is disabled.
Jan 29 11:13:13.920286 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:13:13.920299 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:13:13.920312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:13:13.920324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:13.920336 systemd-journald[194]: Journal started
Jan 29 11:13:13.920361 systemd-journald[194]: Runtime Journal (/run/log/journal/7d4e6c699d1e4c2fb59c1f9e09a76f53) is 6.0M, max 48.3M, 42.2M free.
Jan 29 11:13:13.907237 systemd-modules-load[195]: Inserted module 'overlay'
Jan 29 11:13:13.924624 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:13:13.928984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:13:13.931934 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:13:13.935686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:13:13.941467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:13:13.943780 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 29 11:13:13.944459 kernel: Bridge firewalling registered
Jan 29 11:13:13.945411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:13:13.947634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:13:13.949788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:13:13.959808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:13:13.966611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:13:13.969074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:13.972703 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:13:13.990885 dracut-cmdline[232]: dracut-dracut-053
Jan 29 11:13:13.994742 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:13:14.002396 systemd-resolved[226]: Positive Trust Anchors:
Jan 29 11:13:14.002413 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:13:14.002458 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:13:14.004891 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 29 11:13:14.005931 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:13:14.012183 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:13:14.100492 kernel: SCSI subsystem initialized
Jan 29 11:13:14.109469 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:13:14.119478 kernel: iscsi: registered transport (tcp)
Jan 29 11:13:14.140476 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:13:14.140533 kernel: QLogic iSCSI HBA Driver
Jan 29 11:13:14.192395 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:13:14.204615 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:13:14.229363 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:13:14.229431 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:13:14.229455 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:13:14.270458 kernel: raid6: avx2x4 gen() 24503 MB/s
Jan 29 11:13:14.287455 kernel: raid6: avx2x2 gen() 28132 MB/s
Jan 29 11:13:14.304709 kernel: raid6: avx2x1 gen() 17254 MB/s
Jan 29 11:13:14.304728 kernel: raid6: using algorithm avx2x2 gen() 28132 MB/s
Jan 29 11:13:14.322793 kernel: raid6: .... xor() 14076 MB/s, rmw enabled
Jan 29 11:13:14.322823 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:13:14.348465 kernel: xor: automatically using best checksumming function avx
Jan 29 11:13:14.527478 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:13:14.541632 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:13:14.561649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:13:14.576851 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 29 11:13:14.582554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:13:14.590599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:13:14.604603 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jan 29 11:13:14.637722 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:13:14.657687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:13:14.729409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:13:14.740281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:13:14.753830 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:13:14.759056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:13:14.760628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:13:14.765027 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:13:14.773713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:13:14.777466 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:13:14.796763 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:13:14.822557 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:13:14.822573 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:13:14.822716 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:13:14.822727 kernel: libata version 3.00 loaded.
Jan 29 11:13:14.822738 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:13:14.835296 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:13:14.835310 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:13:14.835327 kernel: GPT:9289727 != 19775487
Jan 29 11:13:14.835337 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:13:14.835348 kernel: GPT:9289727 != 19775487
Jan 29 11:13:14.835357 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:13:14.835367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:14.835377 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:13:14.835550 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:13:14.835686 kernel: scsi host0: ahci
Jan 29 11:13:14.835845 kernel: scsi host1: ahci
Jan 29 11:13:14.835988 kernel: scsi host2: ahci
Jan 29 11:13:14.836130 kernel: scsi host3: ahci
Jan 29 11:13:14.836270 kernel: scsi host4: ahci
Jan 29 11:13:14.836416 kernel: scsi host5: ahci
Jan 29 11:13:14.836588 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 29 11:13:14.836604 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 29 11:13:14.836614 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 29 11:13:14.836624 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 29 11:13:14.836634 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 29 11:13:14.836644 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 29 11:13:14.811927 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:13:14.812085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:14.814024 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:13:14.818421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:13:14.818794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:14.823154 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:14.832667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:14.837405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:13:14.848463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:13:14.858664 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (471)
Jan 29 11:13:14.849089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:14.864460 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473)
Jan 29 11:13:14.864898 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:13:14.879178 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:13:14.898137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:13:14.904868 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:13:14.907820 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:13:14.929706 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:13:14.933378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:13:14.939811 disk-uuid[555]: Primary Header is updated.
Jan 29 11:13:14.939811 disk-uuid[555]: Secondary Entries is updated.
Jan 29 11:13:14.939811 disk-uuid[555]: Secondary Header is updated.
Jan 29 11:13:14.943383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:14.947504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:14.954184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:13:14.967615 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:13:14.988746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:13:15.148677 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:13:15.148734 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:13:15.149596 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:13:15.149706 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:13:15.151463 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:13:15.151488 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:13:15.152466 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:13:15.153699 kernel: ata3.00: applying bridge limits
Jan 29 11:13:15.153722 kernel: ata3.00: configured for UDMA/100
Jan 29 11:13:15.154457 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:13:15.207468 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:13:15.220223 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:13:15.220244 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:13:15.948460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:13:15.948520 disk-uuid[558]: The operation has completed successfully.
Jan 29 11:13:15.976723 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:13:15.976853 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:13:16.003608 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:13:16.008956 sh[597]: Success
Jan 29 11:13:16.020457 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:13:16.053366 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:13:16.072354 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:13:16.075267 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:13:16.085773 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44
Jan 29 11:13:16.085807 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:13:16.085822 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:13:16.086791 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:13:16.087542 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:13:16.093821 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:13:16.094263 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:13:16.104617 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:13:16.106009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:13:16.116525 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:13:16.116560 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:13:16.116575 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:13:16.120481 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:13:16.129929 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:13:16.131906 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:13:16.141058 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:13:16.149622 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:13:16.201071 ignition[690]: Ignition 2.20.0
Jan 29 11:13:16.201085 ignition[690]: Stage: fetch-offline
Jan 29 11:13:16.201128 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:13:16.201138 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:13:16.201222 ignition[690]: parsed url from cmdline: ""
Jan 29 11:13:16.201226 ignition[690]: no config URL provided
Jan 29 11:13:16.201231 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:13:16.201240 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:13:16.201270 ignition[690]: op(1): [started] loading QEMU firmware config module
Jan 29 11:13:16.201275 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:13:16.211404 ignition[690]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:13:16.243618 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:13:16.253390 ignition[690]: parsing config with SHA512: 30e405a1f00e93152a14194b391da75d665cf36e9d1504ec393ae6ee25bd27e6a82b4d74253680cc46a712c3b291c2939b98c942dd8d8aeb0870808ddcde4de6 Jan 29 11:13:16.254603 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:13:16.257652 unknown[690]: fetched base config from "system" Jan 29 11:13:16.258042 unknown[690]: fetched user config from "qemu" Jan 29 11:13:16.258491 ignition[690]: fetch-offline: fetch-offline passed Jan 29 11:13:16.258564 ignition[690]: Ignition finished successfully Jan 29 11:13:16.263609 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:13:16.277732 systemd-networkd[785]: lo: Link UP Jan 29 11:13:16.277742 systemd-networkd[785]: lo: Gained carrier Jan 29 11:13:16.279539 systemd-networkd[785]: Enumeration completed Jan 29 11:13:16.280011 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:13:16.280015 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:13:16.280919 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:13:16.281048 systemd-networkd[785]: eth0: Link UP Jan 29 11:13:16.281053 systemd-networkd[785]: eth0: Gained carrier Jan 29 11:13:16.281061 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:13:16.287297 systemd[1]: Reached target network.target - Network. Jan 29 11:13:16.289599 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:13:16.297503 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:13:16.297606 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:13:16.316294 ignition[788]: Ignition 2.20.0 Jan 29 11:13:16.316306 ignition[788]: Stage: kargs Jan 29 11:13:16.316484 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:13:16.316495 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:13:16.317349 ignition[788]: kargs: kargs passed Jan 29 11:13:16.317393 ignition[788]: Ignition finished successfully Jan 29 11:13:16.321287 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:13:16.332566 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:13:16.347018 ignition[798]: Ignition 2.20.0 Jan 29 11:13:16.347034 ignition[798]: Stage: disks Jan 29 11:13:16.347237 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:13:16.347252 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:13:16.348364 ignition[798]: disks: disks passed Jan 29 11:13:16.350670 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:13:16.348421 ignition[798]: Ignition finished successfully Jan 29 11:13:16.352645 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:13:16.354526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:13:16.356521 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:13:16.358617 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:13:16.360858 systemd[1]: Reached target basic.target - Basic System. 
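The fetch-offline stage found no config URL on the kernel command line and fell back to the QEMU firmware config device (the qemu_fw_cfg modprobe ops above). On this platform the config is handed to the VM at launch; an illustrative invocation, with image and config filenames as placeholders:

  qemu-system-x86_64 -m 2048 \
    -drive if=virtio,file=flatcar.img \
    -fw_cfg name=opt/com.coreos/config,file=config.ign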
Jan 29 11:13:16.380622 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:13:16.392983 systemd-resolved[226]: Detected conflict on linux IN A 10.0.0.47 Jan 29 11:13:16.392997 systemd-resolved[226]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jan 29 11:13:16.395557 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:13:16.401904 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:13:16.414604 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:13:16.498465 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:13:16.498795 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:13:16.500241 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:13:16.517517 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:13:16.519227 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:13:16.520368 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:13:16.520404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:13:16.527946 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) Jan 29 11:13:16.531330 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:13:16.531342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:13:16.531352 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:13:16.520423 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:13:16.534569 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:13:16.527116 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:13:16.532051 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:13:16.536785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:13:16.571230 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:13:16.575597 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:13:16.580221 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:13:16.585209 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:13:16.670608 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:13:16.681513 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:13:16.682753 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:13:16.693464 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:13:16.706931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
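systemd-fsck reports the ext4 ROOT filesystem clean before it is mounted at /sysroot; the initrd-setup-root "cut: ... No such file or directory" lines simply mean the baseline /etc files are being created from scratch on first boot. The same check can be repeated read-only by hand (device label from the log, invocation illustrative):

  # -n opens the filesystem read-only and answers "no" to every repair prompt
  e2fsck -n /dev/disk/by-label/ROOT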
Jan 29 11:13:16.715621 ignition[931]: INFO : Ignition 2.20.0 Jan 29 11:13:16.715621 ignition[931]: INFO : Stage: mount Jan 29 11:13:16.717373 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:13:16.717373 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:13:16.717373 ignition[931]: INFO : mount: mount passed Jan 29 11:13:16.717373 ignition[931]: INFO : Ignition finished successfully Jan 29 11:13:16.721733 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:13:16.734562 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:13:17.085232 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:13:17.101657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:13:17.109182 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944) Jan 29 11:13:17.109215 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:13:17.109229 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:13:17.110712 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:13:17.113474 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:13:17.114639 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:13:17.137744 ignition[961]: INFO : Ignition 2.20.0 Jan 29 11:13:17.137744 ignition[961]: INFO : Stage: files Jan 29 11:13:17.139530 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:13:17.139530 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:13:17.139530 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:13:17.142875 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:13:17.142875 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:13:17.147328 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:13:17.148755 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:13:17.150511 unknown[961]: wrote ssh authorized keys file for user: core Jan 29 11:13:17.151664 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:13:17.153787 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:13:17.155658 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:13:17.192573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:13:17.284166 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:13:17.286260 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:13:17.286260 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:13:17.798965 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:13:17.891157 ignition[961]: INFO : files: 
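Each createFiles op above maps to one storage.files entry in the Ignition config. A trimmed sketch of the entry behind op(3), assuming the common v3 schema (the URL is the one logged; /sysroot is just the initrd's view of what the config calls /opt):

  {
    "ignition": { "version": "3.4.0" },
    "storage": {
      "files": [
        {
          "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
        }
      ]
    }
  }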
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:13:17.894092 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:13:17.895975 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:13:17.897914 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:13:17.899937 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:13:17.901872 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:13:17.903852 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:13:17.905797 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:13:17.907786 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:13:17.909850 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:13:17.911710 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:13:17.913452 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:13:17.917602 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:13:17.920030 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:13:17.922141 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:13:17.981594 systemd-networkd[785]: eth0: Gained IPv6LL Jan 29 11:13:18.232336 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:13:18.589456 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:13:18.589456 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(e): 
op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:13:18.593639 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:13:18.615294 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:13:18.620052 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:13:18.621713 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:13:18.621713 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:13:18.621713 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:13:18.621713 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:13:18.621713 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:13:18.621713 ignition[961]: INFO : files: files passed Jan 29 11:13:18.621713 ignition[961]: INFO : Ignition finished successfully Jan 29 11:13:18.622810 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:13:18.634561 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:13:18.636419 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:13:18.638051 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:13:18.638157 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:13:18.644867 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:13:18.647131 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:13:18.648770 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:13:18.651484 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:13:18.649820 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:13:18.651893 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:13:18.669576 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:13:18.690782 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:13:18.690901 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:13:18.693095 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:13:18.693658 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:13:18.696345 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. 
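Ops (c) through (12) are the systemd section of the config being applied: unit files written out, then presets flipped (enablement symlinks removed for coreos-metadata.service, added for prepare-helm.service). A matching sketch, with the unit body hypothetical rather than recovered from this log:

  {
    "systemd": {
      "units": [
        {
          "name": "prepare-helm.service",
          "enabled": true,
          "contents": "[Service]\nType=oneshot\nExecStart=/opt/unpack-helm.sh\n\n[Install]\nWantedBy=multi-user.target\n"
        },
        { "name": "coreos-metadata.service", "enabled": false }
      ]
    }
  }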
Jan 29 11:13:18.699325 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:13:18.716911 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:13:18.735584 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:13:18.745128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:13:18.746465 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:13:18.748687 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:13:18.750709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:13:18.750836 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:13:18.752993 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:13:18.754762 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:13:18.756815 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:13:18.758895 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:13:18.760925 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:13:18.763107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:13:18.765247 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:13:18.767540 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:13:18.769605 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:13:18.771831 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:13:18.773644 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:13:18.773777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:13:18.775908 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:13:18.777544 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:13:18.779560 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:13:18.779699 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:13:18.781758 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:13:18.781885 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:13:18.783983 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:13:18.784106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:13:18.786125 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:13:18.787856 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:13:18.791505 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:13:18.793592 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:13:18.795597 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:13:18.797334 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:13:18.797466 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:13:18.799391 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:13:18.799528 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 11:13:18.801861 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:13:18.801989 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:13:18.803973 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:13:18.804078 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:13:18.815565 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:13:18.816487 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:13:18.816597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:13:18.819377 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:13:18.820457 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:13:18.820614 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:13:18.823064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:13:18.823357 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:13:18.829132 ignition[1015]: INFO : Ignition 2.20.0 Jan 29 11:13:18.829132 ignition[1015]: INFO : Stage: umount Jan 29 11:13:18.829132 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:13:18.829132 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:13:18.835468 ignition[1015]: INFO : umount: umount passed Jan 29 11:13:18.835468 ignition[1015]: INFO : Ignition finished successfully Jan 29 11:13:18.830373 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:13:18.830543 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:13:18.832603 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:13:18.832705 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:13:18.836067 systemd[1]: Stopped target network.target - Network. Jan 29 11:13:18.837198 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:13:18.837249 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:13:18.839097 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:13:18.839143 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:13:18.841026 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:13:18.841072 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:13:18.843219 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:13:18.843264 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:13:18.845567 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:13:18.847542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:13:18.850358 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:13:18.854468 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 29 11:13:18.856116 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:13:18.856260 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:13:18.857911 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:13:18.858024 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:13:18.861586 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 29 11:13:18.861660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:13:18.871664 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:13:18.872624 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:13:18.872682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:13:18.874886 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:13:18.874934 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:13:18.877147 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:13:18.877194 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:13:18.879832 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:13:18.879895 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:13:18.882221 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:13:18.895067 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:13:18.895257 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:13:18.903207 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:13:18.903461 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:13:18.904184 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:13:18.904238 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:13:18.906977 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:13:18.907020 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:13:18.909001 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:13:18.909056 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:13:18.912620 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:13:18.912668 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:13:18.915361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:13:18.915422 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:13:18.927638 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:13:18.928089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:13:18.928151 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:13:18.928497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:13:18.928542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:13:18.958045 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:13:18.958170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:13:19.052825 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:13:19.052964 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:13:19.053704 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:13:19.054051 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:13:19.054107 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 29 11:13:19.072610 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:13:19.080520 systemd[1]: Switching root. Jan 29 11:13:19.105949 systemd-journald[194]: Journal stopped Jan 29 11:13:20.292038 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 29 11:13:20.292103 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:13:20.292117 kernel: SELinux: policy capability open_perms=1 Jan 29 11:13:20.292128 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:13:20.292139 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:13:20.292150 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:13:20.292162 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:13:20.292177 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:13:20.292188 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:13:20.292203 kernel: audit: type=1403 audit(1738149199.548:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:13:20.292215 systemd[1]: Successfully loaded SELinux policy in 39.867ms. Jan 29 11:13:20.292243 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.471ms. Jan 29 11:13:20.292256 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:13:20.292268 systemd[1]: Detected virtualization kvm. Jan 29 11:13:20.292280 systemd[1]: Detected architecture x86-64. Jan 29 11:13:20.292295 systemd[1]: Detected first boot. Jan 29 11:13:20.292306 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:13:20.292318 zram_generator::config[1059]: No configuration found. Jan 29 11:13:20.292330 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:13:20.292342 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:13:20.292373 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:13:20.292386 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:13:20.292399 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:13:20.292411 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:13:20.292425 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:13:20.292449 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:13:20.292462 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:13:20.292474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:13:20.292486 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:13:20.292498 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:13:20.292510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:13:20.292523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:13:20.292537 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 29 11:13:20.292549 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:13:20.292562 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:13:20.292574 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:13:20.292586 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:13:20.292597 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:13:20.292609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:13:20.292623 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:13:20.292635 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:13:20.292649 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:13:20.292661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:13:20.292673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:13:20.292685 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:13:20.292696 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:13:20.292708 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:13:20.292720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:13:20.292733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:13:20.292747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:13:20.292758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:13:20.292770 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:13:20.292782 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:13:20.292793 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:13:20.292805 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:13:20.292818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:20.292832 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:13:20.292845 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:13:20.292860 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:13:20.292872 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:13:20.292885 systemd[1]: Reached target machines.target - Containers. Jan 29 11:13:20.292897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:13:20.292909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:13:20.292921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:13:20.292933 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:13:20.292944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:13:20.292958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 29 11:13:20.292970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:13:20.292982 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:13:20.292994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:13:20.293006 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:13:20.293021 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:13:20.293035 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:13:20.293049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:13:20.293060 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:13:20.293074 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:13:20.293086 kernel: loop: module loaded Jan 29 11:13:20.293097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:13:20.293108 kernel: fuse: init (API version 7.39) Jan 29 11:13:20.293120 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:13:20.293136 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:13:20.293150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:13:20.293162 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:13:20.293174 systemd[1]: Stopped verity-setup.service. Jan 29 11:13:20.293188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:20.293200 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:13:20.293230 systemd-journald[1129]: Collecting audit messages is disabled. Jan 29 11:13:20.293255 kernel: ACPI: bus type drm_connector registered Jan 29 11:13:20.293267 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:13:20.293279 systemd-journald[1129]: Journal started Jan 29 11:13:20.293300 systemd-journald[1129]: Runtime Journal (/run/log/journal/7d4e6c699d1e4c2fb59c1f9e09a76f53) is 6.0M, max 48.3M, 42.2M free. Jan 29 11:13:20.061580 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:13:20.078170 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:13:20.078644 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:13:20.296454 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:13:20.297554 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:13:20.298694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:13:20.300008 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:13:20.301345 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:13:20.302825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:13:20.304694 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:13:20.304927 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:13:20.306680 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:13:20.308308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
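The sizes in the systemd-journald line describe the runtime journal in /run; it stays RAM-backed until systemd-journal-flush (started below) moves it to persistent storage under /var/log/journal. Two standard commands for inspecting this on a booted system, shown for illustration:

  journalctl --disk-usage            # space used by runtime plus persistent journals
  journalctl -b -u systemd-journald  # this boot's messages from journald itself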
Jan 29 11:13:20.308895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:13:20.310508 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:13:20.310738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:13:20.312169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:13:20.312368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:13:20.314049 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:13:20.314230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:13:20.315684 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:13:20.315861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:13:20.317415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:13:20.319033 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:13:20.320691 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:13:20.335883 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:13:20.342590 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:13:20.345553 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:13:20.346989 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:13:20.347031 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:13:20.349550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:13:20.355143 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:13:20.358311 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:13:20.359845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:13:20.361401 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:13:20.364293 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:13:20.366579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:13:20.369779 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:13:20.371125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:13:20.372167 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:13:20.378688 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:13:20.380981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:13:20.384181 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:13:20.385857 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:13:20.390566 systemd-journald[1129]: Time spent on flushing to /var/log/journal/7d4e6c699d1e4c2fb59c1f9e09a76f53 is 26.477ms for 1049 entries. 
Jan 29 11:13:20.390566 systemd-journald[1129]: System Journal (/var/log/journal/7d4e6c699d1e4c2fb59c1f9e09a76f53) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:13:20.425846 systemd-journald[1129]: Received client request to flush runtime journal. Jan 29 11:13:20.425887 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:13:20.387827 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:13:20.403604 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:13:20.412824 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:13:20.414606 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:13:20.417024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:13:20.430630 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:13:20.432741 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:13:20.434702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:13:20.438463 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:13:20.440758 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:13:20.454104 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:13:20.461806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:13:20.464265 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:13:20.465809 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:13:20.466457 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 11:13:20.487368 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 29 11:13:20.487388 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 29 11:13:20.493062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:13:20.505666 kernel: loop2: detected capacity change from 0 to 140992 Jan 29 11:13:20.540465 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 11:13:20.552467 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 11:13:20.561460 kernel: loop5: detected capacity change from 0 to 140992 Jan 29 11:13:20.570620 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:13:20.571182 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 29 11:13:20.575330 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:13:20.575356 systemd[1]: Reloading... Jan 29 11:13:20.630479 zram_generator::config[1222]: No configuration found. Jan 29 11:13:20.703064 ldconfig[1168]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:13:20.764081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:13:20.823109 systemd[1]: Reloading finished in 247 ms. Jan 29 11:13:20.881289 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
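The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what triggers the unit reload that follows. Standard systemd-sysext usage for inspecting the same state on a running machine:

  systemd-sysext list     # show discovered extension images
  systemd-sysext unmerge  # drop the overlay
  systemd-sysext merge    # re-apply it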
Jan 29 11:13:20.883062 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:13:20.900757 systemd[1]: Starting ensure-sysext.service... Jan 29 11:13:20.903121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:13:20.910426 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:13:20.910473 systemd[1]: Reloading... Jan 29 11:13:20.952277 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:13:20.952714 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:13:20.953844 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:13:20.954182 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 29 11:13:20.954259 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 29 11:13:20.958584 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:13:20.958689 systemd-tmpfiles[1262]: Skipping /boot Jan 29 11:13:20.963526 zram_generator::config[1288]: No configuration found. Jan 29 11:13:20.970805 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:13:20.970943 systemd-tmpfiles[1262]: Skipping /boot Jan 29 11:13:21.077085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:13:21.133812 systemd[1]: Reloading finished in 222 ms. Jan 29 11:13:21.153859 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:13:21.166880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:13:21.173853 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:13:21.176160 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:13:21.178468 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:13:21.182713 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:13:21.188566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:13:21.192125 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:13:21.208680 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:13:21.211354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:21.211528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:13:21.216732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:13:21.216984 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jan 29 11:13:21.219773 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:13:21.223739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
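The systemd-tmpfiles "Duplicate line" warnings mean two tmpfiles.d files declare the same path; one copy is applied and the rest are ignored, so they are harmless. For reference, a tmpfiles.d line for one of the flagged paths looks like this (contents illustrative, not recovered from the log):

  # /usr/lib/tmpfiles.d/provision.conf
  d /root 0700 root root -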
Jan 29 11:13:21.225701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:13:21.225817 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:21.226772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:13:21.238868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:13:21.239089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:13:21.242060 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:13:21.242222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:13:21.244039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:13:21.244226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:13:21.246399 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:13:21.249286 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:13:21.261831 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:13:21.263258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:13:21.264601 augenrules[1372]: No rules Jan 29 11:13:21.263428 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:13:21.265216 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:13:21.267172 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:13:21.267403 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:13:21.269158 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:13:21.277969 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:13:21.286871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:21.287037 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:13:21.295758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:13:21.299803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:13:21.304631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:13:21.305909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:13:21.306014 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:13:21.306083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:21.307111 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:13:21.317602 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 29 11:13:21.318668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:21.321637 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:13:21.322757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:13:21.327727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:13:21.330669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:13:21.330801 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:13:21.330878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:13:21.331799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:13:21.331992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:13:21.333683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:13:21.333874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:13:21.336376 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:13:21.336570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:13:21.347908 systemd[1]: Finished ensure-sysext.service. Jan 29 11:13:21.351753 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1388) Jan 29 11:13:21.351390 systemd-resolved[1330]: Positive Trust Anchors: Jan 29 11:13:21.351417 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:13:21.351463 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:13:21.351902 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:13:21.352139 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:13:21.355512 augenrules[1399]: /sbin/augenrules: No change Jan 29 11:13:21.356906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:13:21.356979 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:13:21.361899 systemd-resolved[1330]: Defaulting to hostname 'linux'. Jan 29 11:13:21.368650 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:13:21.370669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:13:21.374111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
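systemd-resolved logs its built-in DNSSEC trust anchor for the root zone plus the negative trust anchors for private ranges, then defaults the published hostname to 'linux'. Its state can be inspected later with resolvectl (standard usage, query target illustrative):

  resolvectl status              # per-link DNS servers and DNSSEC mode
  resolvectl query flatcar.org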
Jan 29 11:13:21.375654 augenrules[1425]: No rules Jan 29 11:13:21.377784 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:13:21.378042 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:13:21.387931 systemd-networkd[1377]: lo: Link UP Jan 29 11:13:21.387944 systemd-networkd[1377]: lo: Gained carrier Jan 29 11:13:21.390084 systemd-networkd[1377]: Enumeration completed Jan 29 11:13:21.390295 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:13:21.391357 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:13:21.391368 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:13:21.391797 systemd[1]: Reached target network.target - Network. Jan 29 11:13:21.392659 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:13:21.392693 systemd-networkd[1377]: eth0: Link UP Jan 29 11:13:21.392696 systemd-networkd[1377]: eth0: Gained carrier Jan 29 11:13:21.392706 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:13:21.398645 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:13:21.404474 systemd-networkd[1377]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:13:21.421566 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:13:21.427452 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:13:21.435206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:13:21.446667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:13:21.453385 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:13:23.013506 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 29 11:13:23.013745 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:13:23.013912 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:13:23.014118 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:13:23.012490 systemd-resolved[1330]: Clock change detected. Flushing caches. Jan 29 11:13:23.013635 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:13:23.013700 systemd-timesyncd[1423]: Initial clock synchronization to Wed 2025-01-29 11:13:23.012430 UTC. Jan 29 11:13:23.015748 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:13:23.025027 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:13:23.025665 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:13:23.044033 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:13:23.053188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:13:23.104537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
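eth0 was configured by the stock zz-default.network, which applies DHCP to any otherwise unmatched interface (hence the "potentially unpredictable interface name" note). A minimal unit that pins the same behaviour to the interface name instead, as a sketch (file name arbitrary):

  # /etc/systemd/network/10-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=yes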
Jan 29 11:13:23.147059 kernel: kvm_amd: TSC scaling supported Jan 29 11:13:23.147124 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:13:23.147138 kernel: kvm_amd: Nested Paging enabled Jan 29 11:13:23.147151 kernel: kvm_amd: LBR virtualization supported Jan 29 11:13:23.148126 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:13:23.148142 kernel: kvm_amd: Virtual GIF supported Jan 29 11:13:23.167033 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:13:23.193487 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:13:23.206192 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:13:23.214376 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:13:23.248448 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:13:23.250760 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:13:23.251925 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:13:23.253125 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:13:23.254401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:13:23.255906 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:13:23.257148 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:13:23.258423 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:13:23.259697 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:13:23.259722 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:13:23.260644 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:13:23.262405 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:13:23.265264 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:13:23.275444 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:13:23.278094 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:13:23.279949 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:13:23.281228 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:13:23.282270 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:13:23.283303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:13:23.283332 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:13:23.284604 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:13:23.287046 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:13:23.287614 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:13:23.291343 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:13:23.295180 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
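The kvm_amd lines confirm that nested virtualization is enabled on this host. That can be verified at runtime through the module parameter:

# '1' (or 'Y') means nested guests are allowed under kvm_amd.
cat /sys/module/kvm_amd/parameters/nested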
Jan 29 11:13:23.296485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:13:23.299656 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:13:23.301965 jq[1460]: false Jan 29 11:13:23.302569 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:13:23.307165 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:13:23.314105 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:13:23.320644 dbus-daemon[1459]: [system] SELinux support is enabled Jan 29 11:13:23.322647 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:13:23.324823 extend-filesystems[1461]: Found loop3 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found loop4 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found loop5 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found sr0 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda1 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda2 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda3 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found usr Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda4 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda6 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda7 Jan 29 11:13:23.324823 extend-filesystems[1461]: Found vda9 Jan 29 11:13:23.324823 extend-filesystems[1461]: Checking size of /dev/vda9 Jan 29 11:13:23.324275 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:13:23.359083 extend-filesystems[1461]: Resized partition /dev/vda9 Jan 29 11:13:23.324807 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:13:23.329371 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:13:23.365326 jq[1477]: true Jan 29 11:13:23.332763 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:13:23.339371 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:13:23.342647 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:13:23.345095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:13:23.345318 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:13:23.345650 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:13:23.345841 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:13:23.349558 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:13:23.350474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
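extend-filesystems enumerated every block device (loop3..loop5, sr0, vda1..vda9) before deciding that /dev/vda9 needs growing. For reference, the same picture comes from lsblk:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/vda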
Jan 29 11:13:23.375125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1379) Jan 29 11:13:23.372933 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:13:23.375408 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:13:23.380130 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:13:23.380233 tar[1483]: linux-amd64/helm Jan 29 11:13:23.383565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:13:23.383599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:13:23.384957 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:13:23.384973 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:13:23.389132 update_engine[1474]: I20250129 11:13:23.388841 1474 main.cc:92] Flatcar Update Engine starting Jan 29 11:13:23.390114 jq[1486]: true Jan 29 11:13:23.397024 update_engine[1474]: I20250129 11:13:23.395202 1474 update_check_scheduler.cc:74] Next update check in 6m24s Jan 29 11:13:23.401140 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:13:23.401411 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:13:23.402481 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:13:23.402703 systemd-logind[1472]: New seat seat0. Jan 29 11:13:23.406668 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:13:23.415988 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:13:23.424269 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:13:23.443254 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:13:23.443254 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:13:23.443254 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:13:23.446521 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:13:23.446107 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:13:23.446667 extend-filesystems[1461]: Resized filesystem in /dev/vda9 Jan 29 11:13:23.446333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:13:23.456025 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:13:23.457183 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:13:23.460969 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:13:23.460974 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:13:23.470180 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:13:23.484442 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:13:23.493382 systemd[1]: issuegen.service: Deactivated successfully. 
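The resize2fs output above records an online grow of the mounted root filesystem from 553472 to 1864699 4k blocks. Roughly what the extend-filesystems unit amounts to, sketched by hand (growpart assumes cloud-utils is present; device numbers follow the log):

growpart /dev/vda 9    # grow the GPT partition to fill the disk
resize2fs /dev/vda9    # grow the mounted ext4 filesystem online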
Jan 29 11:13:23.493704 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:13:23.497457 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:13:23.512264 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:13:23.519641 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:13:23.521923 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:13:23.523772 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:13:23.589344 containerd[1491]: time="2025-01-29T11:13:23.589146550Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:13:23.612814 containerd[1491]: time="2025-01-29T11:13:23.612740817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:13:23.614792 containerd[1491]: time="2025-01-29T11:13:23.614749384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:13:23.614792 containerd[1491]: time="2025-01-29T11:13:23.614781254Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:13:23.614866 containerd[1491]: time="2025-01-29T11:13:23.614797755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:13:23.615045 containerd[1491]: time="2025-01-29T11:13:23.615024230Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:13:23.615074 containerd[1491]: time="2025-01-29T11:13:23.615045640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615143 containerd[1491]: time="2025-01-29T11:13:23.615117665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615143 containerd[1491]: time="2025-01-29T11:13:23.615134386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615358 containerd[1491]: time="2025-01-29T11:13:23.615330203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615358 containerd[1491]: time="2025-01-29T11:13:23.615348588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615402 containerd[1491]: time="2025-01-29T11:13:23.615362474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615402 containerd[1491]: time="2025-01-29T11:13:23.615372162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615510 containerd[1491]: time="2025-01-29T11:13:23.615464145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615717 containerd[1491]: time="2025-01-29T11:13:23.615697412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615835 containerd[1491]: time="2025-01-29T11:13:23.615816676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:13:23.615835 containerd[1491]: time="2025-01-29T11:13:23.615831513Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:13:23.615975 containerd[1491]: time="2025-01-29T11:13:23.615952340Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:13:23.616066 containerd[1491]: time="2025-01-29T11:13:23.616044333Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:13:23.620926 containerd[1491]: time="2025-01-29T11:13:23.620889188Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:13:23.620979 containerd[1491]: time="2025-01-29T11:13:23.620958959Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:13:23.621000 containerd[1491]: time="2025-01-29T11:13:23.620981952Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:13:23.621043 containerd[1491]: time="2025-01-29T11:13:23.621022198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:13:23.621043 containerd[1491]: time="2025-01-29T11:13:23.621038649Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:13:23.621235 containerd[1491]: time="2025-01-29T11:13:23.621209409Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:13:23.621570 containerd[1491]: time="2025-01-29T11:13:23.621512938Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:13:23.621712 containerd[1491]: time="2025-01-29T11:13:23.621685933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:13:23.621712 containerd[1491]: time="2025-01-29T11:13:23.621707243Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:13:23.621787 containerd[1491]: time="2025-01-29T11:13:23.621722872Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:13:23.621787 containerd[1491]: time="2025-01-29T11:13:23.621737830Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621787 containerd[1491]: time="2025-01-29T11:13:23.621753630Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621787 containerd[1491]: time="2025-01-29T11:13:23.621767886Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 29 11:13:23.621787 containerd[1491]: time="2025-01-29T11:13:23.621784167Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621799506Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621813993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621826105Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621836545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621856853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621869757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621891147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621904603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621916345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.621924 containerd[1491]: time="2025-01-29T11:13:23.621929259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.621941692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.621955518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.621968963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.621983220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.621994101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622028305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622040928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622060004Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622080082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622094108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622104958Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622150203Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622167456Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:13:23.622210 containerd[1491]: time="2025-01-29T11:13:23.622176994Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:13:23.622554 containerd[1491]: time="2025-01-29T11:13:23.622187684Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:13:23.622554 containerd[1491]: time="2025-01-29T11:13:23.622196370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:13:23.622554 containerd[1491]: time="2025-01-29T11:13:23.622207651Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:13:23.622554 containerd[1491]: time="2025-01-29T11:13:23.622219092Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:13:23.622554 containerd[1491]: time="2025-01-29T11:13:23.622230013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:13:23.622690 containerd[1491]: time="2025-01-29T11:13:23.622483589Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:13:23.622690 containerd[1491]: time="2025-01-29T11:13:23.622521880Z" level=info msg="Connect containerd service" Jan 29 11:13:23.622690 containerd[1491]: time="2025-01-29T11:13:23.622552478Z" level=info msg="using legacy CRI server" Jan 29 11:13:23.622690 containerd[1491]: time="2025-01-29T11:13:23.622559280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:13:23.622690 containerd[1491]: time="2025-01-29T11:13:23.622682562Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.623229387Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:13:23.624529 
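The CRI config dump above implies a containerd config.toml roughly like the fragment below: runc as the default runtime with SystemdCgroup=true, pause:3.8 as the sandbox image, and CNI config expected under /etc/cni/net.d (still empty here, hence the cni load error). A sketch of the relevant keys, not this host's actual file:

cat <<'EOF' >> /etc/containerd/config.toml   # appended here only for illustration
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
EOF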
containerd[1491]: time="2025-01-29T11:13:23.623886880Z" level=info msg="Start subscribing containerd event" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.623932365Z" level=info msg="Start recovering state" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624000744Z" level=info msg="Start event monitor" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624047622Z" level=info msg="Start snapshots syncer" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624058161Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624070695Z" level=info msg="Start streaming server" Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624154091Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624244110Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:13:23.624529 containerd[1491]: time="2025-01-29T11:13:23.624320834Z" level=info msg="containerd successfully booted in 0.036918s" Jan 29 11:13:23.624451 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:13:23.797393 tar[1483]: linux-amd64/LICENSE Jan 29 11:13:23.797483 tar[1483]: linux-amd64/README.md Jan 29 11:13:23.807925 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:13:23.810308 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:58534.service - OpenSSH per-connection server daemon (10.0.0.1:58534). Jan 29 11:13:23.817947 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:13:23.867963 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 58534 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:23.870292 sshd-session[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:23.881102 systemd-logind[1472]: New session 1 of user core. Jan 29 11:13:23.882835 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:13:23.900394 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:13:23.914401 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:13:23.930232 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:13:23.934022 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:13:24.033727 systemd[1557]: Queued start job for default target default.target. Jan 29 11:13:24.045221 systemd[1557]: Created slice app.slice - User Application Slice. Jan 29 11:13:24.045246 systemd[1557]: Reached target paths.target - Paths. Jan 29 11:13:24.045259 systemd[1557]: Reached target timers.target - Timers. Jan 29 11:13:24.046706 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:13:24.058908 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:13:24.059106 systemd[1557]: Reached target sockets.target - Sockets. Jan 29 11:13:24.059134 systemd[1557]: Reached target basic.target - Basic System. Jan 29 11:13:24.059186 systemd[1557]: Reached target default.target - Main User Target. Jan 29 11:13:24.059235 systemd[1557]: Startup finished in 118ms. Jan 29 11:13:24.059423 systemd[1]: Started user@500.service - User Manager for UID 500. 
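The SSH login above created session-1.scope plus a dedicated per-user manager (user@500.service) for core. Both are visible through loginctl:

loginctl list-sessions     # active sessions with their seats and TTYs
loginctl user-status core  # the user's service manager and session scopes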
Jan 29 11:13:24.076146 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:13:24.139721 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:58548.service - OpenSSH per-connection server daemon (10.0.0.1:58548). Jan 29 11:13:24.182314 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 58548 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:24.184212 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:24.188631 systemd-logind[1472]: New session 2 of user core. Jan 29 11:13:24.206152 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:13:24.260518 sshd[1570]: Connection closed by 10.0.0.1 port 58548 Jan 29 11:13:24.260908 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:24.276640 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:58548.service: Deactivated successfully. Jan 29 11:13:24.278480 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:13:24.280100 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:13:24.281702 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:58558.service - OpenSSH per-connection server daemon (10.0.0.1:58558). Jan 29 11:13:24.284053 systemd-logind[1472]: Removed session 2. Jan 29 11:13:24.323202 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 58558 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:24.324936 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:24.329273 systemd-logind[1472]: New session 3 of user core. Jan 29 11:13:24.343138 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:13:24.397535 sshd[1577]: Connection closed by 10.0.0.1 port 58558 Jan 29 11:13:24.397778 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:24.401304 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:58558.service: Deactivated successfully. Jan 29 11:13:24.403125 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:13:24.403657 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:13:24.404414 systemd-logind[1472]: Removed session 3. Jan 29 11:13:24.722152 systemd-networkd[1377]: eth0: Gained IPv6LL Jan 29 11:13:24.725221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:13:24.727002 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:13:24.736302 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:13:24.738813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:24.741257 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:13:24.761295 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:13:24.761954 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:13:24.763689 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:13:24.765839 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:13:25.371302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:25.372983 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:13:25.374346 systemd[1]: Startup finished in 730ms (kernel) + 5.849s (initrd) + 4.308s (userspace) = 10.887s. 
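The boot-time summary closing that block is the same figure systemd-analyze reports after boot:

systemd-analyze         # kernel + initrd + userspace breakdown
systemd-analyze blame   # slowest units first, for chasing the 4.3s of userspace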
Jan 29 11:13:25.376429 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:13:25.769235 kubelet[1603]: E0129 11:13:25.768881 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:13:25.772913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:13:25.773126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:13:34.408325 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:51462.service - OpenSSH per-connection server daemon (10.0.0.1:51462). Jan 29 11:13:34.450177 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 51462 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:34.451803 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:34.455800 systemd-logind[1472]: New session 4 of user core. Jan 29 11:13:34.468199 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:13:34.520977 sshd[1618]: Connection closed by 10.0.0.1 port 51462 Jan 29 11:13:34.521257 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:34.538056 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:51462.service: Deactivated successfully. Jan 29 11:13:34.540342 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:13:34.542170 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:13:34.553340 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:51464.service - OpenSSH per-connection server daemon (10.0.0.1:51464). Jan 29 11:13:34.554661 systemd-logind[1472]: Removed session 4. Jan 29 11:13:34.588900 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:34.590081 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:34.594468 systemd-logind[1472]: New session 5 of user core. Jan 29 11:13:34.608118 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:13:34.657051 sshd[1625]: Connection closed by 10.0.0.1 port 51464 Jan 29 11:13:34.657361 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:34.673037 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:51464.service: Deactivated successfully. Jan 29 11:13:34.674700 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:13:34.676210 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:13:34.685311 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:51480.service - OpenSSH per-connection server daemon (10.0.0.1:51480). Jan 29 11:13:34.686098 systemd-logind[1472]: Removed session 5. Jan 29 11:13:34.726075 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 51480 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:34.727477 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:34.731697 systemd-logind[1472]: New session 6 of user core. Jan 29 11:13:34.741132 systemd[1]: Started session-6.scope - Session 6 of User core. 
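kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init/join, which has not run on this node. For reference, a minimal hand-written KubeletConfiguration that would clear the file-not-found error (values illustrative, not a working cluster config):

cat <<'EOF' > /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF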
Jan 29 11:13:34.794293 sshd[1632]: Connection closed by 10.0.0.1 port 51480 Jan 29 11:13:34.794753 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:34.806507 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:51480.service: Deactivated successfully. Jan 29 11:13:34.807907 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:13:34.809283 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:13:34.816215 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:58576.service - OpenSSH per-connection server daemon (10.0.0.1:58576). Jan 29 11:13:34.817074 systemd-logind[1472]: Removed session 6. Jan 29 11:13:34.851501 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 58576 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:34.852843 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:34.856540 systemd-logind[1472]: New session 7 of user core. Jan 29 11:13:34.866118 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:13:34.923520 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:13:34.923861 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:34.939296 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:34.940831 sshd[1639]: Connection closed by 10.0.0.1 port 58576 Jan 29 11:13:34.941201 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:34.952792 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:58576.service: Deactivated successfully. Jan 29 11:13:34.954700 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:13:34.956561 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:13:34.958163 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Jan 29 11:13:34.958987 systemd-logind[1472]: Removed session 7. Jan 29 11:13:35.001106 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:35.003117 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:35.009961 systemd-logind[1472]: New session 8 of user core. Jan 29 11:13:35.020075 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:13:35.074877 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:13:35.075231 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:35.078875 sudo[1649]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:35.085448 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:13:35.085777 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:35.109249 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:13:35.137829 augenrules[1671]: No rules Jan 29 11:13:35.139547 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:13:35.139788 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
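The sudo session removed the default files under /etc/audit/rules.d/, so the reload that follows reports "No rules". Restoring a rule set is a matter of dropping a file back in and re-running augenrules (the rule below is illustrative):

echo '-w /etc/passwd -p wa -k identity' > /etc/audit/rules.d/10-identity.rules
augenrules --load   # merge rules.d/ into /etc/audit/audit.rules
auditctl -l         # list the rules now loaded in the kernel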
Jan 29 11:13:35.141104 sudo[1648]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:35.142503 sshd[1647]: Connection closed by 10.0.0.1 port 58580 Jan 29 11:13:35.142857 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:35.154813 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:58580.service: Deactivated successfully. Jan 29 11:13:35.156518 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:13:35.158095 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:13:35.159343 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:58582.service - OpenSSH per-connection server daemon (10.0.0.1:58582). Jan 29 11:13:35.159996 systemd-logind[1472]: Removed session 8. Jan 29 11:13:35.198662 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 58582 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:13:35.199998 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:35.203657 systemd-logind[1472]: New session 9 of user core. Jan 29 11:13:35.214119 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:13:35.266691 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:13:35.267041 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:13:35.537213 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:13:35.537408 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:13:35.783730 dockerd[1702]: time="2025-01-29T11:13:35.783664604Z" level=info msg="Starting up" Jan 29 11:13:35.787656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:13:35.793568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:36.054224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:36.058375 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:13:36.097147 kubelet[1734]: E0129 11:13:36.097093 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:13:36.103530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:13:36.103745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:13:36.233567 dockerd[1702]: time="2025-01-29T11:13:36.233501459Z" level=info msg="Loading containers: start." Jan 29 11:13:36.406036 kernel: Initializing XFRM netlink socket Jan 29 11:13:36.491049 systemd-networkd[1377]: docker0: Link UP Jan 29 11:13:36.528783 dockerd[1702]: time="2025-01-29T11:13:36.528721542Z" level=info msg="Loading containers: done." Jan 29 11:13:36.543733 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck173246905-merged.mount: Deactivated successfully. 
Jan 29 11:13:36.546487 dockerd[1702]: time="2025-01-29T11:13:36.546443497Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:13:36.546583 dockerd[1702]: time="2025-01-29T11:13:36.546549535Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:13:36.546689 dockerd[1702]: time="2025-01-29T11:13:36.546664631Z" level=info msg="Daemon has completed initialization" Jan 29 11:13:36.585902 dockerd[1702]: time="2025-01-29T11:13:36.585826674Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:13:36.586099 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:13:37.218247 containerd[1491]: time="2025-01-29T11:13:37.218208846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:13:37.814325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942415974.mount: Deactivated successfully. Jan 29 11:13:38.705229 containerd[1491]: time="2025-01-29T11:13:38.705170781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:38.705823 containerd[1491]: time="2025-01-29T11:13:38.705784632Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:13:38.707067 containerd[1491]: time="2025-01-29T11:13:38.707030308Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:38.709787 containerd[1491]: time="2025-01-29T11:13:38.709735722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:38.710786 containerd[1491]: time="2025-01-29T11:13:38.710740857Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.492494039s" Jan 29 11:13:38.710830 containerd[1491]: time="2025-01-29T11:13:38.710788486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:13:38.712140 containerd[1491]: time="2025-01-29T11:13:38.712112579Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:13:40.046798 containerd[1491]: time="2025-01-29T11:13:40.046746113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:40.047561 containerd[1491]: time="2025-01-29T11:13:40.047498214Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:13:40.048545 containerd[1491]: time="2025-01-29T11:13:40.048511514Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" 
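dockerd settled on the overlay2 storage driver and warned that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR (harmless outside of image builds). Pinning the driver explicitly is a one-line daemon.json, sketched here:

cat <<'EOF' > /etc/docker/daemon.json
{ "storage-driver": "overlay2" }
EOF
systemctl restart docker
docker info --format '{{.Driver}}'   # expect: overlay2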
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:40.051065 containerd[1491]: time="2025-01-29T11:13:40.051026290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:40.052133 containerd[1491]: time="2025-01-29T11:13:40.052098581Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.339949944s" Jan 29 11:13:40.052133 containerd[1491]: time="2025-01-29T11:13:40.052124380Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:13:40.052994 containerd[1491]: time="2025-01-29T11:13:40.052962101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:13:41.863756 containerd[1491]: time="2025-01-29T11:13:41.863685052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:41.864881 containerd[1491]: time="2025-01-29T11:13:41.864822726Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:13:41.866169 containerd[1491]: time="2025-01-29T11:13:41.866134015Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:41.869065 containerd[1491]: time="2025-01-29T11:13:41.869034475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:41.870066 containerd[1491]: time="2025-01-29T11:13:41.870026405Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.817034508s" Jan 29 11:13:41.870124 containerd[1491]: time="2025-01-29T11:13:41.870067792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:13:41.870603 containerd[1491]: time="2025-01-29T11:13:41.870573080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:13:42.851526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397989420.mount: Deactivated successfully. 
Jan 29 11:13:43.474000 containerd[1491]: time="2025-01-29T11:13:43.473943502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:43.474960 containerd[1491]: time="2025-01-29T11:13:43.474921726Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:13:43.476291 containerd[1491]: time="2025-01-29T11:13:43.476235250Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:43.478255 containerd[1491]: time="2025-01-29T11:13:43.478226845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:43.478935 containerd[1491]: time="2025-01-29T11:13:43.478899256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.608291701s" Jan 29 11:13:43.478962 containerd[1491]: time="2025-01-29T11:13:43.478936055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:13:43.479422 containerd[1491]: time="2025-01-29T11:13:43.479388193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:13:43.996577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231201040.mount: Deactivated successfully. 
Jan 29 11:13:44.680122 containerd[1491]: time="2025-01-29T11:13:44.680054561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:44.681242 containerd[1491]: time="2025-01-29T11:13:44.681200700Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:13:44.682843 containerd[1491]: time="2025-01-29T11:13:44.682780924Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:44.685953 containerd[1491]: time="2025-01-29T11:13:44.685907477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:44.686979 containerd[1491]: time="2025-01-29T11:13:44.686931217Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.207511826s" Jan 29 11:13:44.686979 containerd[1491]: time="2025-01-29T11:13:44.686977183Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:13:44.687519 containerd[1491]: time="2025-01-29T11:13:44.687485536Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:13:45.646033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935277274.mount: Deactivated successfully. 
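The PullImage/Pulled lines come from containerd's CRI plugin. The same pulls can be driven by hand with crictl, assuming it is installed and pointed at containerd's socket:

crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
    pull registry.k8s.io/pause:3.10
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images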
Jan 29 11:13:45.652785 containerd[1491]: time="2025-01-29T11:13:45.652750673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:45.653486 containerd[1491]: time="2025-01-29T11:13:45.653435718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:13:45.654481 containerd[1491]: time="2025-01-29T11:13:45.654444990Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:45.656534 containerd[1491]: time="2025-01-29T11:13:45.656499383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:45.657113 containerd[1491]: time="2025-01-29T11:13:45.657078529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 969.56483ms" Jan 29 11:13:45.657113 containerd[1491]: time="2025-01-29T11:13:45.657104137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:13:45.657603 containerd[1491]: time="2025-01-29T11:13:45.657579168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:13:46.183247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:13:46.192295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:46.193854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417966042.mount: Deactivated successfully. Jan 29 11:13:46.350199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:46.354337 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:13:46.489706 kubelet[2052]: E0129 11:13:46.489350 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:13:46.492506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:13:46.492693 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:13:48.327122 containerd[1491]: time="2025-01-29T11:13:48.327054127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:48.327743 containerd[1491]: time="2025-01-29T11:13:48.327715597Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:13:48.328912 containerd[1491]: time="2025-01-29T11:13:48.328855194Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:48.331705 containerd[1491]: time="2025-01-29T11:13:48.331684480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:48.332915 containerd[1491]: time="2025-01-29T11:13:48.332884070Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.675279124s" Jan 29 11:13:48.332915 containerd[1491]: time="2025-01-29T11:13:48.332912704Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:13:50.647298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:50.659210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:50.682698 systemd[1]: Reloading requested from client PID 2136 ('systemctl') (unit session-9.scope)... Jan 29 11:13:50.682714 systemd[1]: Reloading... Jan 29 11:13:50.757050 zram_generator::config[2178]: No configuration found. Jan 29 11:13:50.976815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:13:51.052802 systemd[1]: Reloading finished in 369 ms. Jan 29 11:13:51.100239 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:13:51.100334 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:13:51.100625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:51.103215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:51.250184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:51.254927 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:13:51.293094 kubelet[2224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:13:51.293094 kubelet[2224]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
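During the reload, systemd noticed docker.socket still listening below the legacy /var/run/ path and rewrote it on the fly. A drop-in makes the fix permanent (a sketch; the path is taken from the warning above):

mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload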
Jan 29 11:13:51.293094 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:13:51.293363 kubelet[2224]: I0129 11:13:51.293166 2224 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:13:51.604995 kubelet[2224]: I0129 11:13:51.604949 2224 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:13:51.604995 kubelet[2224]: I0129 11:13:51.604988 2224 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:13:51.605324 kubelet[2224]: I0129 11:13:51.605301 2224 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:13:51.623501 kubelet[2224]: I0129 11:13:51.623464 2224 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:13:51.623695 kubelet[2224]: E0129 11:13:51.623650 2224 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:51.630960 kubelet[2224]: E0129 11:13:51.630917 2224 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:13:51.630960 kubelet[2224]: I0129 11:13:51.630951 2224 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:13:51.639167 kubelet[2224]: I0129 11:13:51.639131 2224 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:13:51.640049 kubelet[2224]: I0129 11:13:51.640022 2224 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:13:51.640232 kubelet[2224]: I0129 11:13:51.640191 2224 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:13:51.640404 kubelet[2224]: I0129 11:13:51.640221 2224 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:13:51.640404 kubelet[2224]: I0129 11:13:51.640399 2224 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:13:51.640520 kubelet[2224]: I0129 11:13:51.640408 2224 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:13:51.640544 kubelet[2224]: I0129 11:13:51.640529 2224 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:13:51.641841 kubelet[2224]: I0129 11:13:51.641814 2224 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:13:51.641841 kubelet[2224]: I0129 11:13:51.641836 2224 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:13:51.641902 kubelet[2224]: I0129 11:13:51.641873 2224 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:13:51.641902 kubelet[2224]: I0129 11:13:51.641888 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:13:51.646066 kubelet[2224]: W0129 11:13:51.645995 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:51.646129 kubelet[2224]: E0129 11:13:51.646067 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:51.647514 kubelet[2224]: W0129 11:13:51.647434 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:51.647514 kubelet[2224]: E0129 11:13:51.647489 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:51.648264 kubelet[2224]: I0129 11:13:51.648232 2224 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:13:51.649985 kubelet[2224]: I0129 11:13:51.649957 2224 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:13:51.650444 kubelet[2224]: W0129 11:13:51.650422 2224 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:13:51.651022 kubelet[2224]: I0129 11:13:51.650996 2224 server.go:1269] "Started kubelet" Jan 29 11:13:51.651615 kubelet[2224]: I0129 11:13:51.651354 2224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:13:51.651615 kubelet[2224]: I0129 11:13:51.651536 2224 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:13:51.651775 kubelet[2224]: I0129 11:13:51.651738 2224 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:13:51.652909 kubelet[2224]: I0129 11:13:51.652367 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:13:51.652909 kubelet[2224]: I0129 11:13:51.652420 2224 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:13:51.653284 kubelet[2224]: I0129 11:13:51.653241 2224 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:13:51.653284 kubelet[2224]: I0129 11:13:51.653279 2224 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:13:51.653987 kubelet[2224]: I0129 11:13:51.653369 2224 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:13:51.653987 kubelet[2224]: I0129 11:13:51.653411 2224 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:13:51.656421 kubelet[2224]: W0129 11:13:51.656388 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:51.656463 kubelet[2224]: E0129 11:13:51.656433 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" 
logger="UnhandledError" Jan 29 11:13:51.656623 kubelet[2224]: I0129 11:13:51.656588 2224 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:13:51.656672 kubelet[2224]: I0129 11:13:51.656644 2224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:13:51.657433 kubelet[2224]: E0129 11:13:51.655127 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25845f2ca6bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:13:51.650973372 +0000 UTC m=+0.392475753,LastTimestamp:2025-01-29 11:13:51.650973372 +0000 UTC m=+0.392475753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:13:51.657433 kubelet[2224]: E0129 11:13:51.657409 2224 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:13:51.657531 kubelet[2224]: E0129 11:13:51.657484 2224 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:13:51.657859 kubelet[2224]: E0129 11:13:51.657831 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms" Jan 29 11:13:51.659877 kubelet[2224]: I0129 11:13:51.659153 2224 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:13:51.673227 kubelet[2224]: I0129 11:13:51.673200 2224 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:13:51.673227 kubelet[2224]: I0129 11:13:51.673220 2224 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:13:51.673298 kubelet[2224]: I0129 11:13:51.673237 2224 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:13:51.674777 kubelet[2224]: I0129 11:13:51.674733 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:13:51.676017 kubelet[2224]: I0129 11:13:51.675981 2224 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:13:51.676071 kubelet[2224]: I0129 11:13:51.676028 2224 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:13:51.676071 kubelet[2224]: I0129 11:13:51.676044 2224 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:13:51.676110 kubelet[2224]: E0129 11:13:51.676078 2224 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:13:51.676463 kubelet[2224]: W0129 11:13:51.676423 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:51.676463 kubelet[2224]: E0129 11:13:51.676455 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:51.758258 kubelet[2224]: E0129 11:13:51.758205 2224 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:13:51.776801 kubelet[2224]: E0129 11:13:51.776748 2224 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:13:51.858410 kubelet[2224]: E0129 11:13:51.858265 2224 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:13:51.858666 kubelet[2224]: E0129 11:13:51.858535 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" Jan 29 11:13:51.950690 kubelet[2224]: I0129 11:13:51.950610 2224 policy_none.go:49] "None policy: Start" Jan 29 11:13:51.951557 kubelet[2224]: I0129 11:13:51.951533 2224 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:13:51.951600 kubelet[2224]: I0129 11:13:51.951570 2224 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:13:51.958501 kubelet[2224]: E0129 11:13:51.958464 2224 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:13:51.962286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:13:51.977116 kubelet[2224]: E0129 11:13:51.977088 2224 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:13:51.981290 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:13:51.987640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 11:13:51.996944 kubelet[2224]: I0129 11:13:51.996898 2224 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:13:51.997179 kubelet[2224]: I0129 11:13:51.997162 2224 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:13:51.997223 kubelet[2224]: I0129 11:13:51.997174 2224 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:13:51.997530 kubelet[2224]: I0129 11:13:51.997407 2224 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:13:51.998444 kubelet[2224]: E0129 11:13:51.998392 2224 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:13:52.098753 kubelet[2224]: I0129 11:13:52.098719 2224 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:13:52.099107 kubelet[2224]: E0129 11:13:52.099086 2224 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 29 11:13:52.260101 kubelet[2224]: E0129 11:13:52.259950 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" Jan 29 11:13:52.300980 kubelet[2224]: I0129 11:13:52.300955 2224 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:13:52.301338 kubelet[2224]: E0129 11:13:52.301256 2224 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 29 11:13:52.385537 systemd[1]: Created slice kubepods-burstable-pod9319d83cb758dcf190e57cc594b8ffd4.slice - libcontainer container kubepods-burstable-pod9319d83cb758dcf190e57cc594b8ffd4.slice. Jan 29 11:13:52.407760 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:13:52.421905 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
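The hex suffixes in these per-pod slice names (9319d83c..., fa5289f3..., c988230c...) are the UIDs the kubelet derives by hashing the static pod manifests it found under /etc/kubernetes/manifests. A hedged skeleton of one such manifest, with the command and image version assumed rather than copied from this host; the k8s-certs hostPath volume corresponds to the ones the volume reconciler verifies just below:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical   # the class the later mirror-pod errors complain is missing
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.31.0                 # assumed to match kubeletVersion v1.31.0
        command: ["kube-apiserver", "--advertise-address=10.0.0.47"]  # illustrative flags only
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate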
Jan 29 11:13:52.457726 kubelet[2224]: I0129 11:13:52.457679 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9319d83cb758dcf190e57cc594b8ffd4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9319d83cb758dcf190e57cc594b8ffd4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:52.457726 kubelet[2224]: I0129 11:13:52.457724 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:52.457901 kubelet[2224]: I0129 11:13:52.457756 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:52.457901 kubelet[2224]: I0129 11:13:52.457784 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:52.457901 kubelet[2224]: I0129 11:13:52.457823 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:13:52.457901 kubelet[2224]: I0129 11:13:52.457843 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9319d83cb758dcf190e57cc594b8ffd4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9319d83cb758dcf190e57cc594b8ffd4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:52.457901 kubelet[2224]: I0129 11:13:52.457863 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9319d83cb758dcf190e57cc594b8ffd4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9319d83cb758dcf190e57cc594b8ffd4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:52.458035 kubelet[2224]: I0129 11:13:52.457890 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:52.458035 kubelet[2224]: I0129 11:13:52.457911 2224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:52.626902 kubelet[2224]: W0129 11:13:52.626833 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:52.626971 kubelet[2224]: E0129 11:13:52.626911 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:52.702676 kubelet[2224]: I0129 11:13:52.702641 2224 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:13:52.702902 kubelet[2224]: E0129 11:13:52.702869 2224 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 29 11:13:52.707188 kubelet[2224]: E0129 11:13:52.707145 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:52.707596 containerd[1491]: time="2025-01-29T11:13:52.707559267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9319d83cb758dcf190e57cc594b8ffd4,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:52.719844 kubelet[2224]: E0129 11:13:52.719820 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:52.720085 containerd[1491]: time="2025-01-29T11:13:52.720058037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:52.724348 kubelet[2224]: E0129 11:13:52.724322 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:52.724722 containerd[1491]: time="2025-01-29T11:13:52.724690374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:52.785439 kubelet[2224]: W0129 11:13:52.785417 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:52.785487 kubelet[2224]: E0129 11:13:52.785447 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:52.865489 kubelet[2224]: W0129 11:13:52.865423 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: 
connection refused Jan 29 11:13:52.865568 kubelet[2224]: E0129 11:13:52.865497 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:53.061045 kubelet[2224]: E0129 11:13:53.060866 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="1.6s" Jan 29 11:13:53.207367 kubelet[2224]: W0129 11:13:53.207312 2224 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 29 11:13:53.207454 kubelet[2224]: E0129 11:13:53.207371 2224 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:53.269140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947780193.mount: Deactivated successfully. Jan 29 11:13:53.274990 containerd[1491]: time="2025-01-29T11:13:53.274944647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:13:53.276837 containerd[1491]: time="2025-01-29T11:13:53.276761354Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:13:53.279852 containerd[1491]: time="2025-01-29T11:13:53.279816233Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:13:53.281300 containerd[1491]: time="2025-01-29T11:13:53.281267796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:13:53.282727 containerd[1491]: time="2025-01-29T11:13:53.282687368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:13:53.284323 containerd[1491]: time="2025-01-29T11:13:53.284269264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:13:53.285163 containerd[1491]: time="2025-01-29T11:13:53.285131932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 577.477215ms" Jan 29 11:13:53.285762 
containerd[1491]: time="2025-01-29T11:13:53.285721778Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:13:53.286321 containerd[1491]: time="2025-01-29T11:13:53.286279244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:13:53.289366 containerd[1491]: time="2025-01-29T11:13:53.289336728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 564.594116ms" Jan 29 11:13:53.294023 containerd[1491]: time="2025-01-29T11:13:53.292963499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.84571ms" Jan 29 11:13:53.447040 containerd[1491]: time="2025-01-29T11:13:53.446930703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:53.447180 containerd[1491]: time="2025-01-29T11:13:53.447053062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:53.447180 containerd[1491]: time="2025-01-29T11:13:53.447086495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:53.447337 containerd[1491]: time="2025-01-29T11:13:53.447205859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:53.447491 containerd[1491]: time="2025-01-29T11:13:53.447341904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:53.447491 containerd[1491]: time="2025-01-29T11:13:53.447390194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:53.447491 containerd[1491]: time="2025-01-29T11:13:53.447401155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:53.447738 containerd[1491]: time="2025-01-29T11:13:53.447475815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:53.449184 containerd[1491]: time="2025-01-29T11:13:53.446898011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:53.449230 containerd[1491]: time="2025-01-29T11:13:53.449196211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:53.449286 containerd[1491]: time="2025-01-29T11:13:53.449231558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:53.449421 containerd[1491]: time="2025-01-29T11:13:53.449385366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:53.468397 systemd[1]: Started cri-containerd-7d483d64bd78799eddad5db2c25e9d4b346b7db1d6e7c1139777d45026c5fb71.scope - libcontainer container 7d483d64bd78799eddad5db2c25e9d4b346b7db1d6e7c1139777d45026c5fb71. Jan 29 11:13:53.472402 systemd[1]: Started cri-containerd-30b0a73c383ac52da1d3672fff547e6eb1b558c1a81f512830906ac4451e551e.scope - libcontainer container 30b0a73c383ac52da1d3672fff547e6eb1b558c1a81f512830906ac4451e551e. Jan 29 11:13:53.473768 systemd[1]: Started cri-containerd-7474d1052ac06ec786aa9c7ec9926a609d4a4cfa9ac1535aed81f2efb636823e.scope - libcontainer container 7474d1052ac06ec786aa9c7ec9926a609d4a4cfa9ac1535aed81f2efb636823e. Jan 29 11:13:53.504106 kubelet[2224]: I0129 11:13:53.504047 2224 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:13:53.504754 kubelet[2224]: E0129 11:13:53.504471 2224 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 29 11:13:53.507623 containerd[1491]: time="2025-01-29T11:13:53.507379566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d483d64bd78799eddad5db2c25e9d4b346b7db1d6e7c1139777d45026c5fb71\"" Jan 29 11:13:53.509155 kubelet[2224]: E0129 11:13:53.508992 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:53.512936 containerd[1491]: time="2025-01-29T11:13:53.512815200Z" level=info msg="CreateContainer within sandbox \"7d483d64bd78799eddad5db2c25e9d4b346b7db1d6e7c1139777d45026c5fb71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:13:53.516800 containerd[1491]: time="2025-01-29T11:13:53.516735672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"30b0a73c383ac52da1d3672fff547e6eb1b558c1a81f512830906ac4451e551e\"" Jan 29 11:13:53.517520 kubelet[2224]: E0129 11:13:53.517471 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:53.519783 containerd[1491]: time="2025-01-29T11:13:53.519690503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9319d83cb758dcf190e57cc594b8ffd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7474d1052ac06ec786aa9c7ec9926a609d4a4cfa9ac1535aed81f2efb636823e\"" Jan 29 11:13:53.519972 containerd[1491]: time="2025-01-29T11:13:53.519947555Z" level=info msg="CreateContainer within sandbox \"30b0a73c383ac52da1d3672fff547e6eb1b558c1a81f512830906ac4451e551e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:13:53.521515 kubelet[2224]: E0129 11:13:53.521491 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
11:13:53.522785 containerd[1491]: time="2025-01-29T11:13:53.522697342Z" level=info msg="CreateContainer within sandbox \"7474d1052ac06ec786aa9c7ec9926a609d4a4cfa9ac1535aed81f2efb636823e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:13:53.815464 kubelet[2224]: E0129 11:13:53.815339 2224 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:13:54.020573 containerd[1491]: time="2025-01-29T11:13:54.020520989Z" level=info msg="CreateContainer within sandbox \"7474d1052ac06ec786aa9c7ec9926a609d4a4cfa9ac1535aed81f2efb636823e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf716e0cffdb7b60b61eca4688161483aeeb62bfb817304b8ad4296782aa6b89\"" Jan 29 11:13:54.021246 containerd[1491]: time="2025-01-29T11:13:54.021202678Z" level=info msg="StartContainer for \"bf716e0cffdb7b60b61eca4688161483aeeb62bfb817304b8ad4296782aa6b89\"" Jan 29 11:13:54.023954 containerd[1491]: time="2025-01-29T11:13:54.023909404Z" level=info msg="CreateContainer within sandbox \"7d483d64bd78799eddad5db2c25e9d4b346b7db1d6e7c1139777d45026c5fb71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"daef33b07e5dc3ee53de52c0bf61ffa532e4edeee82d33f040293cd0b570181b\"" Jan 29 11:13:54.024517 containerd[1491]: time="2025-01-29T11:13:54.024480605Z" level=info msg="StartContainer for \"daef33b07e5dc3ee53de52c0bf61ffa532e4edeee82d33f040293cd0b570181b\"" Jan 29 11:13:54.026821 containerd[1491]: time="2025-01-29T11:13:54.026790968Z" level=info msg="CreateContainer within sandbox \"30b0a73c383ac52da1d3672fff547e6eb1b558c1a81f512830906ac4451e551e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"647268e66d3d6d8a9db88bde7b9176efd77526874266611a1451fd76b6e0b57d\"" Jan 29 11:13:54.027223 containerd[1491]: time="2025-01-29T11:13:54.027195767Z" level=info msg="StartContainer for \"647268e66d3d6d8a9db88bde7b9176efd77526874266611a1451fd76b6e0b57d\"" Jan 29 11:13:54.051210 systemd[1]: Started cri-containerd-bf716e0cffdb7b60b61eca4688161483aeeb62bfb817304b8ad4296782aa6b89.scope - libcontainer container bf716e0cffdb7b60b61eca4688161483aeeb62bfb817304b8ad4296782aa6b89. Jan 29 11:13:54.056402 systemd[1]: Started cri-containerd-647268e66d3d6d8a9db88bde7b9176efd77526874266611a1451fd76b6e0b57d.scope - libcontainer container 647268e66d3d6d8a9db88bde7b9176efd77526874266611a1451fd76b6e0b57d. Jan 29 11:13:54.058862 systemd[1]: Started cri-containerd-daef33b07e5dc3ee53de52c0bf61ffa532e4edeee82d33f040293cd0b570181b.scope - libcontainer container daef33b07e5dc3ee53de52c0bf61ffa532e4edeee82d33f040293cd0b570181b. 
Jan 29 11:13:54.100805 containerd[1491]: time="2025-01-29T11:13:54.100759084Z" level=info msg="StartContainer for \"bf716e0cffdb7b60b61eca4688161483aeeb62bfb817304b8ad4296782aa6b89\" returns successfully" Jan 29 11:13:54.108819 containerd[1491]: time="2025-01-29T11:13:54.108736755Z" level=info msg="StartContainer for \"647268e66d3d6d8a9db88bde7b9176efd77526874266611a1451fd76b6e0b57d\" returns successfully" Jan 29 11:13:54.112720 containerd[1491]: time="2025-01-29T11:13:54.112681844Z" level=info msg="StartContainer for \"daef33b07e5dc3ee53de52c0bf61ffa532e4edeee82d33f040293cd0b570181b\" returns successfully" Jan 29 11:13:54.688124 kubelet[2224]: E0129 11:13:54.688082 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:54.691869 kubelet[2224]: E0129 11:13:54.691173 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:54.691869 kubelet[2224]: E0129 11:13:54.691378 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:55.106597 kubelet[2224]: I0129 11:13:55.106564 2224 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:13:55.171268 kubelet[2224]: E0129 11:13:55.171233 2224 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:13:55.260699 kubelet[2224]: I0129 11:13:55.260666 2224 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:13:55.648485 kubelet[2224]: I0129 11:13:55.648421 2224 apiserver.go:52] "Watching apiserver" Jan 29 11:13:55.654065 kubelet[2224]: I0129 11:13:55.654024 2224 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:13:55.696856 kubelet[2224]: E0129 11:13:55.696818 2224 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 29 11:13:55.696856 kubelet[2224]: E0129 11:13:55.696870 2224 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:55.697324 kubelet[2224]: E0129 11:13:55.696827 2224 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:55.697324 kubelet[2224]: E0129 11:13:55.696987 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:55.697324 kubelet[2224]: E0129 11:13:55.697000 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:55.697324 kubelet[2224]: E0129 11:13:55.697279 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:56.698902 kubelet[2224]: E0129 11:13:56.698861 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:56.698902 kubelet[2224]: E0129 11:13:56.698875 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:57.194086 systemd[1]: Reloading requested from client PID 2504 ('systemctl') (unit session-9.scope)... Jan 29 11:13:57.194107 systemd[1]: Reloading... Jan 29 11:13:57.289046 zram_generator::config[2546]: No configuration found. Jan 29 11:13:57.411068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:13:57.516115 systemd[1]: Reloading finished in 321 ms. Jan 29 11:13:57.558965 kubelet[2224]: I0129 11:13:57.558849 2224 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:13:57.558923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:57.586405 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:13:57.586708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:57.600205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:13:57.742356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:13:57.748876 (kubelet)[2588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:13:57.800554 kubelet[2588]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:13:57.800554 kubelet[2588]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:13:57.800554 kubelet[2588]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:13:57.800554 kubelet[2588]: I0129 11:13:57.800510 2588 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:13:57.806482 kubelet[2588]: I0129 11:13:57.806418 2588 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:13:57.806482 kubelet[2588]: I0129 11:13:57.806448 2588 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:13:57.806688 kubelet[2588]: I0129 11:13:57.806678 2588 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:13:57.807889 kubelet[2588]: I0129 11:13:57.807851 2588 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
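Unlike the first kubelet instance, whose certificate signing request failed with connection refused, this restarted kubelet (pid 2588) finds an already-bootstrapped client certificate and loads it from kubelet-client-current.pem, a symlink that rotation keeps pointed at the newest pair. The "Client rotation is on" line corresponds to configuration like the following hedged sketch (real KubeletConfiguration fields, illustrative values):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    rotateCertificates: true        # keep renewing the client cert via CSRs in the background
    serverTLSBootstrap: false       # serving cert stays the local kubelet.crt/kubelet.key pair seen above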
Jan 29 11:13:57.809617 kubelet[2588]: I0129 11:13:57.809566 2588 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:13:57.812606 kubelet[2588]: E0129 11:13:57.812567 2588 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:13:57.812724 kubelet[2588]: I0129 11:13:57.812605 2588 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:13:57.818906 kubelet[2588]: I0129 11:13:57.818866 2588 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:13:57.819051 kubelet[2588]: I0129 11:13:57.818963 2588 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:13:57.819123 kubelet[2588]: I0129 11:13:57.819097 2588 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:13:57.819293 kubelet[2588]: I0129 11:13:57.819122 2588 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:13:57.819374 kubelet[2588]: I0129 11:13:57.819296 2588 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:13:57.819374 kubelet[2588]: I0129 11:13:57.819306 2588 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:13:57.819374 kubelet[2588]: I0129 11:13:57.819341 2588 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:13:57.819460 kubelet[2588]: I0129 11:13:57.819446 2588 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:13:57.819489 kubelet[2588]: I0129 11:13:57.819461 2588 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:13:57.819515 kubelet[2588]: I0129 11:13:57.819490 
2588 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:13:57.819515 kubelet[2588]: I0129 11:13:57.819505 2588 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:13:57.820821 kubelet[2588]: I0129 11:13:57.820327 2588 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:13:57.820996 kubelet[2588]: I0129 11:13:57.820855 2588 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:13:57.822255 kubelet[2588]: I0129 11:13:57.821304 2588 server.go:1269] "Started kubelet" Jan 29 11:13:57.822255 kubelet[2588]: I0129 11:13:57.821574 2588 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:13:57.822255 kubelet[2588]: I0129 11:13:57.821715 2588 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:13:57.822255 kubelet[2588]: I0129 11:13:57.821977 2588 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:13:57.828437 kubelet[2588]: I0129 11:13:57.828386 2588 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:13:57.833585 kubelet[2588]: E0129 11:13:57.833544 2588 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:13:57.834098 kubelet[2588]: I0129 11:13:57.834044 2588 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:13:57.837035 kubelet[2588]: I0129 11:13:57.834241 2588 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:13:57.837035 kubelet[2588]: I0129 11:13:57.834401 2588 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:13:57.837035 kubelet[2588]: I0129 11:13:57.834504 2588 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:13:57.837035 kubelet[2588]: I0129 11:13:57.835148 2588 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:13:57.837035 kubelet[2588]: I0129 11:13:57.835293 2588 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:13:57.837035 kubelet[2588]: I0129 11:13:57.836070 2588 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:13:57.840327 kubelet[2588]: I0129 11:13:57.840292 2588 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:13:57.851161 kubelet[2588]: I0129 11:13:57.851090 2588 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:13:57.852631 kubelet[2588]: I0129 11:13:57.852588 2588 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:13:57.852631 kubelet[2588]: I0129 11:13:57.852627 2588 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:13:57.852732 kubelet[2588]: I0129 11:13:57.852651 2588 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:13:57.852732 kubelet[2588]: E0129 11:13:57.852695 2588 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:13:57.876635 kubelet[2588]: I0129 11:13:57.876595 2588 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:13:57.876635 kubelet[2588]: I0129 11:13:57.876609 2588 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:13:57.876635 kubelet[2588]: I0129 11:13:57.876629 2588 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:13:57.876863 kubelet[2588]: I0129 11:13:57.876779 2588 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:13:57.876863 kubelet[2588]: I0129 11:13:57.876790 2588 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:13:57.876863 kubelet[2588]: I0129 11:13:57.876809 2588 policy_none.go:49] "None policy: Start" Jan 29 11:13:57.877498 kubelet[2588]: I0129 11:13:57.877471 2588 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:13:57.877498 kubelet[2588]: I0129 11:13:57.877496 2588 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:13:57.877654 kubelet[2588]: I0129 11:13:57.877635 2588 state_mem.go:75] "Updated machine memory state" Jan 29 11:13:57.882360 kubelet[2588]: I0129 11:13:57.882331 2588 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:13:57.882668 kubelet[2588]: I0129 11:13:57.882483 2588 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:13:57.882668 kubelet[2588]: I0129 11:13:57.882501 2588 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:13:57.882668 kubelet[2588]: I0129 11:13:57.882660 2588 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:13:57.960131 kubelet[2588]: E0129 11:13:57.960073 2588 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:57.960131 kubelet[2588]: E0129 11:13:57.960113 2588 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:13:57.989638 kubelet[2588]: I0129 11:13:57.989597 2588 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:13:57.999444 kubelet[2588]: I0129 11:13:57.999420 2588 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:13:57.999527 kubelet[2588]: I0129 11:13:57.999502 2588 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:13:58.036454 kubelet[2588]: I0129 11:13:58.036386 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:58.036515 kubelet[2588]: I0129 11:13:58.036445 2588 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9319d83cb758dcf190e57cc594b8ffd4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9319d83cb758dcf190e57cc594b8ffd4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:58.036515 kubelet[2588]: I0129 11:13:58.036494 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9319d83cb758dcf190e57cc594b8ffd4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9319d83cb758dcf190e57cc594b8ffd4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:58.036565 kubelet[2588]: I0129 11:13:58.036516 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:58.036565 kubelet[2588]: I0129 11:13:58.036537 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:58.036565 kubelet[2588]: I0129 11:13:58.036559 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:58.036653 kubelet[2588]: I0129 11:13:58.036581 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:13:58.036653 kubelet[2588]: I0129 11:13:58.036599 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9319d83cb758dcf190e57cc594b8ffd4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9319d83cb758dcf190e57cc594b8ffd4\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:13:58.036653 kubelet[2588]: I0129 11:13:58.036641 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:13:58.194769 sudo[2625]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:13:58.195256 sudo[2625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:13:58.261356 kubelet[2588]: E0129 11:13:58.261310 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:58.261522 kubelet[2588]: E0129 11:13:58.261442 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:58.265043 kubelet[2588]: E0129 11:13:58.262027 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:58.772217 sudo[2625]: pam_unix(sudo:session): session closed for user root Jan 29 11:13:58.820767 kubelet[2588]: I0129 11:13:58.820711 2588 apiserver.go:52] "Watching apiserver" Jan 29 11:13:58.835026 kubelet[2588]: I0129 11:13:58.834967 2588 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:13:58.863840 kubelet[2588]: E0129 11:13:58.863816 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:58.863942 kubelet[2588]: E0129 11:13:58.863816 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:58.864222 kubelet[2588]: E0129 11:13:58.864183 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:58.918840 kubelet[2588]: I0129 11:13:58.918745 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.9187267390000002 podStartE2EDuration="2.918726739s" podCreationTimestamp="2025-01-29 11:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:58.918661133 +0000 UTC m=+1.160756610" watchObservedRunningTime="2025-01-29 11:13:58.918726739 +0000 UTC m=+1.160822226" Jan 29 11:13:59.003608 kubelet[2588]: I0129 11:13:59.003544 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.003525141 podStartE2EDuration="2.003525141s" podCreationTimestamp="2025-01-29 11:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:58.926691144 +0000 UTC m=+1.168786631" watchObservedRunningTime="2025-01-29 11:13:59.003525141 +0000 UTC m=+1.245620628" Jan 29 11:13:59.010695 kubelet[2588]: I0129 11:13:59.010626 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.010606725 podStartE2EDuration="3.010606725s" podCreationTimestamp="2025-01-29 11:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:59.003756203 +0000 UTC m=+1.245851680" watchObservedRunningTime="2025-01-29 11:13:59.010606725 +0000 UTC m=+1.252702202" Jan 29 11:13:59.865543 kubelet[2588]: E0129 11:13:59.865504 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:00.387841 sudo[1682]: pam_unix(sudo:session): 
session closed for user root Jan 29 11:14:00.389639 sshd[1681]: Connection closed by 10.0.0.1 port 58582 Jan 29 11:14:00.390122 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:00.394952 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:58582.service: Deactivated successfully. Jan 29 11:14:00.397558 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:14:00.397796 systemd[1]: session-9.scope: Consumed 4.775s CPU time, 154.2M memory peak, 0B memory swap peak. Jan 29 11:14:00.398464 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:14:00.399476 systemd-logind[1472]: Removed session 9. Jan 29 11:14:01.592585 kubelet[2588]: E0129 11:14:01.592546 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:03.886699 kubelet[2588]: I0129 11:14:03.886665 2588 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:14:03.887112 containerd[1491]: time="2025-01-29T11:14:03.886939157Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:14:03.887380 kubelet[2588]: I0129 11:14:03.887166 2588 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:14:04.786298 kubelet[2588]: W0129 11:14:04.786263 2588 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 29 11:14:04.788720 kubelet[2588]: E0129 11:14:04.786440 2588 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 29 11:14:04.788720 kubelet[2588]: W0129 11:14:04.786451 2588 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 29 11:14:04.788720 kubelet[2588]: E0129 11:14:04.786486 2588 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 29 11:14:04.788720 kubelet[2588]: W0129 11:14:04.786340 2588 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 29 11:14:04.788720 kubelet[2588]: E0129 11:14:04.786504 2588 reflector.go:158] "Unhandled Error" 
err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 29 11:14:04.791277 systemd[1]: Created slice kubepods-besteffort-pod1d7ffdda_9bb5_4c87_ba21_3a51ef99fc1e.slice - libcontainer container kubepods-besteffort-pod1d7ffdda_9bb5_4c87_ba21_3a51ef99fc1e.slice. Jan 29 11:14:04.805420 systemd[1]: Created slice kubepods-burstable-podeba734ef_816f_46bb_baf1_695eebc4010c.slice - libcontainer container kubepods-burstable-podeba734ef_816f_46bb_baf1_695eebc4010c.slice. Jan 29 11:14:04.922112 kubelet[2588]: I0129 11:14:04.922062 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-net\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922112 kubelet[2588]: I0129 11:14:04.922106 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-cgroup\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922112 kubelet[2588]: I0129 11:14:04.922136 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e-kube-proxy\") pod \"kube-proxy-p4mqx\" (UID: \"1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e\") " pod="kube-system/kube-proxy-p4mqx" Jan 29 11:14:04.922713 kubelet[2588]: I0129 11:14:04.922157 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eba734ef-816f-46bb-baf1-695eebc4010c-clustermesh-secrets\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922713 kubelet[2588]: I0129 11:14:04.922184 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-hubble-tls\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922713 kubelet[2588]: I0129 11:14:04.922202 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrb2z\" (UniqueName: \"kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-kube-api-access-nrb2z\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922713 kubelet[2588]: I0129 11:14:04.922222 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e-xtables-lock\") pod \"kube-proxy-p4mqx\" (UID: \"1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e\") " pod="kube-system/kube-proxy-p4mqx" Jan 29 11:14:04.922713 kubelet[2588]: I0129 11:14:04.922245 2588 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-hostproc\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922713 kubelet[2588]: I0129 11:14:04.922263 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-xtables-lock\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922920 kubelet[2588]: I0129 11:14:04.922280 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-kernel\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922920 kubelet[2588]: I0129 11:14:04.922299 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-config-path\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.922920 kubelet[2588]: I0129 11:14:04.922319 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e-lib-modules\") pod \"kube-proxy-p4mqx\" (UID: \"1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e\") " pod="kube-system/kube-proxy-p4mqx" Jan 29 11:14:04.922920 kubelet[2588]: I0129 11:14:04.922338 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbq4\" (UniqueName: \"kubernetes.io/projected/1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e-kube-api-access-svbq4\") pod \"kube-proxy-p4mqx\" (UID: \"1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e\") " pod="kube-system/kube-proxy-p4mqx" Jan 29 11:14:04.922920 kubelet[2588]: I0129 11:14:04.922358 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-run\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.923268 kubelet[2588]: I0129 11:14:04.922991 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-etc-cni-netd\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.923268 kubelet[2588]: I0129 11:14:04.923065 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-bpf-maps\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.923268 kubelet[2588]: I0129 11:14:04.923220 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cni-path\") pod 
\"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.923268 kubelet[2588]: I0129 11:14:04.923248 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-lib-modules\") pod \"cilium-ftbd9\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " pod="kube-system/cilium-ftbd9" Jan 29 11:14:04.933563 systemd[1]: Created slice kubepods-besteffort-pod5ed11690_cd93_45ed_9778_ba1458f97b07.slice - libcontainer container kubepods-besteffort-pod5ed11690_cd93_45ed_9778_ba1458f97b07.slice. Jan 29 11:14:05.102460 kubelet[2588]: E0129 11:14:05.102412 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.103074 containerd[1491]: time="2025-01-29T11:14:05.103004133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4mqx,Uid:1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:05.124644 kubelet[2588]: I0129 11:14:05.124602 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ed11690-cd93-45ed-9778-ba1458f97b07-cilium-config-path\") pod \"cilium-operator-5d85765b45-bfkvx\" (UID: \"5ed11690-cd93-45ed-9778-ba1458f97b07\") " pod="kube-system/cilium-operator-5d85765b45-bfkvx" Jan 29 11:14:05.124644 kubelet[2588]: I0129 11:14:05.124645 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26mh\" (UniqueName: \"kubernetes.io/projected/5ed11690-cd93-45ed-9778-ba1458f97b07-kube-api-access-z26mh\") pod \"cilium-operator-5d85765b45-bfkvx\" (UID: \"5ed11690-cd93-45ed-9778-ba1458f97b07\") " pod="kube-system/cilium-operator-5d85765b45-bfkvx" Jan 29 11:14:05.129271 containerd[1491]: time="2025-01-29T11:14:05.128730863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:05.129271 containerd[1491]: time="2025-01-29T11:14:05.129237897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:05.129271 containerd[1491]: time="2025-01-29T11:14:05.129253447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:05.129388 containerd[1491]: time="2025-01-29T11:14:05.129336375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:05.150162 systemd[1]: Started cri-containerd-a08a4679e44a268099c3a496f35b7d1afe66af6723c5df0db17fd93a6626127e.scope - libcontainer container a08a4679e44a268099c3a496f35b7d1afe66af6723c5df0db17fd93a6626127e. 
Jan 29 11:14:05.174450 containerd[1491]: time="2025-01-29T11:14:05.174385277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4mqx,Uid:1d7ffdda-9bb5-4c87-ba21-3a51ef99fc1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a08a4679e44a268099c3a496f35b7d1afe66af6723c5df0db17fd93a6626127e\"" Jan 29 11:14:05.175147 kubelet[2588]: E0129 11:14:05.175115 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.176948 containerd[1491]: time="2025-01-29T11:14:05.176908546Z" level=info msg="CreateContainer within sandbox \"a08a4679e44a268099c3a496f35b7d1afe66af6723c5df0db17fd93a6626127e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:14:05.189424 kubelet[2588]: E0129 11:14:05.189371 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.198068 containerd[1491]: time="2025-01-29T11:14:05.198022085Z" level=info msg="CreateContainer within sandbox \"a08a4679e44a268099c3a496f35b7d1afe66af6723c5df0db17fd93a6626127e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ee418b06d4c40b1b347f4aaf538fc3a323a8831f30d8691fbea0372e8beb57c8\"" Jan 29 11:14:05.198784 containerd[1491]: time="2025-01-29T11:14:05.198677461Z" level=info msg="StartContainer for \"ee418b06d4c40b1b347f4aaf538fc3a323a8831f30d8691fbea0372e8beb57c8\"" Jan 29 11:14:05.232270 systemd[1]: Started cri-containerd-ee418b06d4c40b1b347f4aaf538fc3a323a8831f30d8691fbea0372e8beb57c8.scope - libcontainer container ee418b06d4c40b1b347f4aaf538fc3a323a8831f30d8691fbea0372e8beb57c8. 
Jan 29 11:14:05.269322 containerd[1491]: time="2025-01-29T11:14:05.269281959Z" level=info msg="StartContainer for \"ee418b06d4c40b1b347f4aaf538fc3a323a8831f30d8691fbea0372e8beb57c8\" returns successfully" Jan 29 11:14:05.689553 kubelet[2588]: E0129 11:14:05.689515 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.874537 kubelet[2588]: E0129 11:14:05.874450 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.874537 kubelet[2588]: E0129 11:14:05.874450 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.874706 kubelet[2588]: E0129 11:14:05.874601 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:05.889961 kubelet[2588]: I0129 11:14:05.889657 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p4mqx" podStartSLOduration=1.889640878 podStartE2EDuration="1.889640878s" podCreationTimestamp="2025-01-29 11:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:05.889341368 +0000 UTC m=+8.131436855" watchObservedRunningTime="2025-01-29 11:14:05.889640878 +0000 UTC m=+8.131736365" Jan 29 11:14:06.025417 kubelet[2588]: E0129 11:14:06.025253 2588 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:14:06.025417 kubelet[2588]: E0129 11:14:06.025270 2588 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 29 11:14:06.025417 kubelet[2588]: E0129 11:14:06.025396 2588 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-ftbd9: failed to sync secret cache: timed out waiting for the condition Jan 29 11:14:06.025417 kubelet[2588]: E0129 11:14:06.025376 2588 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-config-path podName:eba734ef-816f-46bb-baf1-695eebc4010c nodeName:}" failed. No retries permitted until 2025-01-29 11:14:06.525346952 +0000 UTC m=+8.767442440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-config-path") pod "cilium-ftbd9" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:14:06.026210 kubelet[2588]: E0129 11:14:06.025501 2588 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-hubble-tls podName:eba734ef-816f-46bb-baf1-695eebc4010c nodeName:}" failed. No retries permitted until 2025-01-29 11:14:06.52548281 +0000 UTC m=+8.767578297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-hubble-tls") pod "cilium-ftbd9" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c") : failed to sync secret cache: timed out waiting for the condition Jan 29 11:14:06.225781 kubelet[2588]: E0129 11:14:06.225731 2588 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 11:14:06.225944 kubelet[2588]: E0129 11:14:06.225806 2588 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ed11690-cd93-45ed-9778-ba1458f97b07-cilium-config-path podName:5ed11690-cd93-45ed-9778-ba1458f97b07 nodeName:}" failed. No retries permitted until 2025-01-29 11:14:06.725788776 +0000 UTC m=+8.967884263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5ed11690-cd93-45ed-9778-ba1458f97b07-cilium-config-path") pod "cilium-operator-5d85765b45-bfkvx" (UID: "5ed11690-cd93-45ed-9778-ba1458f97b07") : failed to sync configmap cache: timed out waiting for the condition Jan 29 11:14:06.608518 kubelet[2588]: E0129 11:14:06.608474 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:06.609208 containerd[1491]: time="2025-01-29T11:14:06.609047306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ftbd9,Uid:eba734ef-816f-46bb-baf1-695eebc4010c,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:06.635762 containerd[1491]: time="2025-01-29T11:14:06.635631284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:06.635762 containerd[1491]: time="2025-01-29T11:14:06.635706567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:06.635762 containerd[1491]: time="2025-01-29T11:14:06.635719512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:06.635976 containerd[1491]: time="2025-01-29T11:14:06.635824891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:06.663158 systemd[1]: Started cri-containerd-747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15.scope - libcontainer container 747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15. 
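
Note the shape of the volume errors at 11:14:06 just above: each MountVolume.SetUp failure is requeued rather than fatal, with "No retries permitted until ... (durationBeforeRetry 500ms)", and both volumes mount once the caches sync. Kubelet retries volume operations on an exponential backoff seeded by that half second; in the sketch below only the 500 ms comes from this log, while the doubling factor and the cap are assumptions about the upstream defaults:

    from datetime import timedelta

    def backoff_schedule(initial=timedelta(milliseconds=500),
                         factor=2.0,
                         cap=timedelta(minutes=2, seconds=2),
                         attempts=8):
        """Yield the wait before each consecutive retry: 500ms, 1s, 2s, ..., capped.

        initial matches "durationBeforeRetry 500ms" in the journal;
        factor and cap are assumed, not read from it.
        """
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(timedelta(seconds=delay.total_seconds() * factor), cap)

    print([d.total_seconds() for d in backoff_schedule()])
    # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
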
Jan 29 11:14:06.686627 containerd[1491]: time="2025-01-29T11:14:06.686574772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ftbd9,Uid:eba734ef-816f-46bb-baf1-695eebc4010c,Namespace:kube-system,Attempt:0,} returns sandbox id \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\"" Jan 29 11:14:06.687376 kubelet[2588]: E0129 11:14:06.687320 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:06.688181 containerd[1491]: time="2025-01-29T11:14:06.688153772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:14:06.876956 kubelet[2588]: E0129 11:14:06.876847 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:07.039572 kubelet[2588]: E0129 11:14:07.039511 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:07.040168 containerd[1491]: time="2025-01-29T11:14:07.040105694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bfkvx,Uid:5ed11690-cd93-45ed-9778-ba1458f97b07,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:07.064356 containerd[1491]: time="2025-01-29T11:14:07.064217387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:07.064356 containerd[1491]: time="2025-01-29T11:14:07.064269415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:07.064356 containerd[1491]: time="2025-01-29T11:14:07.064279654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:07.064531 containerd[1491]: time="2025-01-29T11:14:07.064352894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:07.085144 systemd[1]: Started cri-containerd-ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9.scope - libcontainer container ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9. Jan 29 11:14:07.120195 containerd[1491]: time="2025-01-29T11:14:07.120143115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bfkvx,Uid:5ed11690-cd93-45ed-9778-ba1458f97b07,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9\"" Jan 29 11:14:07.120966 kubelet[2588]: E0129 11:14:07.120930 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:08.489149 update_engine[1474]: I20250129 11:14:08.489050 1474 update_attempter.cc:509] Updating boot flags... 
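
The dns.go:153 "Nameserver limits exceeded" errors recurring through this journal mean the node's resolv.conf lists more nameservers than kubelet will propagate into a pod; kubelet keeps at most three, and the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is what survived the trim. A minimal sketch of that behavior (the resolv.conf content below is hypothetical, with 8.8.4.4 standing in for whatever fourth server was dropped):

    MAX_NAMESERVERS = 3   # kubelet's per-pod nameserver limit

    def applied_nameservers(resolv_conf: str) -> list[str]:
        """Keep the first MAX_NAMESERVERS 'nameserver' entries, dropping the rest."""
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS]

    conf = ("nameserver 1.1.1.1\n" "nameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\n" "nameserver 8.8.4.4\n")
    print(applied_nameservers(conf))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
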
Jan 29 11:14:08.516047 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2960) Jan 29 11:14:08.553045 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2962) Jan 29 11:14:08.580233 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2962) Jan 29 11:14:11.594910 kubelet[2588]: E0129 11:14:11.594871 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:14.358145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348239226.mount: Deactivated successfully. Jan 29 11:14:16.302662 containerd[1491]: time="2025-01-29T11:14:16.302611557Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:16.303382 containerd[1491]: time="2025-01-29T11:14:16.303350402Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:14:16.304607 containerd[1491]: time="2025-01-29T11:14:16.304556730Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:16.305917 containerd[1491]: time="2025-01-29T11:14:16.305892873Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.617705536s" Jan 29 11:14:16.305970 containerd[1491]: time="2025-01-29T11:14:16.305919764Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:14:16.314092 containerd[1491]: time="2025-01-29T11:14:16.314060085Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:14:16.334330 containerd[1491]: time="2025-01-29T11:14:16.334293179Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:14:16.347382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591884762.mount: Deactivated successfully. 
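
The pull that finishes at 11:14:16 above reports its own metrics: 166730503 bytes read for the cilium image over 9.617705536 s, roughly 16.5 MiB/s end to end:

    bytes_read = 166_730_503    # "active requests=0, bytes read=166730503"
    duration_s = 9.617705536    # "in 9.617705536s"
    print(f"{bytes_read / duration_s / 2**20:.1f} MiB/s")   # 16.5 MiB/s
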
Jan 29 11:14:16.349224 containerd[1491]: time="2025-01-29T11:14:16.349175541Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\"" Jan 29 11:14:16.352128 containerd[1491]: time="2025-01-29T11:14:16.352034098Z" level=info msg="StartContainer for \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\"" Jan 29 11:14:16.386200 systemd[1]: Started cri-containerd-248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c.scope - libcontainer container 248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c. Jan 29 11:14:16.411677 containerd[1491]: time="2025-01-29T11:14:16.411630090Z" level=info msg="StartContainer for \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\" returns successfully" Jan 29 11:14:16.421306 systemd[1]: cri-containerd-248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c.scope: Deactivated successfully. Jan 29 11:14:16.892765 containerd[1491]: time="2025-01-29T11:14:16.892707950Z" level=info msg="shim disconnected" id=248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c namespace=k8s.io Jan 29 11:14:16.892765 containerd[1491]: time="2025-01-29T11:14:16.892757383Z" level=warning msg="cleaning up after shim disconnected" id=248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c namespace=k8s.io Jan 29 11:14:16.892765 containerd[1491]: time="2025-01-29T11:14:16.892766521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:16.897178 kubelet[2588]: E0129 11:14:16.897137 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:17.345276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c-rootfs.mount: Deactivated successfully. Jan 29 11:14:17.844306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207429414.mount: Deactivated successfully. Jan 29 11:14:17.898602 kubelet[2588]: E0129 11:14:17.898572 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:17.900297 containerd[1491]: time="2025-01-29T11:14:17.900253157Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:14:17.915801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142053474.mount: Deactivated successfully. 
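
The transient units named like var-lib-containerd-tmpmounts-containerd\x2dmount3591884762.mount above are systemd's escaped spelling of mount points under /var/lib/containerd/tmpmounts: in a unit name "/" is written as "-", so a literal "-" in the path has to be written as \x2d. Decoding one back to a path is mechanical:

    def unescape_unit(name: str) -> str:
        """Undo systemd unit-name escaping: '-' encodes '/', '\\xNN' encodes byte 0xNN."""
        body = name.rsplit(".", 1)[0]   # drop the ".mount" suffix
        out, i = [], 0
        while i < len(body):
            if body.startswith(r"\x", i):
                out.append(chr(int(body[i + 2:i + 4], 16)))
                i += 4
            elif body[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(body[i])
                i += 1
        return "/" + "".join(out)

    print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount3591884762.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount3591884762
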
Jan 29 11:14:17.918476 containerd[1491]: time="2025-01-29T11:14:17.918440915Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\"" Jan 29 11:14:17.918956 containerd[1491]: time="2025-01-29T11:14:17.918916772Z" level=info msg="StartContainer for \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\"" Jan 29 11:14:17.950148 systemd[1]: Started cri-containerd-801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28.scope - libcontainer container 801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28. Jan 29 11:14:17.980334 containerd[1491]: time="2025-01-29T11:14:17.980294198Z" level=info msg="StartContainer for \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\" returns successfully" Jan 29 11:14:17.993167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:14:17.993403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:14:17.993474 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:14:18.000313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:14:18.000509 systemd[1]: cri-containerd-801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28.scope: Deactivated successfully. Jan 29 11:14:18.023401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:14:18.077583 containerd[1491]: time="2025-01-29T11:14:18.077515609Z" level=info msg="shim disconnected" id=801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28 namespace=k8s.io Jan 29 11:14:18.077583 containerd[1491]: time="2025-01-29T11:14:18.077568388Z" level=warning msg="cleaning up after shim disconnected" id=801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28 namespace=k8s.io Jan 29 11:14:18.077583 containerd[1491]: time="2025-01-29T11:14:18.077578328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:18.332564 containerd[1491]: time="2025-01-29T11:14:18.332528777Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:18.333267 containerd[1491]: time="2025-01-29T11:14:18.333210022Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:14:18.334384 containerd[1491]: time="2025-01-29T11:14:18.334356986Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:14:18.336374 containerd[1491]: time="2025-01-29T11:14:18.336339327Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.022250569s" Jan 29 11:14:18.336374 containerd[1491]: time="2025-01-29T11:14:18.336372469Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:14:18.340948 containerd[1491]: time="2025-01-29T11:14:18.340922904Z" level=info msg="CreateContainer within sandbox \"ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:14:18.353982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31902559.mount: Deactivated successfully. Jan 29 11:14:18.354946 containerd[1491]: time="2025-01-29T11:14:18.354915905Z" level=info msg="CreateContainer within sandbox \"ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\"" Jan 29 11:14:18.355479 containerd[1491]: time="2025-01-29T11:14:18.355391632Z" level=info msg="StartContainer for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\"" Jan 29 11:14:18.387145 systemd[1]: Started cri-containerd-17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1.scope - libcontainer container 17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1. Jan 29 11:14:18.411679 containerd[1491]: time="2025-01-29T11:14:18.411498608Z" level=info msg="StartContainer for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" returns successfully" Jan 29 11:14:18.904137 kubelet[2588]: E0129 11:14:18.904049 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:18.907853 kubelet[2588]: E0129 11:14:18.907775 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:18.910385 containerd[1491]: time="2025-01-29T11:14:18.909448640Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:14:18.941843 containerd[1491]: time="2025-01-29T11:14:18.941696041Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\"" Jan 29 11:14:18.942366 containerd[1491]: time="2025-01-29T11:14:18.942346057Z" level=info msg="StartContainer for \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\"" Jan 29 11:14:18.979981 kubelet[2588]: I0129 11:14:18.979906 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bfkvx" podStartSLOduration=3.76432761 podStartE2EDuration="14.979886724s" podCreationTimestamp="2025-01-29 11:14:04 +0000 UTC" firstStartedPulling="2025-01-29 11:14:07.121480855 +0000 UTC m=+9.363576342" lastFinishedPulling="2025-01-29 11:14:18.337039968 +0000 UTC m=+20.579135456" observedRunningTime="2025-01-29 11:14:18.933897998 +0000 UTC m=+21.175993485" watchObservedRunningTime="2025-01-29 11:14:18.979886724 +0000 UTC m=+21.221982211" Jan 29 11:14:18.993057 systemd[1]: Started 
cri-containerd-9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d.scope - libcontainer container 9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d. Jan 29 11:14:19.030944 systemd[1]: cri-containerd-9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d.scope: Deactivated successfully. Jan 29 11:14:19.115225 containerd[1491]: time="2025-01-29T11:14:19.114991082Z" level=info msg="StartContainer for \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\" returns successfully" Jan 29 11:14:19.256371 containerd[1491]: time="2025-01-29T11:14:19.256185793Z" level=info msg="shim disconnected" id=9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d namespace=k8s.io Jan 29 11:14:19.256371 containerd[1491]: time="2025-01-29T11:14:19.256249893Z" level=warning msg="cleaning up after shim disconnected" id=9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d namespace=k8s.io Jan 29 11:14:19.256371 containerd[1491]: time="2025-01-29T11:14:19.256258369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:19.345754 systemd[1]: run-containerd-runc-k8s.io-17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1-runc.pK0W8e.mount: Deactivated successfully. Jan 29 11:14:19.911164 kubelet[2588]: E0129 11:14:19.911128 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:19.911681 kubelet[2588]: E0129 11:14:19.911340 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:19.912969 containerd[1491]: time="2025-01-29T11:14:19.912927960Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:14:19.932187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823111531.mount: Deactivated successfully. Jan 29 11:14:19.935645 containerd[1491]: time="2025-01-29T11:14:19.935602383Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\"" Jan 29 11:14:19.936084 containerd[1491]: time="2025-01-29T11:14:19.936055287Z" level=info msg="StartContainer for \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\"" Jan 29 11:14:19.986206 systemd[1]: Started cri-containerd-ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08.scope - libcontainer container ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08. Jan 29 11:14:20.008437 systemd[1]: cri-containerd-ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08.scope: Deactivated successfully. 
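
The startup record for cilium-operator-5d85765b45-bfkvx above shows podStartE2EDuration=14.979886724s but podStartSLOduration=3.76432761s; the SLO figure excludes image pulling, i.e. it is the end-to-end time minus the window from firstStartedPulling to lastFinishedPulling, and this journal's own timestamps bear that out:

    from datetime import datetime

    def ts(s: str) -> datetime:
        """Parse '2025-01-29 11:14:07.121480855 +0000 UTC', truncating ns to us."""
        date, clock = s.split()[0], s.split()[1]
        sec, frac = clock.split(".")
        return datetime.fromisoformat(f"{date}T{sec}.{frac[:6]}+00:00")

    e2e = 14.979886724   # podStartE2EDuration
    pull = (ts("2025-01-29 11:14:18.337039968 +0000 UTC")      # lastFinishedPulling
            - ts("2025-01-29 11:14:07.121480855 +0000 UTC")    # firstStartedPulling
            ).total_seconds()
    print(round(e2e - pull, 6))   # 3.764328, matching podStartSLOduration=3.76432761

The cilium-ftbd9 record further down fits the same arithmetic: 20.625486933 s end to end minus a 9.625988362 s pull window leaves exactly its 10.999498571 s SLO duration.
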
Jan 29 11:14:20.010576 containerd[1491]: time="2025-01-29T11:14:20.010532983Z" level=info msg="StartContainer for \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\" returns successfully" Jan 29 11:14:20.033433 containerd[1491]: time="2025-01-29T11:14:20.033365239Z" level=info msg="shim disconnected" id=ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08 namespace=k8s.io Jan 29 11:14:20.033433 containerd[1491]: time="2025-01-29T11:14:20.033424751Z" level=warning msg="cleaning up after shim disconnected" id=ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08 namespace=k8s.io Jan 29 11:14:20.033433 containerd[1491]: time="2025-01-29T11:14:20.033434539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:14:20.345272 systemd[1]: run-containerd-runc-k8s.io-ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08-runc.dQyY8q.mount: Deactivated successfully. Jan 29 11:14:20.345387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08-rootfs.mount: Deactivated successfully. Jan 29 11:14:20.921561 kubelet[2588]: E0129 11:14:20.918053 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:20.922237 containerd[1491]: time="2025-01-29T11:14:20.922114066Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:14:20.939680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146875399.mount: Deactivated successfully. Jan 29 11:14:20.940541 containerd[1491]: time="2025-01-29T11:14:20.940491162Z" level=info msg="CreateContainer within sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\"" Jan 29 11:14:20.940966 containerd[1491]: time="2025-01-29T11:14:20.940933897Z" level=info msg="StartContainer for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\"" Jan 29 11:14:20.971139 systemd[1]: Started cri-containerd-cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5.scope - libcontainer container cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5. Jan 29 11:14:21.003724 containerd[1491]: time="2025-01-29T11:14:21.003670252Z" level=info msg="StartContainer for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" returns successfully" Jan 29 11:14:21.125242 kubelet[2588]: I0129 11:14:21.125196 2588 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:14:21.163475 systemd[1]: Created slice kubepods-burstable-podfb95e8b8_7e1e_4830_ba86_a24729912100.slice - libcontainer container kubepods-burstable-podfb95e8b8_7e1e_4830_ba86_a24729912100.slice. Jan 29 11:14:21.175938 systemd[1]: Created slice kubepods-burstable-pod7da10f76_713b_4d7e_863f_e5885c50ca9f.slice - libcontainer container kubepods-burstable-pod7da10f76_713b_4d7e_863f_e5885c50ca9f.slice. 
Jan 29 11:14:21.327801 kubelet[2588]: I0129 11:14:21.327749 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9txzj\" (UniqueName: \"kubernetes.io/projected/fb95e8b8-7e1e-4830-ba86-a24729912100-kube-api-access-9txzj\") pod \"coredns-6f6b679f8f-qrf6n\" (UID: \"fb95e8b8-7e1e-4830-ba86-a24729912100\") " pod="kube-system/coredns-6f6b679f8f-qrf6n" Jan 29 11:14:21.327801 kubelet[2588]: I0129 11:14:21.327812 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb95e8b8-7e1e-4830-ba86-a24729912100-config-volume\") pod \"coredns-6f6b679f8f-qrf6n\" (UID: \"fb95e8b8-7e1e-4830-ba86-a24729912100\") " pod="kube-system/coredns-6f6b679f8f-qrf6n" Jan 29 11:14:21.327960 kubelet[2588]: I0129 11:14:21.327831 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da10f76-713b-4d7e-863f-e5885c50ca9f-config-volume\") pod \"coredns-6f6b679f8f-ccb99\" (UID: \"7da10f76-713b-4d7e-863f-e5885c50ca9f\") " pod="kube-system/coredns-6f6b679f8f-ccb99" Jan 29 11:14:21.327960 kubelet[2588]: I0129 11:14:21.327846 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nssb\" (UniqueName: \"kubernetes.io/projected/7da10f76-713b-4d7e-863f-e5885c50ca9f-kube-api-access-6nssb\") pod \"coredns-6f6b679f8f-ccb99\" (UID: \"7da10f76-713b-4d7e-863f-e5885c50ca9f\") " pod="kube-system/coredns-6f6b679f8f-ccb99" Jan 29 11:14:21.466583 kubelet[2588]: E0129 11:14:21.466471 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:21.467266 containerd[1491]: time="2025-01-29T11:14:21.467078993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qrf6n,Uid:fb95e8b8-7e1e-4830-ba86-a24729912100,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:21.481297 kubelet[2588]: E0129 11:14:21.481272 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:21.481765 containerd[1491]: time="2025-01-29T11:14:21.481733953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ccb99,Uid:7da10f76-713b-4d7e-863f-e5885c50ca9f,Namespace:kube-system,Attempt:0,}" Jan 29 11:14:21.919394 kubelet[2588]: E0129 11:14:21.919361 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:22.920616 kubelet[2588]: E0129 11:14:22.920582 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:23.376055 systemd-networkd[1377]: cilium_host: Link UP Jan 29 11:14:23.376272 systemd-networkd[1377]: cilium_net: Link UP Jan 29 11:14:23.377670 systemd-networkd[1377]: cilium_net: Gained carrier Jan 29 11:14:23.378071 systemd-networkd[1377]: cilium_host: Gained carrier Jan 29 11:14:23.378419 systemd-networkd[1377]: cilium_net: Gained IPv6LL Jan 29 11:14:23.378875 systemd-networkd[1377]: cilium_host: Gained IPv6LL Jan 29 11:14:23.476361 systemd-networkd[1377]: cilium_vxlan: Link UP Jan 29 
11:14:23.476370 systemd-networkd[1377]: cilium_vxlan: Gained carrier Jan 29 11:14:23.681095 kernel: NET: Registered PF_ALG protocol family Jan 29 11:14:23.923022 kubelet[2588]: E0129 11:14:23.922976 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:24.325399 systemd-networkd[1377]: lxc_health: Link UP Jan 29 11:14:24.333441 systemd-networkd[1377]: lxc_health: Gained carrier Jan 29 11:14:24.625868 kubelet[2588]: I0129 11:14:24.625502 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ftbd9" podStartSLOduration=10.999498571 podStartE2EDuration="20.625486933s" podCreationTimestamp="2025-01-29 11:14:04 +0000 UTC" firstStartedPulling="2025-01-29 11:14:06.687808857 +0000 UTC m=+8.929904344" lastFinishedPulling="2025-01-29 11:14:16.313797219 +0000 UTC m=+18.555892706" observedRunningTime="2025-01-29 11:14:21.932244324 +0000 UTC m=+24.174339811" watchObservedRunningTime="2025-01-29 11:14:24.625486933 +0000 UTC m=+26.867582420" Jan 29 11:14:24.744365 systemd-networkd[1377]: lxce399044b0cf1: Link UP Jan 29 11:14:24.747730 systemd-networkd[1377]: lxc0ad2c9df43dc: Link UP Jan 29 11:14:24.749055 kernel: eth0: renamed from tmp191bd Jan 29 11:14:24.765647 systemd-networkd[1377]: lxce399044b0cf1: Gained carrier Jan 29 11:14:24.769032 kernel: eth0: renamed from tmpb057d Jan 29 11:14:24.776672 systemd-networkd[1377]: lxc0ad2c9df43dc: Gained carrier Jan 29 11:14:24.936764 kubelet[2588]: E0129 11:14:24.936639 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:25.075188 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL Jan 29 11:14:25.586236 systemd-networkd[1377]: lxc_health: Gained IPv6LL Jan 29 11:14:25.938252 kubelet[2588]: E0129 11:14:25.938130 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:26.162244 systemd-networkd[1377]: lxce399044b0cf1: Gained IPv6LL Jan 29 11:14:26.739191 systemd-networkd[1377]: lxc0ad2c9df43dc: Gained IPv6LL Jan 29 11:14:26.939883 kubelet[2588]: E0129 11:14:26.939841 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:27.954919 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:60306.service - OpenSSH per-connection server daemon (10.0.0.1:60306). Jan 29 11:14:28.007073 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 60306 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:28.009548 sshd-session[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:28.015570 systemd-logind[1472]: New session 10 of user core. Jan 29 11:14:28.023205 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:14:28.113976 containerd[1491]: time="2025-01-29T11:14:28.113855353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:28.113976 containerd[1491]: time="2025-01-29T11:14:28.113929562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:28.113976 containerd[1491]: time="2025-01-29T11:14:28.113943159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:28.114509 containerd[1491]: time="2025-01-29T11:14:28.114055780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:28.124684 containerd[1491]: time="2025-01-29T11:14:28.124169467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:14:28.124684 containerd[1491]: time="2025-01-29T11:14:28.124221696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:14:28.124684 containerd[1491]: time="2025-01-29T11:14:28.124235532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:28.124684 containerd[1491]: time="2025-01-29T11:14:28.124305292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:14:28.142185 systemd[1]: Started cri-containerd-191bd01bc1008252b2dd0e98a4fc7d52f9eadfe209df38dc17f6546c082092ee.scope - libcontainer container 191bd01bc1008252b2dd0e98a4fc7d52f9eadfe209df38dc17f6546c082092ee. Jan 29 11:14:28.149151 systemd[1]: Started cri-containerd-b057debca8d21cba55d29e008ec657489baecddc05906b88e42b6f2c91acdb56.scope - libcontainer container b057debca8d21cba55d29e008ec657489baecddc05906b88e42b6f2c91acdb56. Jan 29 11:14:28.160758 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:28.163730 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:14:28.185673 sshd[3821]: Connection closed by 10.0.0.1 port 60306 Jan 29 11:14:28.186040 sshd-session[3816]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:28.188335 containerd[1491]: time="2025-01-29T11:14:28.188299384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qrf6n,Uid:fb95e8b8-7e1e-4830-ba86-a24729912100,Namespace:kube-system,Attempt:0,} returns sandbox id \"191bd01bc1008252b2dd0e98a4fc7d52f9eadfe209df38dc17f6546c082092ee\"" Jan 29 11:14:28.188893 kubelet[2588]: E0129 11:14:28.188874 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:28.190603 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:60306.service: Deactivated successfully. Jan 29 11:14:28.192572 containerd[1491]: time="2025-01-29T11:14:28.192276327Z" level=info msg="CreateContainer within sandbox \"191bd01bc1008252b2dd0e98a4fc7d52f9eadfe209df38dc17f6546c082092ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:14:28.194318 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:14:28.197431 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:14:28.198802 systemd-logind[1472]: Removed session 10. 
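
The systemd-networkd entries above trace Cilium's datapath coming up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, the lxc_health probe interface, and then one lxc* veth per endpoint as the two coredns pods are wired in. On a live node the same inventory is available from Python's standard library alone; a small sketch, keyed to the name prefixes seen in this log:

    import socket

    CILIUM_PREFIXES = ("cilium_", "lxc")

    def cilium_interfaces() -> list[str]:
        """List network interfaces whose names follow Cilium's conventions."""
        return [name for _, name in socket.if_nameindex()
                if name.startswith(CILIUM_PREFIXES)]

    print(cilium_interfaces())
    # e.g. ['cilium_net', 'cilium_host', 'cilium_vxlan', 'lxc_health', 'lxce399044b0cf1', ...]
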
Jan 29 11:14:28.210257 containerd[1491]: time="2025-01-29T11:14:28.210102944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ccb99,Uid:7da10f76-713b-4d7e-863f-e5885c50ca9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b057debca8d21cba55d29e008ec657489baecddc05906b88e42b6f2c91acdb56\"" Jan 29 11:14:28.211601 kubelet[2588]: E0129 11:14:28.211537 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:28.213256 containerd[1491]: time="2025-01-29T11:14:28.213200891Z" level=info msg="CreateContainer within sandbox \"b057debca8d21cba55d29e008ec657489baecddc05906b88e42b6f2c91acdb56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:14:28.215660 containerd[1491]: time="2025-01-29T11:14:28.215628950Z" level=info msg="CreateContainer within sandbox \"191bd01bc1008252b2dd0e98a4fc7d52f9eadfe209df38dc17f6546c082092ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4d9feb0fb49aa37ce53f3f2cdc0f7037c59e458a4d73edc80955cf6e6656d30\"" Jan 29 11:14:28.216690 containerd[1491]: time="2025-01-29T11:14:28.215956606Z" level=info msg="StartContainer for \"a4d9feb0fb49aa37ce53f3f2cdc0f7037c59e458a4d73edc80955cf6e6656d30\"" Jan 29 11:14:28.229671 containerd[1491]: time="2025-01-29T11:14:28.229622816Z" level=info msg="CreateContainer within sandbox \"b057debca8d21cba55d29e008ec657489baecddc05906b88e42b6f2c91acdb56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29575438ba74e6b0a741e048ab661c818be81fb1935ec8d4dde5ac8f163ed8a7\"" Jan 29 11:14:28.230243 containerd[1491]: time="2025-01-29T11:14:28.230223045Z" level=info msg="StartContainer for \"29575438ba74e6b0a741e048ab661c818be81fb1935ec8d4dde5ac8f163ed8a7\"" Jan 29 11:14:28.243221 systemd[1]: Started cri-containerd-a4d9feb0fb49aa37ce53f3f2cdc0f7037c59e458a4d73edc80955cf6e6656d30.scope - libcontainer container a4d9feb0fb49aa37ce53f3f2cdc0f7037c59e458a4d73edc80955cf6e6656d30. Jan 29 11:14:28.262132 systemd[1]: Started cri-containerd-29575438ba74e6b0a741e048ab661c818be81fb1935ec8d4dde5ac8f163ed8a7.scope - libcontainer container 29575438ba74e6b0a741e048ab661c818be81fb1935ec8d4dde5ac8f163ed8a7. 
Jan 29 11:14:28.279898 containerd[1491]: time="2025-01-29T11:14:28.279848045Z" level=info msg="StartContainer for \"a4d9feb0fb49aa37ce53f3f2cdc0f7037c59e458a4d73edc80955cf6e6656d30\" returns successfully" Jan 29 11:14:28.293362 containerd[1491]: time="2025-01-29T11:14:28.293310592Z" level=info msg="StartContainer for \"29575438ba74e6b0a741e048ab661c818be81fb1935ec8d4dde5ac8f163ed8a7\" returns successfully" Jan 29 11:14:28.945928 kubelet[2588]: E0129 11:14:28.945545 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:28.948441 kubelet[2588]: E0129 11:14:28.948409 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:28.954642 kubelet[2588]: I0129 11:14:28.954577 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qrf6n" podStartSLOduration=24.954560668 podStartE2EDuration="24.954560668s" podCreationTimestamp="2025-01-29 11:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:28.953869729 +0000 UTC m=+31.195965216" watchObservedRunningTime="2025-01-29 11:14:28.954560668 +0000 UTC m=+31.196656145" Jan 29 11:14:28.980950 kubelet[2588]: I0129 11:14:28.980874 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ccb99" podStartSLOduration=24.980852931 podStartE2EDuration="24.980852931s" podCreationTimestamp="2025-01-29 11:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:14:28.979495598 +0000 UTC m=+31.221591085" watchObservedRunningTime="2025-01-29 11:14:28.980852931 +0000 UTC m=+31.222948439" Jan 29 11:14:29.119831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3648977834.mount: Deactivated successfully. Jan 29 11:14:29.949904 kubelet[2588]: E0129 11:14:29.949856 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:29.949904 kubelet[2588]: E0129 11:14:29.949878 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:30.951072 kubelet[2588]: E0129 11:14:30.951031 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:30.951475 kubelet[2588]: E0129 11:14:30.951041 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:14:33.196708 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:60308.service - OpenSSH per-connection server daemon (10.0.0.1:60308). 
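The podStartSLOduration figures in these kubelet entries are simply the observed running time minus the pod creation time. A small sketch that reproduces the coredns-6f6b679f8f-qrf6n value from the timestamps quoted above; the only liberty taken is truncating Go's nanosecond timestamps to the microseconds Python's datetime supports.

    from datetime import datetime

    def parse_k8s_time(ts):
        # e.g. "2025-01-29 11:14:28.954560668 +0000 UTC"; %f only accepts
        # microseconds, so the nanosecond tail is truncated.
        ts = ts.replace(" UTC", "")
        if "." in ts:
            head, rest = ts.split(".", 1)
            frac, tz = rest.split(" ", 1)
            return datetime.strptime(f"{head}.{frac[:6]} {tz}",
                                     "%Y-%m-%d %H:%M:%S.%f %z")
        return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z")

    created = parse_k8s_time("2025-01-29 11:14:04 +0000 UTC")
    running = parse_k8s_time("2025-01-29 11:14:28.954560668 +0000 UTC")
    # prints 24.95456, matching podStartSLOduration=24.954560668s
    print((running - created).total_seconds())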
Jan 29 11:14:33.240513 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 60308 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:33.241855 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:33.245375 systemd-logind[1472]: New session 11 of user core. Jan 29 11:14:33.254116 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:14:33.375876 sshd[4000]: Connection closed by 10.0.0.1 port 60308 Jan 29 11:14:33.376207 sshd-session[3998]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:33.379540 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:60308.service: Deactivated successfully. Jan 29 11:14:33.381280 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:14:33.381832 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:14:33.382595 systemd-logind[1472]: Removed session 11. Jan 29 11:14:38.386942 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:39574.service - OpenSSH per-connection server daemon (10.0.0.1:39574). Jan 29 11:14:38.427682 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 39574 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:38.429206 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:38.433460 systemd-logind[1472]: New session 12 of user core. Jan 29 11:14:38.447128 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:14:38.563426 sshd[4021]: Connection closed by 10.0.0.1 port 39574 Jan 29 11:14:38.563812 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:38.567721 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:39574.service: Deactivated successfully. Jan 29 11:14:38.570227 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:14:38.571924 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:14:38.572986 systemd-logind[1472]: Removed session 12. Jan 29 11:14:43.576920 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:39582.service - OpenSSH per-connection server daemon (10.0.0.1:39582). Jan 29 11:14:43.618796 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 39582 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:43.620081 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:43.623573 systemd-logind[1472]: New session 13 of user core. Jan 29 11:14:43.636123 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:14:43.749589 sshd[4037]: Connection closed by 10.0.0.1 port 39582 Jan 29 11:14:43.749949 sshd-session[4035]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:43.760866 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:39582.service: Deactivated successfully. Jan 29 11:14:43.762739 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:14:43.764607 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:14:43.765951 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:39592.service - OpenSSH per-connection server daemon (10.0.0.1:39592). Jan 29 11:14:43.766696 systemd-logind[1472]: Removed session 13. 
Jan 29 11:14:43.829194 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 39592 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:43.831119 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:43.835155 systemd-logind[1472]: New session 14 of user core. Jan 29 11:14:43.842147 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:14:43.994801 sshd[4052]: Connection closed by 10.0.0.1 port 39592 Jan 29 11:14:43.995528 sshd-session[4050]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:44.007496 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:39592.service: Deactivated successfully. Jan 29 11:14:44.009845 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:14:44.011864 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:14:44.019430 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:39608.service - OpenSSH per-connection server daemon (10.0.0.1:39608). Jan 29 11:14:44.020400 systemd-logind[1472]: Removed session 14. Jan 29 11:14:44.058729 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 39608 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:44.060495 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:44.064409 systemd-logind[1472]: New session 15 of user core. Jan 29 11:14:44.079130 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:14:44.206571 sshd[4065]: Connection closed by 10.0.0.1 port 39608 Jan 29 11:14:44.207088 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:44.211576 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:39608.service: Deactivated successfully. Jan 29 11:14:44.213647 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:14:44.214380 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:14:44.215411 systemd-logind[1472]: Removed session 15. Jan 29 11:14:49.224977 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:58842.service - OpenSSH per-connection server daemon (10.0.0.1:58842). Jan 29 11:14:49.265021 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 58842 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:49.266392 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:49.270151 systemd-logind[1472]: New session 16 of user core. Jan 29 11:14:49.278135 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:14:49.380592 sshd[4079]: Connection closed by 10.0.0.1 port 58842 Jan 29 11:14:49.380940 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:49.384441 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:58842.service: Deactivated successfully. Jan 29 11:14:49.386556 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:14:49.387245 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:14:49.388149 systemd-logind[1472]: Removed session 16. Jan 29 11:14:54.397506 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:58858.service - OpenSSH per-connection server daemon (10.0.0.1:58858). 
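Each SSH connection above follows the same lifecycle: sshd accepts the publickey, pam_unix opens the session, systemd-logind allocates "New session N", and on disconnect the session is closed and removed. A sketch that pairs those logind lines and derives per-session durations, fed with lines copied verbatim from this log; slicing the first 22 characters as the timestamp is an assumption about this journal's fixed-width layout.

    import re
    from datetime import datetime

    LOG_LINES = [
        "Jan 29 11:14:43.835155 systemd-logind[1472]: New session 14 of user core.",
        "Jan 29 11:14:44.011864 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.",
        "Jan 29 11:14:44.064409 systemd-logind[1472]: New session 15 of user core.",
        "Jan 29 11:14:44.214380 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.",
    ]

    opened = {}
    for line in LOG_LINES:
        ts = datetime.strptime(line[:22], "%b %d %H:%M:%S.%f")
        if m := re.search(r"New session (\d+)", line):
            opened[m.group(1)] = ts
        elif m := re.search(r"Session (\d+) logged out", line):
            dur = (ts - opened.pop(m.group(1))).total_seconds()
            # session 14 lasted 0.177s; session 15 lasted 0.150s
            print(f"session {m.group(1)} lasted {dur:.3f}s")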
Jan 29 11:14:54.437768 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 58858 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:54.439204 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:54.443406 systemd-logind[1472]: New session 17 of user core. Jan 29 11:14:54.456158 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:14:54.567478 sshd[4094]: Connection closed by 10.0.0.1 port 58858 Jan 29 11:14:54.567922 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:54.583873 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:58858.service: Deactivated successfully. Jan 29 11:14:54.586150 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:14:54.588303 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:14:54.594305 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:58868.service - OpenSSH per-connection server daemon (10.0.0.1:58868). Jan 29 11:14:54.595197 systemd-logind[1472]: Removed session 17. Jan 29 11:14:54.630348 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 58868 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:54.631942 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:54.635846 systemd-logind[1472]: New session 18 of user core. Jan 29 11:14:54.643124 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:14:54.860445 sshd[4108]: Connection closed by 10.0.0.1 port 58868 Jan 29 11:14:54.860933 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:54.872912 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:58868.service: Deactivated successfully. Jan 29 11:14:54.874751 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:14:54.876689 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:14:54.888356 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:55130.service - OpenSSH per-connection server daemon (10.0.0.1:55130). Jan 29 11:14:54.889348 systemd-logind[1472]: Removed session 18. Jan 29 11:14:54.930257 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 55130 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:54.931741 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:54.935686 systemd-logind[1472]: New session 19 of user core. Jan 29 11:14:54.945140 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:14:56.173654 sshd[4120]: Connection closed by 10.0.0.1 port 55130 Jan 29 11:14:56.174063 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:56.185684 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:55130.service: Deactivated successfully. Jan 29 11:14:56.188952 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:14:56.192544 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:14:56.200515 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:55146.service - OpenSSH per-connection server daemon (10.0.0.1:55146). Jan 29 11:14:56.202524 systemd-logind[1472]: Removed session 19. 
Jan 29 11:14:56.238175 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:56.239740 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:56.243884 systemd-logind[1472]: New session 20 of user core. Jan 29 11:14:56.250146 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:14:56.485246 sshd[4140]: Connection closed by 10.0.0.1 port 55146 Jan 29 11:14:56.485967 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:56.495947 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:55146.service: Deactivated successfully. Jan 29 11:14:56.497915 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:14:56.499813 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:14:56.507320 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:55150.service - OpenSSH per-connection server daemon (10.0.0.1:55150). Jan 29 11:14:56.508451 systemd-logind[1472]: Removed session 20. Jan 29 11:14:56.545613 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 55150 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:14:56.547208 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:14:56.551388 systemd-logind[1472]: New session 21 of user core. Jan 29 11:14:56.565125 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:14:56.681245 sshd[4153]: Connection closed by 10.0.0.1 port 55150 Jan 29 11:14:56.681621 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jan 29 11:14:56.685535 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:55150.service: Deactivated successfully. Jan 29 11:14:56.687542 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:14:56.688176 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:14:56.688999 systemd-logind[1472]: Removed session 21. Jan 29 11:15:01.697380 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:55156.service - OpenSSH per-connection server daemon (10.0.0.1:55156). Jan 29 11:15:01.737957 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 55156 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:15:01.739639 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:01.743610 systemd-logind[1472]: New session 22 of user core. Jan 29 11:15:01.750155 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:15:01.860085 sshd[4170]: Connection closed by 10.0.0.1 port 55156 Jan 29 11:15:01.860432 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:01.864314 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:55156.service: Deactivated successfully. Jan 29 11:15:01.866385 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:15:01.866954 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:15:01.867742 systemd-logind[1472]: Removed session 22. Jan 29 11:15:06.872939 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:47420.service - OpenSSH per-connection server daemon (10.0.0.1:47420). 
Jan 29 11:15:06.912538 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 47420 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:15:06.913895 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:06.917598 systemd-logind[1472]: New session 23 of user core. Jan 29 11:15:06.934113 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:15:07.036357 sshd[4189]: Connection closed by 10.0.0.1 port 47420 Jan 29 11:15:07.036682 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:07.040126 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:47420.service: Deactivated successfully. Jan 29 11:15:07.042100 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:15:07.042757 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:15:07.043560 systemd-logind[1472]: Removed session 23. Jan 29 11:15:12.048091 systemd[1]: Started sshd@23-10.0.0.47:22-10.0.0.1:47432.service - OpenSSH per-connection server daemon (10.0.0.1:47432). Jan 29 11:15:12.088362 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 47432 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:15:12.089880 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:12.093462 systemd-logind[1472]: New session 24 of user core. Jan 29 11:15:12.103130 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:15:12.206730 sshd[4203]: Connection closed by 10.0.0.1 port 47432 Jan 29 11:15:12.207081 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:12.211066 systemd[1]: sshd@23-10.0.0.47:22-10.0.0.1:47432.service: Deactivated successfully. Jan 29 11:15:12.213081 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:15:12.213668 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:15:12.214623 systemd-logind[1472]: Removed session 24. Jan 29 11:15:16.853337 kubelet[2588]: E0129 11:15:16.853290 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:17.218833 systemd[1]: Started sshd@24-10.0.0.47:22-10.0.0.1:37192.service - OpenSSH per-connection server daemon (10.0.0.1:37192). Jan 29 11:15:17.258574 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 37192 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:15:17.259959 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:17.263680 systemd-logind[1472]: New session 25 of user core. Jan 29 11:15:17.275122 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:15:17.378130 sshd[4217]: Connection closed by 10.0.0.1 port 37192 Jan 29 11:15:17.378612 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:17.392830 systemd[1]: sshd@24-10.0.0.47:22-10.0.0.1:37192.service: Deactivated successfully. Jan 29 11:15:17.394649 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:15:17.396427 systemd-logind[1472]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:15:17.404246 systemd[1]: Started sshd@25-10.0.0.47:22-10.0.0.1:37208.service - OpenSSH per-connection server daemon (10.0.0.1:37208). Jan 29 11:15:17.405229 systemd-logind[1472]: Removed session 25. 
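The dns.go:153 error that recurs throughout reflects glibc's three-nameserver limit: when resolv.conf lists more, kubelet applies only the first three and logs the rest as omitted, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hypothetical standalone re-implementation of that trimming; the limit constant mirrors glibc's MAXNS, and the fourth server 8.8.4.4 in the sample is an invented stand-in for whatever was actually dropped.

    MAX_NAMESERVERS = 3  # glibc MAXNS: resolvers honor at most 3 entries

    def effective_nameservers(resolv_conf):
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            applied = " ".join(servers[:MAX_NAMESERVERS])
            print("Nameserver limits were exceeded, some nameservers have "
                  f"been omitted, the applied nameserver line is: {applied}")
        return servers[:MAX_NAMESERVERS]

    conf = ("nameserver 1.1.1.1\n" "nameserver 1.0.0.1\n"
            "nameserver 8.8.8.8\n" "nameserver 8.8.4.4\n")
    print(effective_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']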
Jan 29 11:15:17.439968 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 37208 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w Jan 29 11:15:17.441307 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:17.445352 systemd-logind[1472]: New session 26 of user core. Jan 29 11:15:17.461257 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:15:17.853933 kubelet[2588]: E0129 11:15:17.853897 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:15:18.790362 containerd[1491]: time="2025-01-29T11:15:18.790268466Z" level=info msg="StopContainer for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" with timeout 30 (s)" Jan 29 11:15:18.791556 containerd[1491]: time="2025-01-29T11:15:18.791523757Z" level=info msg="Stop container \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" with signal terminated" Jan 29 11:15:18.802763 systemd[1]: cri-containerd-17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1.scope: Deactivated successfully. Jan 29 11:15:18.816284 containerd[1491]: time="2025-01-29T11:15:18.816199158Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:15:18.824341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1-rootfs.mount: Deactivated successfully. Jan 29 11:15:18.825310 containerd[1491]: time="2025-01-29T11:15:18.825275476Z" level=info msg="StopContainer for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" with timeout 2 (s)" Jan 29 11:15:18.825623 containerd[1491]: time="2025-01-29T11:15:18.825598424Z" level=info msg="Stop container \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" with signal terminated" Jan 29 11:15:18.830795 containerd[1491]: time="2025-01-29T11:15:18.830739737Z" level=info msg="shim disconnected" id=17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1 namespace=k8s.io Jan 29 11:15:18.830795 containerd[1491]: time="2025-01-29T11:15:18.830790714Z" level=warning msg="cleaning up after shim disconnected" id=17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1 namespace=k8s.io Jan 29 11:15:18.830871 containerd[1491]: time="2025-01-29T11:15:18.830799150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:18.833171 systemd-networkd[1377]: lxc_health: Link DOWN Jan 29 11:15:18.833180 systemd-networkd[1377]: lxc_health: Lost carrier Jan 29 11:15:18.853622 containerd[1491]: time="2025-01-29T11:15:18.853588935Z" level=info msg="StopContainer for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" returns successfully" Jan 29 11:15:18.858510 containerd[1491]: time="2025-01-29T11:15:18.858466895Z" level=info msg="StopPodSandbox for \"ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9\"" Jan 29 11:15:18.858934 systemd[1]: cri-containerd-cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5.scope: Deactivated successfully. Jan 29 11:15:18.859419 systemd[1]: cri-containerd-cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5.scope: Consumed 6.710s CPU time. 
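The two StopContainer entries above, one "with timeout 30 (s)" and one "with timeout 2 (s)", each stopped "with signal terminated", describe the usual terminate-then-kill pattern: deliver SIGTERM, wait out the grace period, and escalate to SIGKILL only if the process outlives it. A generic Python sketch of that pattern, not containerd's actual implementation:

    import subprocess

    def stop_with_timeout(proc, grace_seconds):
        proc.terminate()                      # SIGTERM: "signal terminated"
        try:
            proc.wait(timeout=grace_seconds)  # grace period: "with timeout N (s)"
        except subprocess.TimeoutExpired:
            proc.kill()                       # SIGKILL once the grace runs out
            proc.wait()                       # reap the process

    p = subprocess.Popen(["sleep", "60"])
    stop_with_timeout(p, 2.0)
    print("exit status:", p.returncode)       # -15: exited on SIGTERM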
Jan 29 11:15:18.870659 containerd[1491]: time="2025-01-29T11:15:18.858519615Z" level=info msg="Container to stop \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:18.872908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9-shm.mount: Deactivated successfully. Jan 29 11:15:18.879920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5-rootfs.mount: Deactivated successfully. Jan 29 11:15:18.880860 systemd[1]: cri-containerd-ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9.scope: Deactivated successfully. Jan 29 11:15:18.894846 containerd[1491]: time="2025-01-29T11:15:18.894770394Z" level=info msg="shim disconnected" id=cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5 namespace=k8s.io Jan 29 11:15:18.894846 containerd[1491]: time="2025-01-29T11:15:18.894828686Z" level=warning msg="cleaning up after shim disconnected" id=cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5 namespace=k8s.io Jan 29 11:15:18.894846 containerd[1491]: time="2025-01-29T11:15:18.894837442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:18.903462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9-rootfs.mount: Deactivated successfully. Jan 29 11:15:18.908214 containerd[1491]: time="2025-01-29T11:15:18.908077103Z" level=info msg="shim disconnected" id=ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9 namespace=k8s.io Jan 29 11:15:18.908214 containerd[1491]: time="2025-01-29T11:15:18.908137378Z" level=warning msg="cleaning up after shim disconnected" id=ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9 namespace=k8s.io Jan 29 11:15:18.908214 containerd[1491]: time="2025-01-29T11:15:18.908147638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:18.914992 containerd[1491]: time="2025-01-29T11:15:18.914696573Z" level=info msg="StopContainer for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" returns successfully" Jan 29 11:15:18.915206 containerd[1491]: time="2025-01-29T11:15:18.915161021Z" level=info msg="StopPodSandbox for \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\"" Jan 29 11:15:18.915244 containerd[1491]: time="2025-01-29T11:15:18.915189706Z" level=info msg="Container to stop \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:18.915244 containerd[1491]: time="2025-01-29T11:15:18.915223199Z" level=info msg="Container to stop \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:18.915244 containerd[1491]: time="2025-01-29T11:15:18.915232909Z" level=info msg="Container to stop \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:18.915244 containerd[1491]: time="2025-01-29T11:15:18.915242346Z" level=info msg="Container to stop \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:18.915382 containerd[1491]: 
time="2025-01-29T11:15:18.915251533Z" level=info msg="Container to stop \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:18.921911 systemd[1]: cri-containerd-747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15.scope: Deactivated successfully. Jan 29 11:15:18.924977 containerd[1491]: time="2025-01-29T11:15:18.924930354Z" level=info msg="TearDown network for sandbox \"ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9\" successfully" Jan 29 11:15:18.924977 containerd[1491]: time="2025-01-29T11:15:18.924969229Z" level=info msg="StopPodSandbox for \"ae6aef2605a829cef3fe5f3625cc2ca3bbd52b67d9cb56f05be5da1855143fa9\" returns successfully" Jan 29 11:15:18.946394 containerd[1491]: time="2025-01-29T11:15:18.946310514Z" level=info msg="shim disconnected" id=747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15 namespace=k8s.io Jan 29 11:15:18.946394 containerd[1491]: time="2025-01-29T11:15:18.946384475Z" level=warning msg="cleaning up after shim disconnected" id=747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15 namespace=k8s.io Jan 29 11:15:18.946394 containerd[1491]: time="2025-01-29T11:15:18.946396669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:18.963103 containerd[1491]: time="2025-01-29T11:15:18.963051991Z" level=info msg="TearDown network for sandbox \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" successfully" Jan 29 11:15:18.963103 containerd[1491]: time="2025-01-29T11:15:18.963091828Z" level=info msg="StopPodSandbox for \"747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15\" returns successfully" Jan 29 11:15:19.025454 kubelet[2588]: I0129 11:15:19.025421 2588 scope.go:117] "RemoveContainer" containerID="cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5" Jan 29 11:15:19.026670 containerd[1491]: time="2025-01-29T11:15:19.026624858Z" level=info msg="RemoveContainer for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\"" Jan 29 11:15:19.033303 kubelet[2588]: I0129 11:15:19.033247 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z26mh\" (UniqueName: \"kubernetes.io/projected/5ed11690-cd93-45ed-9778-ba1458f97b07-kube-api-access-z26mh\") pod \"5ed11690-cd93-45ed-9778-ba1458f97b07\" (UID: \"5ed11690-cd93-45ed-9778-ba1458f97b07\") " Jan 29 11:15:19.033303 kubelet[2588]: I0129 11:15:19.033301 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ed11690-cd93-45ed-9778-ba1458f97b07-cilium-config-path\") pod \"5ed11690-cd93-45ed-9778-ba1458f97b07\" (UID: \"5ed11690-cd93-45ed-9778-ba1458f97b07\") " Jan 29 11:15:19.034867 containerd[1491]: time="2025-01-29T11:15:19.034826166Z" level=info msg="RemoveContainer for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" returns successfully" Jan 29 11:15:19.035132 kubelet[2588]: I0129 11:15:19.035055 2588 scope.go:117] "RemoveContainer" containerID="ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08" Jan 29 11:15:19.035901 containerd[1491]: time="2025-01-29T11:15:19.035880501Z" level=info msg="RemoveContainer for \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\"" Jan 29 11:15:19.036520 kubelet[2588]: I0129 11:15:19.036479 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5ed11690-cd93-45ed-9778-ba1458f97b07-kube-api-access-z26mh" (OuterVolumeSpecName: "kube-api-access-z26mh") pod "5ed11690-cd93-45ed-9778-ba1458f97b07" (UID: "5ed11690-cd93-45ed-9778-ba1458f97b07"). InnerVolumeSpecName "kube-api-access-z26mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:19.037198 kubelet[2588]: I0129 11:15:19.037171 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed11690-cd93-45ed-9778-ba1458f97b07-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ed11690-cd93-45ed-9778-ba1458f97b07" (UID: "5ed11690-cd93-45ed-9778-ba1458f97b07"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:19.039565 containerd[1491]: time="2025-01-29T11:15:19.039538203Z" level=info msg="RemoveContainer for \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\" returns successfully" Jan 29 11:15:19.039717 kubelet[2588]: I0129 11:15:19.039686 2588 scope.go:117] "RemoveContainer" containerID="9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d" Jan 29 11:15:19.040867 containerd[1491]: time="2025-01-29T11:15:19.040776851Z" level=info msg="RemoveContainer for \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\"" Jan 29 11:15:19.043892 containerd[1491]: time="2025-01-29T11:15:19.043862008Z" level=info msg="RemoveContainer for \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\" returns successfully" Jan 29 11:15:19.044119 kubelet[2588]: I0129 11:15:19.044019 2588 scope.go:117] "RemoveContainer" containerID="801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28" Jan 29 11:15:19.044874 containerd[1491]: time="2025-01-29T11:15:19.044833204Z" level=info msg="RemoveContainer for \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\"" Jan 29 11:15:19.048312 containerd[1491]: time="2025-01-29T11:15:19.048273331Z" level=info msg="RemoveContainer for \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\" returns successfully" Jan 29 11:15:19.048473 kubelet[2588]: I0129 11:15:19.048430 2588 scope.go:117] "RemoveContainer" containerID="248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c" Jan 29 11:15:19.049371 containerd[1491]: time="2025-01-29T11:15:19.049348766Z" level=info msg="RemoveContainer for \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\"" Jan 29 11:15:19.052210 containerd[1491]: time="2025-01-29T11:15:19.052182132Z" level=info msg="RemoveContainer for \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\" returns successfully" Jan 29 11:15:19.052338 kubelet[2588]: I0129 11:15:19.052319 2588 scope.go:117] "RemoveContainer" containerID="cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5" Jan 29 11:15:19.052546 containerd[1491]: time="2025-01-29T11:15:19.052492065Z" level=error msg="ContainerStatus for \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\": not found" Jan 29 11:15:19.059061 kubelet[2588]: E0129 11:15:19.058998 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\": not found" 
containerID="cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5" Jan 29 11:15:19.059155 kubelet[2588]: I0129 11:15:19.059056 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5"} err="failed to get container status \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf7f08144315c324f3044270d4ad1a2b8b583e85778e362cb8295dc934ba85f5\": not found" Jan 29 11:15:19.059155 kubelet[2588]: I0129 11:15:19.059146 2588 scope.go:117] "RemoveContainer" containerID="ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08" Jan 29 11:15:19.059334 containerd[1491]: time="2025-01-29T11:15:19.059294798Z" level=error msg="ContainerStatus for \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\": not found" Jan 29 11:15:19.059458 kubelet[2588]: E0129 11:15:19.059430 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\": not found" containerID="ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08" Jan 29 11:15:19.059512 kubelet[2588]: I0129 11:15:19.059458 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08"} err="failed to get container status \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff0aa5baaa140d0f7651a14745bbfadefc8aa38d162e9d8835471e9841a22c08\": not found" Jan 29 11:15:19.059512 kubelet[2588]: I0129 11:15:19.059476 2588 scope.go:117] "RemoveContainer" containerID="9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d" Jan 29 11:15:19.059651 containerd[1491]: time="2025-01-29T11:15:19.059624348Z" level=error msg="ContainerStatus for \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\": not found" Jan 29 11:15:19.059771 kubelet[2588]: E0129 11:15:19.059747 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\": not found" containerID="9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d" Jan 29 11:15:19.059804 kubelet[2588]: I0129 11:15:19.059781 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d"} err="failed to get container status \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a5995f9c9869d8753440b92133f6d5b0dc05081d4c4660366de2e47aadcf25d\": not found" Jan 29 11:15:19.059895 kubelet[2588]: I0129 11:15:19.059811 2588 scope.go:117] "RemoveContainer" 
containerID="801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28" Jan 29 11:15:19.060101 containerd[1491]: time="2025-01-29T11:15:19.060063988Z" level=error msg="ContainerStatus for \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\": not found" Jan 29 11:15:19.060209 kubelet[2588]: E0129 11:15:19.060191 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\": not found" containerID="801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28" Jan 29 11:15:19.060237 kubelet[2588]: I0129 11:15:19.060216 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28"} err="failed to get container status \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\": rpc error: code = NotFound desc = an error occurred when try to find container \"801d753e7d2601866470ff36af4d7c2dbe66038376fff813999cb582a86f5b28\": not found" Jan 29 11:15:19.060237 kubelet[2588]: I0129 11:15:19.060229 2588 scope.go:117] "RemoveContainer" containerID="248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c" Jan 29 11:15:19.060407 containerd[1491]: time="2025-01-29T11:15:19.060373300Z" level=error msg="ContainerStatus for \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\": not found" Jan 29 11:15:19.060532 kubelet[2588]: E0129 11:15:19.060507 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\": not found" containerID="248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c" Jan 29 11:15:19.060573 kubelet[2588]: I0129 11:15:19.060529 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c"} err="failed to get container status \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\": rpc error: code = NotFound desc = an error occurred when try to find container \"248c79193060d24624f97dd8ec45dced0918a3b8833f1e4bca2aa27ef3b9223c\": not found" Jan 29 11:15:19.060573 kubelet[2588]: I0129 11:15:19.060544 2588 scope.go:117] "RemoveContainer" containerID="17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1" Jan 29 11:15:19.061339 containerd[1491]: time="2025-01-29T11:15:19.061305993Z" level=info msg="RemoveContainer for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\"" Jan 29 11:15:19.064380 containerd[1491]: time="2025-01-29T11:15:19.064339882Z" level=info msg="RemoveContainer for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" returns successfully" Jan 29 11:15:19.064496 kubelet[2588]: I0129 11:15:19.064467 2588 scope.go:117] "RemoveContainer" containerID="17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1" Jan 29 11:15:19.064657 containerd[1491]: time="2025-01-29T11:15:19.064616892Z" level=error 
msg="ContainerStatus for \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\": not found" Jan 29 11:15:19.064800 kubelet[2588]: E0129 11:15:19.064749 2588 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\": not found" containerID="17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1" Jan 29 11:15:19.064870 kubelet[2588]: I0129 11:15:19.064789 2588 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1"} err="failed to get container status \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"17810539defa8571e619bff634aceec5a977e8b7cea57fdf6a7a49ce044da1b1\": not found" Jan 29 11:15:19.134168 kubelet[2588]: I0129 11:15:19.134101 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-etc-cni-netd\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134168 kubelet[2588]: I0129 11:15:19.134152 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-bpf-maps\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134168 kubelet[2588]: I0129 11:15:19.134177 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cni-path\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134395 kubelet[2588]: I0129 11:15:19.134206 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-hubble-tls\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134395 kubelet[2588]: I0129 11:15:19.134204 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134395 kubelet[2588]: I0129 11:15:19.134226 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-cgroup\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134395 kubelet[2588]: I0129 11:15:19.134246 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-lib-modules\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134395 kubelet[2588]: I0129 11:15:19.134225 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134395 kubelet[2588]: I0129 11:15:19.134265 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-hostproc\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134555 kubelet[2588]: I0129 11:15:19.134286 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cni-path" (OuterVolumeSpecName: "cni-path") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134555 kubelet[2588]: I0129 11:15:19.134286 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-net\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134555 kubelet[2588]: I0129 11:15:19.134311 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134555 kubelet[2588]: I0129 11:15:19.134334 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eba734ef-816f-46bb-baf1-695eebc4010c-clustermesh-secrets\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134555 kubelet[2588]: I0129 11:15:19.134354 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-xtables-lock\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134663 kubelet[2588]: I0129 11:15:19.134335 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134663 kubelet[2588]: I0129 11:15:19.134348 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134663 kubelet[2588]: I0129 11:15:19.134381 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134663 kubelet[2588]: I0129 11:15:19.134361 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-hostproc" (OuterVolumeSpecName: "hostproc") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134663 kubelet[2588]: I0129 11:15:19.134404 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134369 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-run\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134431 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-config-path\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134447 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrb2z\" (UniqueName: \"kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-kube-api-access-nrb2z\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134462 2588 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-kernel\") pod \"eba734ef-816f-46bb-baf1-695eebc4010c\" (UID: \"eba734ef-816f-46bb-baf1-695eebc4010c\") " Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134489 2588 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134507 2588 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134779 kubelet[2588]: I0129 11:15:19.134515 2588 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z26mh\" (UniqueName: \"kubernetes.io/projected/5ed11690-cd93-45ed-9778-ba1458f97b07-kube-api-access-z26mh\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134525 2588 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ed11690-cd93-45ed-9778-ba1458f97b07-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134532 2588 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134539 2588 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134547 2588 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134554 2588 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134561 2588 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134569 2588 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.134948 kubelet[2588]: I0129 11:15:19.134577 2588 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.135177 kubelet[2588]: I0129 11:15:19.134594 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:19.138046 kubelet[2588]: I0129 11:15:19.137992 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:19.138046 kubelet[2588]: I0129 11:15:19.138042 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-kube-api-access-nrb2z" (OuterVolumeSpecName: "kube-api-access-nrb2z") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "kube-api-access-nrb2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:19.138209 kubelet[2588]: I0129 11:15:19.138076 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba734ef-816f-46bb-baf1-695eebc4010c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:19.138851 kubelet[2588]: I0129 11:15:19.138813 2588 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eba734ef-816f-46bb-baf1-695eebc4010c" (UID: "eba734ef-816f-46bb-baf1-695eebc4010c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:19.234781 kubelet[2588]: I0129 11:15:19.234727 2588 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eba734ef-816f-46bb-baf1-695eebc4010c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.234781 kubelet[2588]: I0129 11:15:19.234758 2588 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eba734ef-816f-46bb-baf1-695eebc4010c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.234781 kubelet[2588]: I0129 11:15:19.234768 2588 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eba734ef-816f-46bb-baf1-695eebc4010c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.234781 kubelet[2588]: I0129 11:15:19.234777 2588 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nrb2z\" (UniqueName: \"kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-kube-api-access-nrb2z\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.234781 kubelet[2588]: I0129 11:15:19.234789 2588 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eba734ef-816f-46bb-baf1-695eebc4010c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 11:15:19.334980 systemd[1]: Removed slice kubepods-burstable-podeba734ef_816f_46bb_baf1_695eebc4010c.slice - libcontainer container kubepods-burstable-podeba734ef_816f_46bb_baf1_695eebc4010c.slice. Jan 29 11:15:19.335124 systemd[1]: kubepods-burstable-podeba734ef_816f_46bb_baf1_695eebc4010c.slice: Consumed 6.811s CPU time. Jan 29 11:15:19.336413 systemd[1]: Removed slice kubepods-besteffort-pod5ed11690_cd93_45ed_9778_ba1458f97b07.slice - libcontainer container kubepods-besteffort-pod5ed11690_cd93_45ed_9778_ba1458f97b07.slice. Jan 29 11:15:19.797424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15-rootfs.mount: Deactivated successfully. Jan 29 11:15:19.797550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-747ccb3602ab10ae48b6dad502e1318e11cda423fffa9fcd519bec0cd67bad15-shm.mount: Deactivated successfully. Jan 29 11:15:19.797629 systemd[1]: var-lib-kubelet-pods-eba734ef\x2d816f\x2d46bb\x2dbaf1\x2d695eebc4010c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:15:19.797706 systemd[1]: var-lib-kubelet-pods-eba734ef\x2d816f\x2d46bb\x2dbaf1\x2d695eebc4010c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:15:19.797779 systemd[1]: var-lib-kubelet-pods-5ed11690\x2dcd93\x2d45ed\x2d9778\x2dba1458f97b07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz26mh.mount: Deactivated successfully. Jan 29 11:15:19.797856 systemd[1]: var-lib-kubelet-pods-eba734ef\x2d816f\x2d46bb\x2dbaf1\x2d695eebc4010c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrb2z.mount: Deactivated successfully. 
Jan 29 11:15:19.856065 kubelet[2588]: I0129 11:15:19.855994 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed11690-cd93-45ed-9778-ba1458f97b07" path="/var/lib/kubelet/pods/5ed11690-cd93-45ed-9778-ba1458f97b07/volumes"
Jan 29 11:15:19.856622 kubelet[2588]: I0129 11:15:19.856597 2588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" path="/var/lib/kubelet/pods/eba734ef-816f-46bb-baf1-695eebc4010c/volumes"
Jan 29 11:15:20.761383 sshd[4232]: Connection closed by 10.0.0.1 port 37208
Jan 29 11:15:20.761800 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:20.771868 systemd[1]: sshd@25-10.0.0.47:22-10.0.0.1:37208.service: Deactivated successfully.
Jan 29 11:15:20.773941 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:15:20.775470 systemd-logind[1472]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:15:20.794315 systemd[1]: Started sshd@26-10.0.0.47:22-10.0.0.1:37224.service - OpenSSH per-connection server daemon (10.0.0.1:37224).
Jan 29 11:15:20.795402 systemd-logind[1472]: Removed session 26.
Jan 29 11:15:20.833251 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 37224 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:15:20.835033 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:20.839471 systemd-logind[1472]: New session 27 of user core.
Jan 29 11:15:20.848154 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 11:15:20.854100 kubelet[2588]: E0129 11:15:20.854060 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:21.456530 sshd[4393]: Connection closed by 10.0.0.1 port 37224
Jan 29 11:15:21.457890 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:21.469900 systemd[1]: sshd@26-10.0.0.47:22-10.0.0.1:37224.service: Deactivated successfully.
Jan 29 11:15:21.474148 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 11:15:21.476767 systemd-logind[1472]: Session 27 logged out. Waiting for processes to exit.
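The recurring `dns.go:153 "Nameserver limits exceeded"` errors reflect kubelet capping how many nameservers it will carry into pod resolv.conf files; the applied line here keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8), matching the classic glibc MAXNS=3 resolver limit. A minimal sketch of that truncation, assuming a simple resolv.conf parser (not kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// firstNameservers returns at most max nameserver entries from
// resolv.conf-style content, mirroring the truncation that produces
// the "applied nameserver line" in the kubelet warning above.
func firstNameservers(conf string, max int) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(conf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > max {
		ns = ns[:max] // extra servers are dropped, hence the warning
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	// Prints: 1.1.1.1 1.0.0.1 8.8.8.8 — the applied line from the log.
	fmt.Println(strings.Join(firstNameservers(conf, 3), " "))
}
```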
Jan 29 11:15:21.479938 kubelet[2588]: E0129 11:15:21.478924 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" containerName="cilium-agent"
Jan 29 11:15:21.479938 kubelet[2588]: E0129 11:15:21.478952 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" containerName="mount-bpf-fs"
Jan 29 11:15:21.479938 kubelet[2588]: E0129 11:15:21.478960 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" containerName="clean-cilium-state"
Jan 29 11:15:21.479938 kubelet[2588]: E0129 11:15:21.478968 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" containerName="mount-cgroup"
Jan 29 11:15:21.479938 kubelet[2588]: E0129 11:15:21.478976 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" containerName="apply-sysctl-overwrites"
Jan 29 11:15:21.479938 kubelet[2588]: E0129 11:15:21.478985 2588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ed11690-cd93-45ed-9778-ba1458f97b07" containerName="cilium-operator"
Jan 29 11:15:21.479938 kubelet[2588]: I0129 11:15:21.479022 2588 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed11690-cd93-45ed-9778-ba1458f97b07" containerName="cilium-operator"
Jan 29 11:15:21.479938 kubelet[2588]: I0129 11:15:21.479030 2588 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba734ef-816f-46bb-baf1-695eebc4010c" containerName="cilium-agent"
Jan 29 11:15:21.487418 systemd[1]: Started sshd@27-10.0.0.47:22-10.0.0.1:37230.service - OpenSSH per-connection server daemon (10.0.0.1:37230).
Jan 29 11:15:21.490936 systemd-logind[1472]: Removed session 27.
Jan 29 11:15:21.498840 systemd[1]: Created slice kubepods-burstable-pod53bfc73f_3737_4158_a02a_c31dad845cfc.slice - libcontainer container kubepods-burstable-pod53bfc73f_3737_4158_a02a_c31dad845cfc.slice.
Jan 29 11:15:21.526792 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 37230 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:15:21.528384 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:21.532194 systemd-logind[1472]: New session 28 of user core.
Jan 29 11:15:21.541190 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 11:15:21.547186 kubelet[2588]: I0129 11:15:21.547157 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl9rq\" (UniqueName: \"kubernetes.io/projected/53bfc73f-3737-4158-a02a-c31dad845cfc-kube-api-access-vl9rq\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547265 kubelet[2588]: I0129 11:15:21.547191 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-xtables-lock\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547265 kubelet[2588]: I0129 11:15:21.547211 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/53bfc73f-3737-4158-a02a-c31dad845cfc-cilium-ipsec-secrets\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547265 kubelet[2588]: I0129 11:15:21.547227 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-cilium-run\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547265 kubelet[2588]: I0129 11:15:21.547239 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-bpf-maps\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547265 kubelet[2588]: I0129 11:15:21.547255 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-host-proc-sys-kernel\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547265 kubelet[2588]: I0129 11:15:21.547268 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-hostproc\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547400 kubelet[2588]: I0129 11:15:21.547283 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-cilium-cgroup\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547400 kubelet[2588]: I0129 11:15:21.547296 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53bfc73f-3737-4158-a02a-c31dad845cfc-clustermesh-secrets\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547400 kubelet[2588]: I0129 11:15:21.547309 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-cni-path\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547400 kubelet[2588]: I0129 11:15:21.547321 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-etc-cni-netd\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547400 kubelet[2588]: I0129 11:15:21.547334 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-lib-modules\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547400 kubelet[2588]: I0129 11:15:21.547347 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53bfc73f-3737-4158-a02a-c31dad845cfc-cilium-config-path\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547601 kubelet[2588]: I0129 11:15:21.547360 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53bfc73f-3737-4158-a02a-c31dad845cfc-host-proc-sys-net\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.547601 kubelet[2588]: I0129 11:15:21.547376 2588 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53bfc73f-3737-4158-a02a-c31dad845cfc-hubble-tls\") pod \"cilium-2l29k\" (UID: \"53bfc73f-3737-4158-a02a-c31dad845cfc\") " pod="kube-system/cilium-2l29k"
Jan 29 11:15:21.593001 sshd[4406]: Connection closed by 10.0.0.1 port 37230
Jan 29 11:15:21.593391 sshd-session[4404]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:21.607709 systemd[1]: sshd@27-10.0.0.47:22-10.0.0.1:37230.service: Deactivated successfully.
Jan 29 11:15:21.609300 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 11:15:21.610847 systemd-logind[1472]: Session 28 logged out. Waiting for processes to exit.
Jan 29 11:15:21.612048 systemd[1]: Started sshd@28-10.0.0.47:22-10.0.0.1:37242.service - OpenSSH per-connection server daemon (10.0.0.1:37242).
Jan 29 11:15:21.612756 systemd-logind[1472]: Removed session 28.
Jan 29 11:15:21.653608 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 37242 ssh2: RSA SHA256:xqpJTelN1UchJb9Z7O/KWDYKQbcyOSl3Ip3rJWr1Y+w
Jan 29 11:15:21.655689 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:15:21.669641 systemd-logind[1472]: New session 29 of user core.
Jan 29 11:15:21.678131 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 29 11:15:21.802313 kubelet[2588]: E0129 11:15:21.802177 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:21.803075 containerd[1491]: time="2025-01-29T11:15:21.802764441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2l29k,Uid:53bfc73f-3737-4158-a02a-c31dad845cfc,Namespace:kube-system,Attempt:0,}"
Jan 29 11:15:21.824093 containerd[1491]: time="2025-01-29T11:15:21.823992050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:15:21.824093 containerd[1491]: time="2025-01-29T11:15:21.824057004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:15:21.824093 containerd[1491]: time="2025-01-29T11:15:21.824067463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:21.824319 containerd[1491]: time="2025-01-29T11:15:21.824134822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:15:21.846144 systemd[1]: Started cri-containerd-73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847.scope - libcontainer container 73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847.
Jan 29 11:15:21.867792 containerd[1491]: time="2025-01-29T11:15:21.867757288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2l29k,Uid:53bfc73f-3737-4158-a02a-c31dad845cfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\""
Jan 29 11:15:21.868743 kubelet[2588]: E0129 11:15:21.868418 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:21.870487 containerd[1491]: time="2025-01-29T11:15:21.870434301Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:15:21.909161 containerd[1491]: time="2025-01-29T11:15:21.909101956Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379\""
Jan 29 11:15:21.909716 containerd[1491]: time="2025-01-29T11:15:21.909597853Z" level=info msg="StartContainer for \"32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379\""
Jan 29 11:15:21.935234 systemd[1]: Started cri-containerd-32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379.scope - libcontainer container 32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379.
Jan 29 11:15:21.960485 containerd[1491]: time="2025-01-29T11:15:21.960434757Z" level=info msg="StartContainer for \"32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379\" returns successfully"
Jan 29 11:15:21.970248 systemd[1]: cri-containerd-32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379.scope: Deactivated successfully.
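The RunPodSandbox → CreateContainer → StartContainer sequence above is the standard CRI conversation between kubelet and containerd. A minimal sketch of those three calls over containerd's CRI socket, using the published `k8s.io/cri-api` types (an illustration under those assumptions, not kubelet's actual code; the image reference is hypothetical):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox, as in the "RunPodSandbox for &PodSandboxMetadata{...}" line.
	sbCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "cilium-2l29k", Namespace: "kube-system",
			Uid: "53bfc73f-3737-4158-a02a-c31dad845cfc", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox (here: the mount-cgroup init container).
	cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // hypothetical image ref
		},
		SandboxConfig: sbCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```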
Jan 29 11:15:22.006426 containerd[1491]: time="2025-01-29T11:15:22.006353263Z" level=info msg="shim disconnected" id=32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379 namespace=k8s.io
Jan 29 11:15:22.006426 containerd[1491]: time="2025-01-29T11:15:22.006410503Z" level=warning msg="cleaning up after shim disconnected" id=32cd2b5d936dbe2a6a965e4d9f5a836d3df7bfeb7bcedeabb84077e457ff3379 namespace=k8s.io
Jan 29 11:15:22.006426 containerd[1491]: time="2025-01-29T11:15:22.006420241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:22.035613 kubelet[2588]: E0129 11:15:22.035573 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:22.037134 containerd[1491]: time="2025-01-29T11:15:22.037094154Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:15:22.049673 containerd[1491]: time="2025-01-29T11:15:22.049457543Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705\""
Jan 29 11:15:22.050182 containerd[1491]: time="2025-01-29T11:15:22.050153211Z" level=info msg="StartContainer for \"82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705\""
Jan 29 11:15:22.081147 systemd[1]: Started cri-containerd-82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705.scope - libcontainer container 82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705.
Jan 29 11:15:22.109089 containerd[1491]: time="2025-01-29T11:15:22.109047662Z" level=info msg="StartContainer for \"82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705\" returns successfully"
Jan 29 11:15:22.114727 systemd[1]: cri-containerd-82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705.scope: Deactivated successfully.
Jan 29 11:15:22.137045 containerd[1491]: time="2025-01-29T11:15:22.136972636Z" level=info msg="shim disconnected" id=82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705 namespace=k8s.io
Jan 29 11:15:22.137045 containerd[1491]: time="2025-01-29T11:15:22.137039224Z" level=warning msg="cleaning up after shim disconnected" id=82a7c25fdc28be7ec7853cd3d089fa9bde8bf521df9ad1e3c5fae41f86bd2705 namespace=k8s.io
Jan 29 11:15:22.137045 containerd[1491]: time="2025-01-29T11:15:22.137047219Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:22.899519 kubelet[2588]: E0129 11:15:22.899467 2588 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:15:23.038374 kubelet[2588]: E0129 11:15:23.038347 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:23.039977 containerd[1491]: time="2025-01-29T11:15:23.039929978Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:15:23.090450 containerd[1491]: time="2025-01-29T11:15:23.090390208Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f\""
Jan 29 11:15:23.090899 containerd[1491]: time="2025-01-29T11:15:23.090874471Z" level=info msg="StartContainer for \"b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f\""
Jan 29 11:15:23.122159 systemd[1]: Started cri-containerd-b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f.scope - libcontainer container b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f.
Jan 29 11:15:23.154383 containerd[1491]: time="2025-01-29T11:15:23.154207302Z" level=info msg="StartContainer for \"b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f\" returns successfully"
Jan 29 11:15:23.154747 systemd[1]: cri-containerd-b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f.scope: Deactivated successfully.
Jan 29 11:15:23.181517 containerd[1491]: time="2025-01-29T11:15:23.181447957Z" level=info msg="shim disconnected" id=b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f namespace=k8s.io
Jan 29 11:15:23.181517 containerd[1491]: time="2025-01-29T11:15:23.181507080Z" level=warning msg="cleaning up after shim disconnected" id=b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f namespace=k8s.io
Jan 29 11:15:23.181517 containerd[1491]: time="2025-01-29T11:15:23.181517489Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:23.654163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b48abd7934130c5be7d4141e197ecbce64208a028d8678279cfe9edfd736e56f-rootfs.mount: Deactivated successfully.
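The `"Container runtime network not ready" ... cni plugin not initialized` error above is kubelet polling the runtime's Status until a CNI configuration appears; it clears once the Cilium agent is up and installs its config. A small sketch querying that NetworkReady condition via the same CRI API as the previous example (illustrative only):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// kubelet raises the "network not ready" error while the NetworkReady
	// condition reports false with reason NetworkPluginNotReady.
	for _, c := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}
```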
Jan 29 11:15:24.041403 kubelet[2588]: E0129 11:15:24.041279 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:24.042857 containerd[1491]: time="2025-01-29T11:15:24.042775573Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:15:24.062921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345586466.mount: Deactivated successfully.
Jan 29 11:15:24.064433 containerd[1491]: time="2025-01-29T11:15:24.064386900Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236\""
Jan 29 11:15:24.064878 containerd[1491]: time="2025-01-29T11:15:24.064852407Z" level=info msg="StartContainer for \"3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236\""
Jan 29 11:15:24.094143 systemd[1]: Started cri-containerd-3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236.scope - libcontainer container 3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236.
Jan 29 11:15:24.115470 systemd[1]: cri-containerd-3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236.scope: Deactivated successfully.
Jan 29 11:15:24.117586 containerd[1491]: time="2025-01-29T11:15:24.117553019Z" level=info msg="StartContainer for \"3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236\" returns successfully"
Jan 29 11:15:24.139478 containerd[1491]: time="2025-01-29T11:15:24.139416287Z" level=info msg="shim disconnected" id=3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236 namespace=k8s.io
Jan 29 11:15:24.139478 containerd[1491]: time="2025-01-29T11:15:24.139472414Z" level=warning msg="cleaning up after shim disconnected" id=3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236 namespace=k8s.io
Jan 29 11:15:24.139478 containerd[1491]: time="2025-01-29T11:15:24.139482212Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:15:24.653618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc2c98db52f3503d5db2cda6937076c3c2f74da5667d6e71adde9a08c3ab236-rootfs.mount: Deactivated successfully.
Jan 29 11:15:25.051488 kubelet[2588]: E0129 11:15:25.051102 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:25.052966 containerd[1491]: time="2025-01-29T11:15:25.052910622Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:15:25.074349 containerd[1491]: time="2025-01-29T11:15:25.074309517Z" level=info msg="CreateContainer within sandbox \"73cecf2cfa770f6b49328217633147cbd552e9bf4642d4fa275f2c1a7a6ff847\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c67fb82933d8b1a576604a539ca6b3fb7d15e30ee24639e81021849b49ed6459\""
Jan 29 11:15:25.074773 containerd[1491]: time="2025-01-29T11:15:25.074751400Z" level=info msg="StartContainer for \"c67fb82933d8b1a576604a539ca6b3fb7d15e30ee24639e81021849b49ed6459\""
Jan 29 11:15:25.102162 systemd[1]: Started cri-containerd-c67fb82933d8b1a576604a539ca6b3fb7d15e30ee24639e81021849b49ed6459.scope - libcontainer container c67fb82933d8b1a576604a539ca6b3fb7d15e30ee24639e81021849b49ed6459.
Jan 29 11:15:25.131855 containerd[1491]: time="2025-01-29T11:15:25.131813885Z" level=info msg="StartContainer for \"c67fb82933d8b1a576604a539ca6b3fb7d15e30ee24639e81021849b49ed6459\" returns successfully"
Jan 29 11:15:25.523043 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:15:26.055023 kubelet[2588]: E0129 11:15:26.054977 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:26.066534 kubelet[2588]: I0129 11:15:26.066458 2588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2l29k" podStartSLOduration=5.066440018 podStartE2EDuration="5.066440018s" podCreationTimestamp="2025-01-29 11:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:15:26.066330489 +0000 UTC m=+88.308426006" watchObservedRunningTime="2025-01-29 11:15:26.066440018 +0000 UTC m=+88.308535506"
Jan 29 11:15:27.803051 kubelet[2588]: E0129 11:15:27.802997 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:28.472236 systemd-networkd[1377]: lxc_health: Link UP
Jan 29 11:15:28.481253 systemd-networkd[1377]: lxc_health: Gained carrier
Jan 29 11:15:29.804258 kubelet[2588]: E0129 11:15:29.803882 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:30.061318 kubelet[2588]: E0129 11:15:30.061207 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:30.290260 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Jan 29 11:15:31.062992 kubelet[2588]: E0129 11:15:31.062943 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:15:34.342951 sshd[4420]: Connection closed by 10.0.0.1 port 37242
Jan 29 11:15:34.343438 sshd-session[4413]: pam_unix(sshd:session): session closed for user core
Jan 29 11:15:34.347534 systemd[1]: sshd@28-10.0.0.47:22-10.0.0.1:37242.service: Deactivated successfully.
Jan 29 11:15:34.349586 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 11:15:34.350350 systemd-logind[1472]: Session 29 logged out. Waiting for processes to exit.
Jan 29 11:15:34.351310 systemd-logind[1472]: Removed session 29.
Jan 29 11:15:34.853201 kubelet[2588]: E0129 11:15:34.853154 2588 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
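The `podStartSLOduration=5.066440018` recorded above is straightforward arithmetic: watchObservedRunningTime (11:15:26.066440018) minus podCreationTimestamp (11:15:21), with the pull timestamps left at their zero values because no image pull was needed. A quick check of that subtraction:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps printed in the kubelet log line above;
	// the fractional-second digits are optional when absent.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-29 11:15:21 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-29 11:15:26.066440018 +0000 UTC")
	fmt.Println(running.Sub(created)) // 5.066440018s, the logged podStartSLOduration
}
```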