Jan 13 20:37:16.905950 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 13 20:37:16.905973 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:37:16.905984 kernel: BIOS-provided physical RAM map: Jan 13 20:37:16.905990 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 13 20:37:16.905996 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 13 20:37:16.906002 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 13 20:37:16.906009 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 13 20:37:16.906015 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 13 20:37:16.906021 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 13 20:37:16.906027 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 13 20:37:16.906035 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 13 20:37:16.906041 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 13 20:37:16.906047 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 13 20:37:16.906053 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 13 20:37:16.906061 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 13 20:37:16.906068 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 13 20:37:16.906076 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Jan 13 20:37:16.906083 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Jan 13 20:37:16.906089 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Jan 13 20:37:16.906096 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Jan 13 20:37:16.906102 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 13 20:37:16.906110 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 13 20:37:16.906119 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 13 20:37:16.906127 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 20:37:16.906133 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 13 20:37:16.906139 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 20:37:16.906146 kernel: NX (Execute Disable) protection: active Jan 13 20:37:16.906155 kernel: APIC: Static calls initialized Jan 13 20:37:16.906161 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Jan 13 20:37:16.906168 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Jan 13 20:37:16.906174 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Jan 13 20:37:16.906181 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Jan 13 20:37:16.906187 kernel: extended physical RAM map: Jan 13 20:37:16.906193 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 13 20:37:16.906200 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Jan 13 20:37:16.906206 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 13 20:37:16.906213 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 13 20:37:16.906219 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 13 20:37:16.906228 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 13 20:37:16.906235 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 13 20:37:16.906245 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Jan 13 20:37:16.906251 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Jan 13 20:37:16.906260 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Jan 13 20:37:16.906268 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Jan 13 20:37:16.906275 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Jan 13 20:37:16.906285 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 13 20:37:16.906291 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 13 20:37:16.906298 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 13 20:37:16.906305 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 13 20:37:16.906312 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 13 20:37:16.906318 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Jan 13 20:37:16.906325 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Jan 13 20:37:16.906332 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Jan 13 20:37:16.906339 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Jan 13 20:37:16.906348 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 13 20:37:16.906354 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 13 20:37:16.906361 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 13 20:37:16.906368 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 20:37:16.906374 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 13 20:37:16.906381 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 20:37:16.906388 kernel: efi: EFI v2.7 by EDK II Jan 13 20:37:16.906395 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Jan 13 20:37:16.906401 kernel: random: crng init done Jan 13 20:37:16.906409 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 13 20:37:16.906419 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 13 20:37:16.906429 kernel: secureboot: Secure boot disabled Jan 13 20:37:16.906436 kernel: SMBIOS 2.8 present. 
Jan 13 20:37:16.906443 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 13 20:37:16.906449 kernel: Hypervisor detected: KVM Jan 13 20:37:16.906456 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 20:37:16.906463 kernel: kvm-clock: using sched offset of 2804277006 cycles Jan 13 20:37:16.906470 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 20:37:16.906478 kernel: tsc: Detected 2794.750 MHz processor Jan 13 20:37:16.906485 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:37:16.906492 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:37:16.906499 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 13 20:37:16.906508 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 13 20:37:16.906515 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:37:16.906522 kernel: Using GB pages for direct mapping Jan 13 20:37:16.906529 kernel: ACPI: Early table checksum verification disabled Jan 13 20:37:16.906536 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 13 20:37:16.906543 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 13 20:37:16.906550 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:37:16.906557 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:37:16.906565 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 13 20:37:16.906576 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:37:16.906583 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:37:16.906590 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:37:16.906597 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:37:16.906604 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 13 20:37:16.906611 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 13 20:37:16.906618 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 13 20:37:16.906625 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 13 20:37:16.906634 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 13 20:37:16.906641 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 13 20:37:16.906648 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 13 20:37:16.906655 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 13 20:37:16.906661 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 13 20:37:16.906668 kernel: No NUMA configuration found Jan 13 20:37:16.906675 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 13 20:37:16.906682 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Jan 13 20:37:16.906689 kernel: Zone ranges: Jan 13 20:37:16.906696 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:37:16.906705 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 13 20:37:16.906712 kernel: Normal empty Jan 13 20:37:16.906720 kernel: Movable zone start for each node Jan 13 20:37:16.906730 kernel: Early memory node ranges Jan 13 20:37:16.906738 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Jan 13 20:37:16.906745 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 13 20:37:16.906751 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 13 20:37:16.906758 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 13 20:37:16.906778 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 13 20:37:16.906788 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 13 20:37:16.906795 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Jan 13 20:37:16.906802 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Jan 13 20:37:16.906808 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 13 20:37:16.906824 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:37:16.906833 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 13 20:37:16.906851 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 13 20:37:16.906860 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:37:16.906867 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 13 20:37:16.906875 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 13 20:37:16.906882 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 13 20:37:16.906889 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 13 20:37:16.906898 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 13 20:37:16.906905 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 20:37:16.906912 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:37:16.906920 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:37:16.906927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 20:37:16.906937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:37:16.906944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:37:16.906951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:37:16.906958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:37:16.906965 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:37:16.906973 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 20:37:16.906980 kernel: TSC deadline timer available Jan 13 20:37:16.906987 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 20:37:16.906996 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 20:37:16.907008 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 20:37:16.907015 kernel: kvm-guest: setup PV sched yield Jan 13 20:37:16.907022 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 13 20:37:16.907030 kernel: Booting paravirtualized kernel on KVM Jan 13 20:37:16.907037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:37:16.907044 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 20:37:16.907052 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 20:37:16.907059 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 20:37:16.907066 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 20:37:16.907076 kernel: kvm-guest: PV spinlocks enabled Jan 13 20:37:16.907083 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 20:37:16.907091 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:37:16.907099 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:37:16.907107 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:37:16.907117 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:37:16.907125 kernel: Fallback order for Node 0: 0 Jan 13 20:37:16.907132 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Jan 13 20:37:16.907140 kernel: Policy zone: DMA32 Jan 13 20:37:16.907149 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:37:16.907157 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved) Jan 13 20:37:16.907164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 20:37:16.907171 kernel: ftrace: allocating 37920 entries in 149 pages Jan 13 20:37:16.907179 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:37:16.907186 kernel: Dynamic Preempt: voluntary Jan 13 20:37:16.907193 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:37:16.907201 kernel: rcu: RCU event tracing is enabled. Jan 13 20:37:16.907208 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 20:37:16.907218 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:37:16.907225 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:37:16.907233 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:37:16.907240 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:37:16.907247 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 20:37:16.907254 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 20:37:16.907261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:37:16.907269 kernel: Console: colour dummy device 80x25 Jan 13 20:37:16.907276 kernel: printk: console [ttyS0] enabled Jan 13 20:37:16.907286 kernel: ACPI: Core revision 20230628 Jan 13 20:37:16.907297 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 20:37:16.907305 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:37:16.907312 kernel: x2apic enabled Jan 13 20:37:16.907319 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:37:16.907327 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 20:37:16.907334 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 20:37:16.907341 kernel: kvm-guest: setup PV IPIs Jan 13 20:37:16.907348 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 20:37:16.907358 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 20:37:16.907365 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 13 20:37:16.907372 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 20:37:16.907380 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 20:37:16.907387 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 20:37:16.907397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:37:16.907406 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 20:37:16.907413 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:37:16.907420 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:37:16.907430 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 20:37:16.907437 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 20:37:16.907444 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:37:16.907452 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:37:16.907459 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 20:37:16.907467 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 20:37:16.907474 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 20:37:16.907482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:37:16.907492 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:37:16.907499 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:37:16.907506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:37:16.907513 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 20:37:16.907521 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:37:16.907528 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:37:16.907535 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:37:16.907542 kernel: landlock: Up and running. Jan 13 20:37:16.907549 kernel: SELinux: Initializing. Jan 13 20:37:16.907559 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:37:16.907568 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:37:16.907578 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 20:37:16.907586 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:37:16.907593 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:37:16.907600 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:37:16.907607 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 20:37:16.907614 kernel: ... version: 0 Jan 13 20:37:16.907622 kernel: ... bit width: 48 Jan 13 20:37:16.907631 kernel: ... generic registers: 6 Jan 13 20:37:16.907638 kernel: ... value mask: 0000ffffffffffff Jan 13 20:37:16.907645 kernel: ... max period: 00007fffffffffff Jan 13 20:37:16.907652 kernel: ... fixed-purpose events: 0 Jan 13 20:37:16.907659 kernel: ... 
event mask: 000000000000003f Jan 13 20:37:16.907667 kernel: signal: max sigframe size: 1776 Jan 13 20:37:16.907674 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:37:16.907682 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:37:16.907692 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:37:16.907702 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:37:16.907709 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 20:37:16.907716 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 20:37:16.907723 kernel: smpboot: Max logical packages: 1 Jan 13 20:37:16.907730 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 13 20:37:16.907737 kernel: devtmpfs: initialized Jan 13 20:37:16.907744 kernel: x86/mm: Memory block size: 128MB Jan 13 20:37:16.907752 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 13 20:37:16.907794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 13 20:37:16.907805 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 13 20:37:16.907813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 13 20:37:16.907830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Jan 13 20:37:16.907838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 13 20:37:16.907846 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:37:16.907853 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 20:37:16.907860 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:37:16.907868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:37:16.907878 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:37:16.907890 kernel: audit: type=2000 audit(1736800636.290:1): state=initialized audit_enabled=0 res=1 Jan 13 20:37:16.907897 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:37:16.907904 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:37:16.907912 kernel: cpuidle: using governor menu Jan 13 20:37:16.907919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:37:16.907926 kernel: dca service started, version 1.12.1 Jan 13 20:37:16.907934 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 13 20:37:16.907941 kernel: PCI: Using configuration type 1 for base access Jan 13 20:37:16.907948 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:37:16.907957 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:37:16.907964 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:37:16.907972 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:37:16.907979 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:37:16.907989 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:37:16.907997 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:37:16.908004 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:37:16.908011 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:37:16.908018 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:37:16.908028 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:37:16.908035 kernel: ACPI: Interpreter enabled Jan 13 20:37:16.908042 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:37:16.908049 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:37:16.908056 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:37:16.908063 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:37:16.908070 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 20:37:16.908077 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:37:16.908255 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:37:16.908395 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 20:37:16.908521 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 20:37:16.908532 kernel: PCI host bridge to bus 0000:00 Jan 13 20:37:16.908660 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:37:16.908794 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:37:16.908928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:37:16.909053 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 13 20:37:16.909163 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 13 20:37:16.909274 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 13 20:37:16.909387 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:37:16.909523 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 20:37:16.909652 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 20:37:16.909787 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 13 20:37:16.909924 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 13 20:37:16.910056 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 13 20:37:16.910179 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 13 20:37:16.910353 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:37:16.910511 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:37:16.910634 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 13 20:37:16.910777 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 13 20:37:16.910913 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Jan 13 20:37:16.911048 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 20:37:16.911170 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 
13 20:37:16.911291 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 13 20:37:16.911413 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Jan 13 20:37:16.911540 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:37:16.911747 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 13 20:37:16.911922 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 13 20:37:16.912096 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 13 20:37:16.912218 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 13 20:37:16.912344 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 20:37:16.912472 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 20:37:16.912599 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 20:37:16.912725 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 13 20:37:16.912873 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 13 20:37:16.913003 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 20:37:16.913135 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 13 20:37:16.913146 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:37:16.913154 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:37:16.913161 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:37:16.913172 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:37:16.913179 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 20:37:16.913187 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 20:37:16.913194 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 20:37:16.913201 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 20:37:16.913209 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 20:37:16.913216 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 20:37:16.913223 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 20:37:16.913230 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 20:37:16.913240 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 20:37:16.913247 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 20:37:16.913254 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 20:37:16.913261 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 20:37:16.913268 kernel: iommu: Default domain type: Translated Jan 13 20:37:16.913275 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:37:16.913283 kernel: efivars: Registered efivars operations Jan 13 20:37:16.913290 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:37:16.913297 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:37:16.913306 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 13 20:37:16.913313 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 13 20:37:16.913320 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Jan 13 20:37:16.913327 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Jan 13 20:37:16.913335 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 13 20:37:16.913342 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 13 20:37:16.913349 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Jan 13 20:37:16.913356 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 13 20:37:16.913478 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 20:37:16.913604 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 20:37:16.913723 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:37:16.913732 kernel: vgaarb: loaded Jan 13 20:37:16.913740 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 20:37:16.913747 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 20:37:16.913754 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:37:16.913780 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:37:16.913788 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:37:16.913795 kernel: pnp: PnP ACPI init Jan 13 20:37:16.913939 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 13 20:37:16.913950 kernel: pnp: PnP ACPI: found 6 devices Jan 13 20:37:16.913958 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:37:16.913966 kernel: NET: Registered PF_INET protocol family Jan 13 20:37:16.913991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:37:16.914001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:37:16.914009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:37:16.914016 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:37:16.914026 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:37:16.914034 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:37:16.914041 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:37:16.914049 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:37:16.914056 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:37:16.914064 kernel: NET: Registered PF_XDP protocol family Jan 13 20:37:16.914197 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 13 20:37:16.914320 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 13 20:37:16.914566 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:37:16.914685 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:37:16.914923 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:37:16.915035 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 13 20:37:16.915143 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 13 20:37:16.915251 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 13 20:37:16.915260 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:37:16.915268 kernel: Initialise system trusted keyrings Jan 13 20:37:16.915281 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:37:16.915289 kernel: Key type asymmetric registered Jan 13 20:37:16.915297 kernel: Asymmetric key parser 'x509' registered Jan 13 20:37:16.915305 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:37:16.915314 kernel: io scheduler mq-deadline registered Jan 13 20:37:16.915322 kernel: io scheduler kyber registered Jan 13 20:37:16.915330 kernel: io scheduler bfq registered Jan 13 
20:37:16.915338 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:37:16.915347 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 20:37:16.915373 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 20:37:16.915391 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 20:37:16.915398 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:37:16.915406 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:37:16.915413 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:37:16.915421 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:37:16.915431 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:37:16.915567 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 20:37:16.915578 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:37:16.915689 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 20:37:16.915845 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:37:16 UTC (1736800636) Jan 13 20:37:16.915962 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 13 20:37:16.915972 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 20:37:16.915979 kernel: efifb: probing for efifb Jan 13 20:37:16.915991 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 13 20:37:16.915998 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 13 20:37:16.916006 kernel: efifb: scrolling: redraw Jan 13 20:37:16.916013 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 13 20:37:16.916021 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:37:16.916028 kernel: fb0: EFI VGA frame buffer device Jan 13 20:37:16.916036 kernel: pstore: Using crash dump compression: deflate Jan 13 20:37:16.916043 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 20:37:16.916051 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:37:16.916060 kernel: Segment Routing with IPv6 Jan 13 20:37:16.916067 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:37:16.916075 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:37:16.916085 kernel: Key type dns_resolver registered Jan 13 20:37:16.916092 kernel: IPI shorthand broadcast: enabled Jan 13 20:37:16.916099 kernel: sched_clock: Marking stable (597002710, 154966598)->(805383444, -53414136) Jan 13 20:37:16.916107 kernel: registered taskstats version 1 Jan 13 20:37:16.916114 kernel: Loading compiled-in X.509 certificates Jan 13 20:37:16.916122 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 20:37:16.916131 kernel: Key type .fscrypt registered Jan 13 20:37:16.916139 kernel: Key type fscrypt-provisioning registered Jan 13 20:37:16.916146 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:37:16.916153 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:37:16.916161 kernel: ima: No architecture policies found Jan 13 20:37:16.916168 kernel: clk: Disabling unused clocks Jan 13 20:37:16.916176 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:37:16.916183 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:37:16.916191 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:37:16.916201 kernel: Run /init as init process Jan 13 20:37:16.916208 kernel: with arguments: Jan 13 20:37:16.916215 kernel: /init Jan 13 20:37:16.916223 kernel: with environment: Jan 13 20:37:16.916230 kernel: HOME=/ Jan 13 20:37:16.916237 kernel: TERM=linux Jan 13 20:37:16.916244 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:37:16.916254 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:37:16.916266 systemd[1]: Detected virtualization kvm. Jan 13 20:37:16.916274 systemd[1]: Detected architecture x86-64. Jan 13 20:37:16.916282 systemd[1]: Running in initrd. Jan 13 20:37:16.916289 systemd[1]: No hostname configured, using default hostname. Jan 13 20:37:16.916297 systemd[1]: Hostname set to . Jan 13 20:37:16.916305 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:37:16.916313 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:37:16.916321 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:37:16.916331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:37:16.916340 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:37:16.916348 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:37:16.916357 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:37:16.916365 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:37:16.916375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:37:16.916385 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:37:16.916393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:37:16.916401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:37:16.916409 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:37:16.916417 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:37:16.916425 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:37:16.916433 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:37:16.916440 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:37:16.916449 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:37:16.916462 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:37:16.916473 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 13 20:37:16.916483 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:37:16.916494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:37:16.916504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:37:16.916515 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:37:16.916525 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:37:16.916535 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:37:16.916546 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:37:16.916561 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:37:16.916572 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:37:16.916583 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:37:16.916591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:37:16.916598 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:37:16.916606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:37:16.916614 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:37:16.916645 systemd-journald[192]: Collecting audit messages is disabled. Jan 13 20:37:16.916666 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:37:16.916674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:37:16.916682 systemd-journald[192]: Journal started Jan 13 20:37:16.916699 systemd-journald[192]: Runtime Journal (/run/log/journal/15f5e27736a74794a416bd566075f7fc) is 6.0M, max 48.3M, 42.2M free. Jan 13 20:37:16.913065 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 20:37:16.933135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:37:16.934793 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:37:16.935562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:16.940302 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:37:16.945159 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:37:16.944902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:37:16.948176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:37:16.951375 kernel: Bridge firewalling registered Jan 13 20:37:16.948662 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 20:37:16.950263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:37:16.955639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:37:16.957731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:37:16.968248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:37:16.969847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:37:16.974941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:37:16.977058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:37:16.992523 dracut-cmdline[232]: dracut-dracut-053 Jan 13 20:37:16.995249 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:37:17.006062 systemd-resolved[226]: Positive Trust Anchors: Jan 13 20:37:17.006076 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:37:17.006107 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:37:17.008585 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 13 20:37:17.009720 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:37:17.016007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:37:17.079803 kernel: SCSI subsystem initialized Jan 13 20:37:17.090808 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:37:17.102797 kernel: iscsi: registered transport (tcp) Jan 13 20:37:17.128011 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:37:17.128090 kernel: QLogic iSCSI HBA Driver Jan 13 20:37:17.181983 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:37:17.189038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:37:17.214022 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:37:17.214095 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:37:17.215142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:37:17.257803 kernel: raid6: avx2x4 gen() 29901 MB/s Jan 13 20:37:17.274786 kernel: raid6: avx2x2 gen() 31186 MB/s Jan 13 20:37:17.291883 kernel: raid6: avx2x1 gen() 25817 MB/s Jan 13 20:37:17.291924 kernel: raid6: using algorithm avx2x2 gen() 31186 MB/s Jan 13 20:37:17.309910 kernel: raid6: .... xor() 19741 MB/s, rmw enabled Jan 13 20:37:17.309936 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:37:17.329788 kernel: xor: automatically using best checksumming function avx Jan 13 20:37:17.486821 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:37:17.501442 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:37:17.515078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:37:17.529681 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 13 20:37:17.535529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:37:17.544955 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 13 20:37:17.558942 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Jan 13 20:37:17.590680 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:37:17.599950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:37:17.665650 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:37:17.681127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:37:17.692898 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:37:17.699190 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:37:17.699337 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:37:17.699349 kernel: GPT:9289727 != 19775487 Jan 13 20:37:17.699359 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:37:17.699375 kernel: GPT:9289727 != 19775487 Jan 13 20:37:17.699384 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:37:17.699394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:37:17.695115 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:37:17.698276 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:37:17.711093 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:37:17.703852 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:37:17.706610 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:37:17.716960 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:37:17.742241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:37:17.744246 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:37:17.744271 kernel: AES CTR mode by8 optimization enabled Jan 13 20:37:17.744284 kernel: libata version 3.00 loaded. Jan 13 20:37:17.751783 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (467) Jan 13 20:37:17.751833 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (472) Jan 13 20:37:17.757790 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:37:17.773903 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:37:17.773918 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:37:17.774066 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:37:17.774200 kernel: scsi host0: ahci Jan 13 20:37:17.774346 kernel: scsi host1: ahci Jan 13 20:37:17.774483 kernel: scsi host2: ahci Jan 13 20:37:17.774634 kernel: scsi host3: ahci Jan 13 20:37:17.774820 kernel: scsi host4: ahci Jan 13 20:37:17.774962 kernel: scsi host5: ahci Jan 13 20:37:17.775098 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 20:37:17.775108 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 20:37:17.775118 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 20:37:17.775133 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 20:37:17.775143 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 20:37:17.775154 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 20:37:17.759852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 13 20:37:17.776051 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:37:17.784562 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:37:17.784642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:37:17.794667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:37:17.823945 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:37:17.825125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:37:17.825182 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:37:17.829778 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:37:17.829853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:37:17.829904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:17.832202 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:37:17.833168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:37:17.846569 disk-uuid[553]: Primary Header is updated. Jan 13 20:37:17.846569 disk-uuid[553]: Secondary Entries is updated. Jan 13 20:37:17.846569 disk-uuid[553]: Secondary Header is updated. Jan 13 20:37:17.850802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:37:17.852263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:17.855784 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:37:17.860080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:37:17.876866 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:37:18.079813 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:37:18.079883 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:37:18.079894 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:37:18.080794 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:37:18.087793 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:37:18.087828 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:37:18.088795 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:37:18.088808 kernel: ata3.00: applying bridge limits Jan 13 20:37:18.089783 kernel: ata3.00: configured for UDMA/100 Jan 13 20:37:18.091797 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:37:18.143359 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:37:18.155334 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:37:18.155347 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:37:18.866740 disk-uuid[555]: The operation has completed successfully. Jan 13 20:37:18.868304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:37:18.897697 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:37:18.897839 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:37:18.922941 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 13 20:37:18.926605 sh[594]: Success Jan 13 20:37:18.939909 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:37:18.980282 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:37:18.993456 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:37:18.998292 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:37:19.010844 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:37:19.010889 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:37:19.013136 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:37:19.013162 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:37:19.014045 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:37:19.019669 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:37:19.020423 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:37:19.032906 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:37:19.035334 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:37:19.043053 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:37:19.043086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:37:19.043096 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:37:19.045787 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:37:19.055951 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:37:19.057616 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:37:19.067740 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:37:19.075000 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:37:19.127792 ignition[676]: Ignition 2.20.0 Jan 13 20:37:19.127804 ignition[676]: Stage: fetch-offline Jan 13 20:37:19.127843 ignition[676]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:19.127853 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:37:19.127941 ignition[676]: parsed url from cmdline: "" Jan 13 20:37:19.127945 ignition[676]: no config URL provided Jan 13 20:37:19.127951 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:37:19.127961 ignition[676]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:37:19.127990 ignition[676]: op(1): [started] loading QEMU firmware config module Jan 13 20:37:19.127995 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:37:19.135740 ignition[676]: op(1): [finished] loading QEMU firmware config module Jan 13 20:37:19.164010 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:37:19.177925 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 20:37:19.186843 ignition[676]: parsing config with SHA512: 0e06e7c53c066de7533897d7991219ddcc6578e5c9d7a16f2707b0ae85ea63c1733169dd6eb294e1965537f0c4912feda43d79ce78f800d9cc5e7e8a87c92388 Jan 13 20:37:19.191070 unknown[676]: fetched base config from "system" Jan 13 20:37:19.191085 unknown[676]: fetched user config from "qemu" Jan 13 20:37:19.191832 ignition[676]: fetch-offline: fetch-offline passed Jan 13 20:37:19.191948 ignition[676]: Ignition finished successfully Jan 13 20:37:19.196572 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:37:19.201042 systemd-networkd[782]: lo: Link UP Jan 13 20:37:19.201056 systemd-networkd[782]: lo: Gained carrier Jan 13 20:37:19.202976 systemd-networkd[782]: Enumeration completed Jan 13 20:37:19.203080 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:37:19.203457 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:37:19.203463 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:37:19.203957 systemd[1]: Reached target network.target - Network. Jan 13 20:37:19.204618 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:37:19.204650 systemd-networkd[782]: eth0: Link UP Jan 13 20:37:19.204656 systemd-networkd[782]: eth0: Gained carrier Jan 13 20:37:19.204664 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:37:19.210937 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:37:19.217844 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:37:19.224230 ignition[785]: Ignition 2.20.0 Jan 13 20:37:19.224248 ignition[785]: Stage: kargs Jan 13 20:37:19.224406 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:19.224417 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:37:19.225216 ignition[785]: kargs: kargs passed Jan 13 20:37:19.228406 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:37:19.225259 ignition[785]: Ignition finished successfully Jan 13 20:37:19.238967 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:37:19.250026 ignition[795]: Ignition 2.20.0 Jan 13 20:37:19.250037 ignition[795]: Stage: disks Jan 13 20:37:19.250189 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:19.250200 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:37:19.254073 ignition[795]: disks: disks passed Jan 13 20:37:19.254121 ignition[795]: Ignition finished successfully Jan 13 20:37:19.257373 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:37:19.258640 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:37:19.260793 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:37:19.263190 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:37:19.265440 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:37:19.267443 systemd[1]: Reached target basic.target - Basic System. 
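The fetch-offline stage above pulled the user config through QEMU's firmware config interface, which is why Ignition first ran modprobe qemu_fw_cfg. A sketch of how such a config is typically handed to the guest (image and config paths are illustrative; opt/com.coreos/config is the documented fw_cfg key Ignition looks for on QEMU):

    # Boot a Flatcar image under QEMU and pass Ignition a config file via fw_cfg.
    qemu-system-x86_64 \
      -m 2048 \
      -drive if=virtio,file=flatcar_production_qemu_image.img \
      -fw_cfg name=opt/com.coreos/config,file=./config.ign \
      -nographic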
Jan 13 20:37:19.279937 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:37:19.291828 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:37:19.298643 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:37:19.316915 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:37:19.400793 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:37:19.401606 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:37:19.404041 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:37:19.415847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:37:19.418130 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:37:19.419476 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:37:19.427345 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Jan 13 20:37:19.427368 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:37:19.427380 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:37:19.427398 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:37:19.419514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:37:19.430935 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:37:19.419537 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:37:19.432826 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:37:19.449292 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:37:19.451543 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:37:19.493600 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:37:19.500114 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:37:19.505365 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:37:19.510532 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:37:19.605046 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:37:19.611949 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:37:19.612826 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:37:19.620785 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:37:19.639818 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:37:19.642828 ignition[928]: INFO : Ignition 2.20.0 Jan 13 20:37:19.642828 ignition[928]: INFO : Stage: mount Jan 13 20:37:19.644641 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:19.644641 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:37:19.644641 ignition[928]: INFO : mount: mount passed Jan 13 20:37:19.644641 ignition[928]: INFO : Ignition finished successfully Jan 13 20:37:19.646034 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
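systemd-fsck-root and sysroot.mount above are the generic check-then-mount path for the writable root: fsck the filesystem labelled ROOT, then mount it at /sysroot so the rest of the initrd can populate it. A rough manual equivalent, assuming the same by-label device:

    # Preen-mode check of the root filesystem, then mount it where the initrd
    # expects the future root to live.
    e2fsck -p /dev/disk/by-label/ROOT
    mkdir -p /sysroot
    mount /dev/disk/by-label/ROOT /sysroot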
Jan 13 20:37:19.656952 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:37:20.010587 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:37:20.020046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:37:20.028601 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Jan 13 20:37:20.028641 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:37:20.029683 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:37:20.029697 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:37:20.033798 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:37:20.034758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:37:20.062300 ignition[958]: INFO : Ignition 2.20.0 Jan 13 20:37:20.062300 ignition[958]: INFO : Stage: files Jan 13 20:37:20.064281 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:20.064281 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:37:20.064281 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:37:20.067981 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:37:20.067981 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:37:20.067981 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:37:20.067981 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:37:20.067981 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:37:20.067370 unknown[958]: wrote ssh authorized keys file for user: core Jan 13 20:37:20.076335 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:37:20.076335 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:37:20.109320 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:37:20.280517 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:37:20.280517 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:37:20.284530 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:37:20.644878 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:37:20.735214 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:37:20.737097 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:37:20.738916 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:37:20.738916 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file 
"/sysroot/home/core/nginx.yaml" Jan 13 20:37:20.742360 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:37:20.742360 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:37:20.745760 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:37:20.747479 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:37:20.749277 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:37:20.751430 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:37:20.753310 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:37:20.755052 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:37:20.757617 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:37:20.760073 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:37:20.762159 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 20:37:21.083387 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:37:21.176934 systemd-networkd[782]: eth0: Gained IPv6LL Jan 13 20:37:21.495447 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:37:21.495447 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:37:21.499236 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:37:21.501446 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:37:21.501446 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:37:21.501446 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 20:37:21.505779 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:37:21.507666 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:37:21.507666 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 20:37:21.507666 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" 
Jan 13 20:37:21.530306 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:37:21.536096 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:37:21.537740 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:37:21.537740 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:37:21.540533 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:37:21.541945 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:37:21.543701 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:37:21.545379 ignition[958]: INFO : files: files passed Jan 13 20:37:21.546150 ignition[958]: INFO : Ignition finished successfully Jan 13 20:37:21.549178 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:37:21.561897 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:37:21.564758 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:37:21.567398 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:37:21.568386 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:37:21.574566 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:37:21.577694 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:37:21.577694 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:37:21.580924 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:37:21.580017 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:37:21.582740 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:37:21.589905 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:37:21.613404 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:37:21.613543 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:37:21.614164 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:37:21.616914 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:37:21.617283 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:37:21.618175 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:37:21.636977 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:37:21.642895 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:37:21.654359 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:37:21.655917 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
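The files stage recorded above (SSH key for core, the helm and cilium archives, the kubernetes sysext image plus its /etc/extensions link, and the prepare-helm/coreos-metadata preset handling) is driven entirely by the Ignition config fetched earlier. A hedged sketch of a Butane config that would produce a subset of those operations, transpiled into the JSON Ignition actually consumes; the URLs and paths mirror the log, while the surrounding structure and the unit body are illustrative, not the exact config used for this boot:

    cat > config.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin (illustrative body)
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false
    EOF
    # Transpile the human-readable Butane file into Ignition JSON.
    butane --strict config.bu > config.ign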
Jan 13 20:37:21.658369 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:37:21.660450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:37:21.660577 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:37:21.662972 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:37:21.664593 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:37:21.666652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:37:21.668754 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:37:21.670824 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:37:21.672984 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:37:21.675353 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:37:21.677718 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:37:21.679838 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:37:21.682120 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:37:21.683943 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:37:21.684051 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:37:21.686271 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:37:21.688070 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:37:21.690380 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:37:21.690515 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:37:21.692550 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:37:21.692688 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:37:21.695622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:37:21.695794 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:37:21.698003 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:37:21.700113 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:37:21.703829 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:37:21.705456 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:37:21.707776 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:37:21.709985 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:37:21.710087 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:37:21.711956 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:37:21.712046 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:37:21.714090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:37:21.714204 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:37:21.716927 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:37:21.717070 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:37:21.729913 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:37:21.730920 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 13 20:37:21.731037 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:37:21.734279 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:37:21.734470 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:37:21.734689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:37:21.735698 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:37:21.735866 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:37:21.746077 ignition[1014]: INFO : Ignition 2.20.0 Jan 13 20:37:21.746077 ignition[1014]: INFO : Stage: umount Jan 13 20:37:21.746077 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:37:21.746077 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:37:21.746077 ignition[1014]: INFO : umount: umount passed Jan 13 20:37:21.746077 ignition[1014]: INFO : Ignition finished successfully Jan 13 20:37:21.739571 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:37:21.739730 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:37:21.746592 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:37:21.746750 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:37:21.749305 systemd[1]: Stopped target network.target - Network. Jan 13 20:37:21.750116 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:37:21.750183 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:37:21.750466 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:37:21.750508 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:37:21.750997 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:37:21.751042 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:37:21.751315 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:37:21.751358 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:37:21.751854 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:37:21.752423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:37:21.766804 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 13 20:37:21.768983 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:37:21.769140 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:37:21.772196 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:37:21.772355 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:37:21.774847 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:37:21.774914 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:37:21.783872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:37:21.784220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:37:21.784276 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:37:21.787379 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:37:21.787432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:37:21.789608 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 13 20:37:21.789660 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:37:21.792008 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:37:21.792059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:37:21.793073 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:37:21.805776 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:37:21.805916 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:37:21.809072 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:37:21.809259 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:37:21.810955 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:37:21.811025 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:37:21.812632 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:37:21.812674 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:37:21.813151 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:37:21.813201 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:37:21.813981 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:37:21.814031 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:37:21.814708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:37:21.814758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:37:21.816298 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:37:21.824949 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:37:21.825009 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:37:21.826286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:37:21.826336 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:21.833006 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:37:21.833116 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:37:21.848377 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:37:22.018316 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:37:22.018435 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:37:22.020490 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:37:22.022208 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:37:22.022259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:37:22.031939 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:37:22.040701 systemd[1]: Switching root. Jan 13 20:37:22.074609 systemd-journald[192]: Journal stopped Jan 13 20:37:23.218406 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
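The "Switching root" / "Journal stopped" / SIGTERM sequence is the initrd handing control to the real root filesystem: systemd re-executes itself with /sysroot as the new root, so the journal daemon running in the initrd is terminated and a fresh one starts afterwards. The core of what initrd-switch-root.service does can be sketched as follows (hedged; the real unit adds ordering and sanity checks):

    # Pivot PID 1 into the prepared root; --no-block because the calling
    # environment is about to be torn down.
    systemctl --no-block switch-root /sysroot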
Jan 13 20:37:23.218485 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:37:23.218503 kernel: SELinux: policy capability open_perms=1 Jan 13 20:37:23.218522 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:37:23.218541 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:37:23.218554 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:37:23.218573 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:37:23.218590 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:37:23.218604 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:37:23.218617 kernel: audit: type=1403 audit(1736800642.438:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:37:23.218632 systemd[1]: Successfully loaded SELinux policy in 42.176ms. Jan 13 20:37:23.218660 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.450ms. Jan 13 20:37:23.218683 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:37:23.218698 systemd[1]: Detected virtualization kvm. Jan 13 20:37:23.218712 systemd[1]: Detected architecture x86-64. Jan 13 20:37:23.218729 systemd[1]: Detected first boot. Jan 13 20:37:23.218746 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:37:23.218773 zram_generator::config[1058]: No configuration found. Jan 13 20:37:23.218795 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:37:23.218810 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:37:23.218825 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:37:23.218839 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:37:23.218855 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:37:23.218871 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:37:23.218888 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:37:23.218902 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:37:23.218917 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:37:23.218932 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:37:23.218946 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:37:23.218960 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:37:23.218975 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:37:23.218990 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:37:23.219004 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:37:23.219021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:37:23.219036 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
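"Initializing machine ID from VM UUID" means the first-boot machine ID is derived from the identifier the hypervisor exposes rather than generated randomly. A quick, purely illustrative way to compare the two on a running KVM guest (paths are the standard DMI and machine-id locations):

    # UUID provided by the hypervisor through SMBIOS/DMI.
    cat /sys/class/dmi/id/product_uuid
    # Machine ID systemd settled on at first boot.
    cat /etc/machine-id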
Jan 13 20:37:23.219051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:37:23.219065 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:37:23.219080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:37:23.219094 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:37:23.219109 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:37:23.219124 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:37:23.219145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:37:23.219163 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:37:23.219181 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:37:23.219200 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:37:23.219218 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:37:23.219236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:37:23.219253 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:37:23.219271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:37:23.219287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:37:23.219303 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:37:23.219317 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:37:23.219332 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:37:23.219346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:37:23.219360 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:37:23.219375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:23.219390 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:37:23.219404 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:37:23.219418 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:37:23.219436 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:37:23.219450 systemd[1]: Reached target machines.target - Containers. Jan 13 20:37:23.219466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:37:23.219482 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:23.219498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:37:23.219514 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:37:23.219535 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:23.219551 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:37:23.219572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:23.219588 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 13 20:37:23.219604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:37:23.219621 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:37:23.219638 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:37:23.219654 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:37:23.219679 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:37:23.219695 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:37:23.219714 kernel: fuse: init (API version 7.39) Jan 13 20:37:23.219729 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:37:23.219745 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:37:23.219759 kernel: loop: module loaded Jan 13 20:37:23.219794 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:37:23.219811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:37:23.219826 kernel: ACPI: bus type drm_connector registered Jan 13 20:37:23.219841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:37:23.219857 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:37:23.219894 systemd-journald[1128]: Collecting audit messages is disabled. Jan 13 20:37:23.219928 systemd[1]: Stopped verity-setup.service. Jan 13 20:37:23.219946 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:23.219965 systemd-journald[1128]: Journal started Jan 13 20:37:23.219995 systemd-journald[1128]: Runtime Journal (/run/log/journal/15f5e27736a74794a416bd566075f7fc) is 6.0M, max 48.3M, 42.2M free. Jan 13 20:37:22.984929 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:37:23.005972 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:37:23.006432 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:37:23.223806 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:37:23.226701 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:37:23.228065 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:37:23.229423 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:37:23.230597 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:37:23.231897 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:37:23.233246 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:37:23.234653 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:37:23.236296 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:37:23.238089 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:37:23.238341 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:37:23.240015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:23.240260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:23.242131 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 13 20:37:23.242389 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:37:23.244086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:23.244357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:23.246091 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:37:23.246359 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:37:23.247942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:23.248181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:23.249716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:37:23.251313 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:37:23.253199 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:37:23.268824 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:37:23.279838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:37:23.282194 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:37:23.283694 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:37:23.283801 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:37:23.286071 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:37:23.288551 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:37:23.292649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:37:23.294231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:23.296250 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:37:23.299969 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:37:23.301382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:37:23.304996 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:37:23.308975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:37:23.310259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:37:23.313162 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:37:23.333038 systemd-journald[1128]: Time spent on flushing to /var/log/journal/15f5e27736a74794a416bd566075f7fc is 15.421ms for 1041 entries. Jan 13 20:37:23.333038 systemd-journald[1128]: System Journal (/var/log/journal/15f5e27736a74794a416bd566075f7fc) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:37:23.359676 systemd-journald[1128]: Received client request to flush runtime journal. Jan 13 20:37:23.359710 kernel: loop0: detected capacity change from 0 to 205544 Jan 13 20:37:23.318705 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:37:23.322041 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
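The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are instances of systemd's modprobe@.service template: starting modprobe@<name>.service simply loads kernel module <name>, which lets other units declare module dependencies without shipping scripts. Illustrative usage (the template ships with systemd itself):

    # Load the loop module via the template unit, then confirm it is present.
    systemctl start modprobe@loop.service
    lsmod | grep '^loop'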
Jan 13 20:37:23.323684 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:37:23.326173 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:37:23.343534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:37:23.349035 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:37:23.351816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:37:23.364063 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:37:23.368912 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:37:23.371892 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:37:23.375790 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:37:23.383675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:37:23.388628 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:37:23.394039 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:37:23.401991 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:37:23.404638 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:37:23.405590 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:37:23.419796 kernel: loop1: detected capacity change from 0 to 138184 Jan 13 20:37:23.431940 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 13 20:37:23.432315 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 13 20:37:23.438639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:37:23.460789 kernel: loop2: detected capacity change from 0 to 140992 Jan 13 20:37:23.493911 kernel: loop3: detected capacity change from 0 to 205544 Jan 13 20:37:23.502787 kernel: loop4: detected capacity change from 0 to 138184 Jan 13 20:37:23.517788 kernel: loop5: detected capacity change from 0 to 140992 Jan 13 20:37:23.528320 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:37:23.530231 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 13 20:37:23.535795 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:37:23.535810 systemd[1]: Reloading... Jan 13 20:37:23.581785 zram_generator::config[1223]: No configuration found. Jan 13 20:37:23.624850 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:37:23.702434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:23.753093 systemd[1]: Reloading finished in 216 ms. Jan 13 20:37:23.786026 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:37:23.787571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:37:23.799910 systemd[1]: Starting ensure-sysext.service... 
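The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr, which is why a daemon reload follows. The merge state can be inspected and re-applied with the systemd-sysext tool; a brief, hedged example:

    # List known extension images and whether they are currently merged.
    systemd-sysext status
    # Re-scan /etc/extensions and /var/lib/extensions (e.g. after dropping in a
    # new *.raw image such as the kubernetes sysext written by Ignition) and
    # apply the overlay again.
    systemd-sysext refresh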
Jan 13 20:37:23.801800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:37:23.808270 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:37:23.808279 systemd[1]: Reloading... Jan 13 20:37:23.826253 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:37:23.826624 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:37:23.827645 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:37:23.828128 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:37:23.828262 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:37:23.833842 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:37:23.833922 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:37:23.847520 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:37:23.847600 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:37:23.863032 zram_generator::config[1291]: No configuration found. Jan 13 20:37:23.964780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:37:24.015161 systemd[1]: Reloading finished in 206 ms. Jan 13 20:37:24.033320 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:37:24.046224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:37:24.055189 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:37:24.058093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:37:24.060922 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:37:24.065526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:37:24.069502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:37:24.073321 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:37:24.079422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.079596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:24.082684 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:24.086851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:24.093867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:37:24.095222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:24.106183 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:37:24.107523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
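The "Duplicate line for path" warnings come from systemd-tmpfiles finding the same path declared in more than one tmpfiles.d fragment; the later entry is ignored rather than treated as an error. For reference, a tmpfiles.d line has the form TYPE PATH MODE USER GROUP AGE ARGUMENT; an illustrative fragment and a manual run (example path only):

    cat > /etc/tmpfiles.d/example.conf <<'EOF'
    d /var/cache/example 0755 root root 7d -
    EOF
    # Apply just this fragment.
    systemd-tmpfiles --create /etc/tmpfiles.d/example.conf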
Jan 13 20:37:24.108832 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:37:24.113421 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 13 20:37:24.114409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:24.114675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:24.117198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:24.117419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:24.120643 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:24.120927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:24.136713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.137447 augenrules[1357]: No rules Jan 13 20:37:24.138452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:37:24.146058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:24.152041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:24.156074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:37:24.157237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:24.158991 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:37:24.160045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.160976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:37:24.162673 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:37:24.164401 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:37:24.164868 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:37:24.166409 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:37:24.168298 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:37:24.170118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:24.170294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:24.172314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:24.172493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:24.174428 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:24.174603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:24.188018 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:37:24.195447 systemd[1]: Finished ensure-sysext.service. Jan 13 20:37:24.202631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.211004 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:37:24.212301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 13 20:37:24.213622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:37:24.216096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:37:24.219919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:37:24.222477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:37:24.223709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:37:24.227905 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:37:24.228960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383) Jan 13 20:37:24.233973 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:37:24.235166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:37:24.235196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:37:24.235613 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:37:24.235864 systemd-resolved[1329]: Positive Trust Anchors: Jan 13 20:37:24.235873 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:37:24.235904 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:37:24.237081 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:37:24.237284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:37:24.245097 systemd-resolved[1329]: Defaulting to hostname 'linux'. Jan 13 20:37:24.251144 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:37:24.251999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:37:24.254285 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:37:24.257158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:37:24.262622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:37:24.262869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:37:24.264860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:37:24.265093 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:37:24.272985 augenrules[1399]: /sbin/augenrules: No change Jan 13 20:37:24.275602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 20:37:24.276899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:37:24.286603 augenrules[1432]: No rules Jan 13 20:37:24.288810 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:37:24.289638 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:37:24.318540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:37:24.324826 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:37:24.327949 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:37:24.331935 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:37:24.333795 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:37:24.334093 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:37:24.338328 systemd-networkd[1407]: lo: Link UP Jan 13 20:37:24.338344 systemd-networkd[1407]: lo: Gained carrier Jan 13 20:37:24.340734 systemd-networkd[1407]: Enumeration completed Jan 13 20:37:24.340948 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:37:24.341085 systemd[1]: Reached target network.target - Network. Jan 13 20:37:24.342851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:37:24.343257 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:37:24.343265 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:37:24.345404 systemd-networkd[1407]: eth0: Link UP Jan 13 20:37:24.345413 systemd-networkd[1407]: eth0: Gained carrier Jan 13 20:37:24.345426 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:37:24.345453 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:37:24.359494 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 13 20:37:24.361975 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:37:24.362158 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:37:24.363816 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:37:24.364044 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:37:24.364985 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jan 13 20:37:24.821187 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:37:24.821281 systemd-timesyncd[1410]: Initial clock synchronization to Mon 2025-01-13 20:37:24.821008 UTC. Jan 13 20:37:24.821783 systemd-resolved[1329]: Clock change detected. Flushing caches. Jan 13 20:37:24.830504 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:37:24.853110 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:37:24.858326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:37:24.867021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:37:24.867324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
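eth0 is matched by the generic /usr/lib/systemd/network/zz-default.network shipped with the OS, so it is simply configured via DHCP (10.0.0.63/16 with gateway 10.0.0.1 here), after which timesyncd reaches the advertised NTP server. A minimal sketch of an equivalent per-interface drop-in, should one want to pin the behaviour explicitly (file name and match are illustrative):

    cat > /etc/systemd/network/20-eth0-dhcp.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    EOF
    # Pick up the new configuration and show the resulting lease.
    networkctl reload
    networkctl status eth0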
Jan 13 20:37:24.870136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:37:24.945476 kernel: kvm_amd: TSC scaling supported Jan 13 20:37:24.945564 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:37:24.945587 kernel: kvm_amd: Nested Paging enabled Jan 13 20:37:24.946666 kernel: kvm_amd: LBR virtualization supported Jan 13 20:37:24.946687 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:37:24.947361 kernel: kvm_amd: Virtual GIF supported Jan 13 20:37:24.970129 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:37:24.978438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:37:24.998606 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:37:25.008363 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:37:25.017378 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:37:25.051460 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:37:25.053009 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:37:25.054194 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:37:25.055414 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:37:25.056726 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:37:25.058232 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:37:25.059522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:37:25.060840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:37:25.062146 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:37:25.062175 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:37:25.063128 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:37:25.064670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:37:25.067584 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:37:25.073672 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:37:25.076201 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:37:25.077967 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:37:25.079266 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:37:25.080335 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:37:25.081446 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:37:25.081471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:37:25.082507 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:37:25.084678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:37:25.087160 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:37:25.088211 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
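Both lvm2-activation passes above log "WARNING: Failed to connect to lvmetad. Falling back to device scanning.", which is what the LVM tools print when lvm.conf still asks for the lvmetad metadata-cache daemon but nothing is listening; activation still succeeds via direct scanning. A sketch of the lvm.conf knob involved, assuming an LVM release that still recognises it:

    # /etc/lvm/lvm.conf (fragment, hypothetical) -- scan devices directly
    # instead of asking the lvmetad cache daemon, silencing the warning
    global {
        use_lvmetad = 0
    }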
Jan 13 20:37:25.092346 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:37:25.093520 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:37:25.095271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:37:25.100178 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:37:25.101400 jq[1467]: false Jan 13 20:37:25.106681 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:37:25.109675 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:37:25.113164 extend-filesystems[1468]: Found loop3 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found loop4 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found loop5 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found sr0 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda1 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda2 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda3 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found usr Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda4 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda6 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda7 Jan 13 20:37:25.113164 extend-filesystems[1468]: Found vda9 Jan 13 20:37:25.113164 extend-filesystems[1468]: Checking size of /dev/vda9 Jan 13 20:37:25.143065 extend-filesystems[1468]: Resized partition /dev/vda9 Jan 13 20:37:25.116594 dbus-daemon[1466]: [system] SELinux support is enabled Jan 13 20:37:25.113393 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:37:25.114886 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:37:25.115361 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:37:25.144814 update_engine[1481]: I20250113 20:37:25.140166 1481 main.cc:92] Flatcar Update Engine starting Jan 13 20:37:25.144814 update_engine[1481]: I20250113 20:37:25.141360 1481 update_check_scheduler.cc:74] Next update check in 5m18s Jan 13 20:37:25.116715 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:37:25.120358 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:37:25.126395 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:37:25.130386 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:37:25.144515 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:37:25.144747 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:37:25.145215 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:37:25.145468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:37:25.146133 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:37:25.149566 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:37:25.149780 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 20:37:25.152871 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:37:25.152901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1374) Jan 13 20:37:25.152916 jq[1482]: true Jan 13 20:37:25.167791 jq[1492]: true Jan 13 20:37:25.171274 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:37:25.182833 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:37:25.187626 tar[1490]: linux-amd64/helm Jan 13 20:37:25.194142 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:37:25.206529 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:37:25.206529 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:37:25.206529 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:37:25.199110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:37:25.217780 extend-filesystems[1468]: Resized filesystem in /dev/vda9 Jan 13 20:37:25.199142 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:37:25.200737 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:37:25.200753 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:37:25.207115 systemd-logind[1480]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:37:25.207136 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:37:25.208345 systemd-logind[1480]: New seat seat0. Jan 13 20:37:25.211306 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:37:25.214368 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:37:25.215726 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:37:25.216625 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:37:25.231531 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:37:25.232696 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:37:25.235856 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:37:25.250134 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:37:25.295420 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:37:25.318845 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:37:25.329352 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:37:25.336470 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:37:25.336721 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:37:25.346316 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:37:25.358032 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:37:25.364379 systemd[1]: Started getty@tty1.service - Getty on tty1. 
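extend-filesystems.service above grows the root filesystem on /dev/vda9 on-line, from 553472 to 1864699 4 KiB blocks, with resize2fs 1.47.1 confirming "on-line resizing required". The roughly equivalent manual steps are sketched below; the log does not show which tool grew the partition itself, so growpart (from cloud-utils) is an assumption:

    # grow partition 9 on /dev/vda, then grow the mounted ext4 filesystem
    growpart /dev/vda 9      # assumption: cloud-utils growpart is available
    resize2fs /dev/vda9      # ext4 can be grown on-line while mounted at /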
Jan 13 20:37:25.367003 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:37:25.368313 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:37:25.381301 containerd[1497]: time="2025-01-13T20:37:25.379266100Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:37:25.402166 containerd[1497]: time="2025-01-13T20:37:25.402126333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:25.403983 containerd[1497]: time="2025-01-13T20:37:25.403934633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:25.404047 containerd[1497]: time="2025-01-13T20:37:25.404033128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:37:25.404146 containerd[1497]: time="2025-01-13T20:37:25.404131913Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:37:25.404370 containerd[1497]: time="2025-01-13T20:37:25.404354701Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:37:25.404438 containerd[1497]: time="2025-01-13T20:37:25.404425935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:25.404555 containerd[1497]: time="2025-01-13T20:37:25.404534769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:25.404608 containerd[1497]: time="2025-01-13T20:37:25.404596054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:25.404862 containerd[1497]: time="2025-01-13T20:37:25.404840031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:25.404923 containerd[1497]: time="2025-01-13T20:37:25.404909151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:25.404995 containerd[1497]: time="2025-01-13T20:37:25.404978972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:25.405051 containerd[1497]: time="2025-01-13T20:37:25.405039265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:25.405204 containerd[1497]: time="2025-01-13T20:37:25.405189046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:37:25.405486 containerd[1497]: time="2025-01-13T20:37:25.405469231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:37:25.405665 containerd[1497]: time="2025-01-13T20:37:25.405649539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:37:25.405711 containerd[1497]: time="2025-01-13T20:37:25.405700745Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:37:25.405844 containerd[1497]: time="2025-01-13T20:37:25.405829096Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:37:25.405947 containerd[1497]: time="2025-01-13T20:37:25.405926819Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:37:25.413973 containerd[1497]: time="2025-01-13T20:37:25.413912031Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:37:25.414017 containerd[1497]: time="2025-01-13T20:37:25.413993584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:37:25.414017 containerd[1497]: time="2025-01-13T20:37:25.414013772Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:37:25.414097 containerd[1497]: time="2025-01-13T20:37:25.414033739Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:37:25.414097 containerd[1497]: time="2025-01-13T20:37:25.414057865Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:37:25.414377 containerd[1497]: time="2025-01-13T20:37:25.414347307Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:37:25.416268 containerd[1497]: time="2025-01-13T20:37:25.416227513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:37:25.416435 containerd[1497]: time="2025-01-13T20:37:25.416402401Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:37:25.416471 containerd[1497]: time="2025-01-13T20:37:25.416432427Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:37:25.416510 containerd[1497]: time="2025-01-13T20:37:25.416468635Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:37:25.416510 containerd[1497]: time="2025-01-13T20:37:25.416490927Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416559 containerd[1497]: time="2025-01-13T20:37:25.416509872Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416559 containerd[1497]: time="2025-01-13T20:37:25.416527305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416559 containerd[1497]: time="2025-01-13T20:37:25.416545319Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 20:37:25.416619 containerd[1497]: time="2025-01-13T20:37:25.416563913Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416619 containerd[1497]: time="2025-01-13T20:37:25.416581897Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416619 containerd[1497]: time="2025-01-13T20:37:25.416600151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416619 containerd[1497]: time="2025-01-13T20:37:25.416616031Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:37:25.416699 containerd[1497]: time="2025-01-13T20:37:25.416647610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416699 containerd[1497]: time="2025-01-13T20:37:25.416667848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416699 containerd[1497]: time="2025-01-13T20:37:25.416684760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416762 containerd[1497]: time="2025-01-13T20:37:25.416702814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416762 containerd[1497]: time="2025-01-13T20:37:25.416721098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416762 containerd[1497]: time="2025-01-13T20:37:25.416739292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416762 containerd[1497]: time="2025-01-13T20:37:25.416755473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416852 containerd[1497]: time="2025-01-13T20:37:25.416774308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416852 containerd[1497]: time="2025-01-13T20:37:25.416796179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416852 containerd[1497]: time="2025-01-13T20:37:25.416817339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416852 containerd[1497]: time="2025-01-13T20:37:25.416833409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416852 containerd[1497]: time="2025-01-13T20:37:25.416850891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416965 containerd[1497]: time="2025-01-13T20:37:25.416868595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.416965 containerd[1497]: time="2025-01-13T20:37:25.416889985Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:37:25.416965 containerd[1497]: time="2025-01-13T20:37:25.416923157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 20:37:25.416965 containerd[1497]: time="2025-01-13T20:37:25.416953895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.417041 containerd[1497]: time="2025-01-13T20:37:25.416970175Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:37:25.417041 containerd[1497]: time="2025-01-13T20:37:25.417024116Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:37:25.417091 containerd[1497]: time="2025-01-13T20:37:25.417047821Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:37:25.417091 containerd[1497]: time="2025-01-13T20:37:25.417062969Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:37:25.417431 containerd[1497]: time="2025-01-13T20:37:25.417396635Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:37:25.417431 containerd[1497]: time="2025-01-13T20:37:25.417421912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:37:25.417472 containerd[1497]: time="2025-01-13T20:37:25.417439225Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:37:25.417472 containerd[1497]: time="2025-01-13T20:37:25.417454223Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:37:25.417472 containerd[1497]: time="2025-01-13T20:37:25.417467237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:37:25.417872 containerd[1497]: time="2025-01-13T20:37:25.417801764Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:37:25.417872 containerd[1497]: time="2025-01-13T20:37:25.417866305Z" level=info msg="Connect containerd service" Jan 13 20:37:25.418037 containerd[1497]: time="2025-01-13T20:37:25.417903806Z" level=info msg="using legacy CRI server" Jan 13 20:37:25.418037 containerd[1497]: time="2025-01-13T20:37:25.417914085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:37:25.418128 containerd[1497]: time="2025-01-13T20:37:25.418100034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:37:25.419043 containerd[1497]: time="2025-01-13T20:37:25.418884384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:37:25.419043 
containerd[1497]: time="2025-01-13T20:37:25.419000372Z" level=info msg="Start subscribing containerd event" Jan 13 20:37:25.419043 containerd[1497]: time="2025-01-13T20:37:25.419038052Z" level=info msg="Start recovering state" Jan 13 20:37:25.419125 containerd[1497]: time="2025-01-13T20:37:25.419114365Z" level=info msg="Start event monitor" Jan 13 20:37:25.419146 containerd[1497]: time="2025-01-13T20:37:25.419127871Z" level=info msg="Start snapshots syncer" Jan 13 20:37:25.419146 containerd[1497]: time="2025-01-13T20:37:25.419137098Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:37:25.419146 containerd[1497]: time="2025-01-13T20:37:25.419144913Z" level=info msg="Start streaming server" Jan 13 20:37:25.419557 containerd[1497]: time="2025-01-13T20:37:25.419535545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:37:25.419615 containerd[1497]: time="2025-01-13T20:37:25.419596309Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:37:25.421541 containerd[1497]: time="2025-01-13T20:37:25.421513604Z" level=info msg="containerd successfully booted in 0.043386s" Jan 13 20:37:25.421634 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:37:25.432384 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:49940.service - OpenSSH per-connection server daemon (10.0.0.1:49940). Jan 13 20:37:25.434166 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:37:25.486495 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 49940 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:25.488418 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:25.496686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:37:25.507300 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:37:25.510816 systemd-logind[1480]: New session 1 of user core. Jan 13 20:37:25.520252 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:37:25.536369 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:37:25.540626 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:37:25.577141 tar[1490]: linux-amd64/LICENSE Jan 13 20:37:25.577240 tar[1490]: linux-amd64/README.md Jan 13 20:37:25.592795 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:37:25.662777 systemd[1559]: Queued start job for default target default.target. Jan 13 20:37:25.675368 systemd[1559]: Created slice app.slice - User Application Slice. Jan 13 20:37:25.675394 systemd[1559]: Reached target paths.target - Paths. Jan 13 20:37:25.675408 systemd[1559]: Reached target timers.target - Timers. Jan 13 20:37:25.676995 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:37:25.688845 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:37:25.688983 systemd[1559]: Reached target sockets.target - Sockets. Jan 13 20:37:25.689002 systemd[1559]: Reached target basic.target - Basic System. Jan 13 20:37:25.689038 systemd[1559]: Reached target default.target - Main User Target. Jan 13 20:37:25.689070 systemd[1559]: Startup finished in 140ms. Jan 13 20:37:25.689662 systemd[1]: Started user@500.service - User Manager for UID 500. 
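The long "Start cri plugin with config {...}" entry above is containerd dumping its effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, registry.k8s.io/pause:3.8 as the sandbox image, and CNI binaries under /opt/cni/bin with configs expected in /etc/cni/net.d; the subsequent error only says that /etc/cni/net.d is still empty at this point. Below is a sketch of the same settings expressed as a config.toml fragment, plus a minimal CNI conflist that would clear the "no network config found" error; the file names and the 10.88.0.0/16 subnet are illustrative assumptions, not values taken from this host.

    # /etc/containerd/config.toml (fragment mirroring the dumped CRI settings)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

    # /etc/cni/net.d/10-containerd-net.conflist (hypothetical minimal network)
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local",
                    "ranges": [ [ { "subnet": "10.88.0.0/16" } ] ] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }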
Jan 13 20:37:25.692455 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:37:25.758918 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:49948.service - OpenSSH per-connection server daemon (10.0.0.1:49948). Jan 13 20:37:25.805538 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 49948 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:25.807053 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:25.811267 systemd-logind[1480]: New session 2 of user core. Jan 13 20:37:25.822202 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:37:25.877375 sshd[1575]: Connection closed by 10.0.0.1 port 49948 Jan 13 20:37:25.877771 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:25.888652 systemd[1]: sshd@1-10.0.0.63:22-10.0.0.1:49948.service: Deactivated successfully. Jan 13 20:37:25.890280 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:37:25.891963 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:37:25.904425 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:49958.service - OpenSSH per-connection server daemon (10.0.0.1:49958). Jan 13 20:37:25.907016 systemd-logind[1480]: Removed session 2. Jan 13 20:37:25.945177 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 49958 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:25.946673 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:25.951126 systemd-logind[1480]: New session 3 of user core. Jan 13 20:37:25.961222 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:37:26.018983 sshd[1582]: Connection closed by 10.0.0.1 port 49958 Jan 13 20:37:26.019280 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:26.023421 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:49958.service: Deactivated successfully. Jan 13 20:37:26.025294 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:37:26.025825 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:37:26.026688 systemd-logind[1480]: Removed session 3. Jan 13 20:37:26.624285 systemd-networkd[1407]: eth0: Gained IPv6LL Jan 13 20:37:26.627769 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:37:26.629802 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:37:26.642326 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:37:26.645356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:26.647898 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:37:26.669250 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:37:26.669552 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:37:26.671443 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:37:26.675528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:37:27.353679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:27.355704 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:37:27.357267 systemd[1]: Startup finished in 744ms (kernel) + 5.717s (initrd) + 4.504s (userspace) = 10.967s. 
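systemd reports the boot as 744 ms in the kernel, 5.717 s in the initrd and 4.504 s in userspace. When a figure like this needs to be broken down further, systemd-analyze on the booted host is the usual tool (a sketch, not output captured from this machine):

    systemd-analyze                 # kernel / initrd / userspace split
    systemd-analyze blame           # per-unit activation times, slowest first
    systemd-analyze critical-chain  # the dependency chain that gated startup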
Jan 13 20:37:27.362859 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:27.795051 kubelet[1608]: E0113 20:37:27.794898 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:27.799458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:27.799688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:37:27.800130 systemd[1]: kubelet.service: Consumed 1.011s CPU time. Jan 13 20:37:36.029656 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:55254.service - OpenSSH per-connection server daemon (10.0.0.1:55254). Jan 13 20:37:36.073822 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 55254 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:36.075418 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:36.079566 systemd-logind[1480]: New session 4 of user core. Jan 13 20:37:36.088205 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:37:36.143738 sshd[1623]: Connection closed by 10.0.0.1 port 55254 Jan 13 20:37:36.144249 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:36.154731 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:55254.service: Deactivated successfully. Jan 13 20:37:36.156544 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:37:36.158175 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:37:36.167339 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:55270.service - OpenSSH per-connection server daemon (10.0.0.1:55270). Jan 13 20:37:36.168355 systemd-logind[1480]: Removed session 4. Jan 13 20:37:36.206197 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 55270 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:36.207730 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:36.211846 systemd-logind[1480]: New session 5 of user core. Jan 13 20:37:36.221216 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:37:36.271993 sshd[1630]: Connection closed by 10.0.0.1 port 55270 Jan 13 20:37:36.272400 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:36.279591 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:55270.service: Deactivated successfully. Jan 13 20:37:36.281380 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:37:36.282908 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:37:36.292314 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:55280.service - OpenSSH per-connection server daemon (10.0.0.1:55280). Jan 13 20:37:36.293274 systemd-logind[1480]: Removed session 5. Jan 13 20:37:36.331525 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 55280 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:36.333026 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:36.336815 systemd-logind[1480]: New session 6 of user core. 
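The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is normally written by kubeadm init/join, so the unit keeps failing and being restarted until then. A minimal sketch of what such a file contains, with illustrative values only (the log never shows the real configuration):

    # /var/lib/kubelet/config.yaml (hypothetical minimal KubeletConfiguration)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock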
Jan 13 20:37:36.351271 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:37:36.406152 sshd[1637]: Connection closed by 10.0.0.1 port 55280 Jan 13 20:37:36.406512 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:36.420825 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:55280.service: Deactivated successfully. Jan 13 20:37:36.422325 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:37:36.423737 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:37:36.434317 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:55288.service - OpenSSH per-connection server daemon (10.0.0.1:55288). Jan 13 20:37:36.435203 systemd-logind[1480]: Removed session 6. Jan 13 20:37:36.474194 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 55288 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:36.475755 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:36.479718 systemd-logind[1480]: New session 7 of user core. Jan 13 20:37:36.489221 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:37:36.548999 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:37:36.549466 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:36.571018 sudo[1645]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:36.572914 sshd[1644]: Connection closed by 10.0.0.1 port 55288 Jan 13 20:37:36.573292 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:36.581383 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:55288.service: Deactivated successfully. Jan 13 20:37:36.583398 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:37:36.585628 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:37:36.600377 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:55300.service - OpenSSH per-connection server daemon (10.0.0.1:55300). Jan 13 20:37:36.601656 systemd-logind[1480]: Removed session 7. Jan 13 20:37:36.639540 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 55300 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:36.640943 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:36.645072 systemd-logind[1480]: New session 8 of user core. Jan 13 20:37:36.654191 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:37:36.708288 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:37:36.708629 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:36.712545 sudo[1654]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:36.719659 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:37:36.720092 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:36.740416 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:37:36.770104 augenrules[1676]: No rules Jan 13 20:37:36.772071 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:37:36.772394 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
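In session 8 above, the two sudo commands remove /etc/audit/rules.d/80-selinux.rules and 99-default.rules and restart audit-rules, after which augenrules reports "No rules": augenrules concatenates every *.rules file left under /etc/audit/rules.d/ and loads the result with auditctl -R, and nothing remains. A sketch of what a replacement drop-in could look like; the file name and watch target are assumptions for illustration:

    # /etc/audit/rules.d/10-example.rules (hypothetical)
    -D                                         # flush existing rules
    -b 8192                                    # kernel audit backlog buffer
    -w /etc/kubernetes/ -p wa -k kube-config   # watch writes and attribute changes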
Jan 13 20:37:36.773729 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 13 20:37:36.775206 sshd[1652]: Connection closed by 10.0.0.1 port 55300 Jan 13 20:37:36.775543 sshd-session[1650]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:36.786855 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:55300.service: Deactivated successfully. Jan 13 20:37:36.788583 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:37:36.790477 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:37:36.801453 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:55314.service - OpenSSH per-connection server daemon (10.0.0.1:55314). Jan 13 20:37:36.802619 systemd-logind[1480]: Removed session 8. Jan 13 20:37:36.843782 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 55314 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:37:36.845387 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:36.849549 systemd-logind[1480]: New session 9 of user core. Jan 13 20:37:36.863240 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:37:36.918541 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:37:36.919008 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:37:37.239463 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:37:37.239616 (dockerd)[1708]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:37:37.484257 dockerd[1708]: time="2025-01-13T20:37:37.484197443Z" level=info msg="Starting up" Jan 13 20:37:37.561335 systemd[1]: var-lib-docker-metacopy\x2dcheck1747532419-merged.mount: Deactivated successfully. Jan 13 20:37:37.586733 dockerd[1708]: time="2025-01-13T20:37:37.586670019Z" level=info msg="Loading containers: start." Jan 13 20:37:37.770101 kernel: Initializing XFRM netlink socket Jan 13 20:37:37.804444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:37:37.820471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:37.865219 systemd-networkd[1407]: docker0: Link UP Jan 13 20:37:37.983264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:37.988777 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:38.150661 kubelet[1861]: E0113 20:37:38.150517 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:38.157580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:38.157801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:37:38.174584 dockerd[1708]: time="2025-01-13T20:37:38.174521851Z" level=info msg="Loading containers: done." Jan 13 20:37:38.188604 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1289237892-merged.mount: Deactivated successfully. 
Jan 13 20:37:38.192333 dockerd[1708]: time="2025-01-13T20:37:38.192289065Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:37:38.192405 dockerd[1708]: time="2025-01-13T20:37:38.192384664Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:37:38.192515 dockerd[1708]: time="2025-01-13T20:37:38.192493829Z" level=info msg="Daemon has completed initialization" Jan 13 20:37:38.230146 dockerd[1708]: time="2025-01-13T20:37:38.230096373Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:37:38.230305 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:37:43.909905 containerd[1497]: time="2025-01-13T20:37:43.909864166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:37:44.645688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088993661.mount: Deactivated successfully. Jan 13 20:37:45.617561 containerd[1497]: time="2025-01-13T20:37:45.617492910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:45.636027 containerd[1497]: time="2025-01-13T20:37:45.635949857Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27975483" Jan 13 20:37:45.649918 containerd[1497]: time="2025-01-13T20:37:45.649886157Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:45.676233 containerd[1497]: time="2025-01-13T20:37:45.676197311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:45.677385 containerd[1497]: time="2025-01-13T20:37:45.677333992Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 1.767426454s" Jan 13 20:37:45.677385 containerd[1497]: time="2025-01-13T20:37:45.677383445Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Jan 13 20:37:45.678910 containerd[1497]: time="2025-01-13T20:37:45.678889629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:37:47.971429 containerd[1497]: time="2025-01-13T20:37:47.971335304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:47.972243 containerd[1497]: time="2025-01-13T20:37:47.972155301Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24702157" Jan 13 20:37:47.990153 containerd[1497]: time="2025-01-13T20:37:47.990104085Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:48.000680 containerd[1497]: time="2025-01-13T20:37:48.000631855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:48.001735 containerd[1497]: time="2025-01-13T20:37:48.001697333Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 2.322779781s" Jan 13 20:37:48.001735 containerd[1497]: time="2025-01-13T20:37:48.001730575Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Jan 13 20:37:48.002207 containerd[1497]: time="2025-01-13T20:37:48.002187421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:37:48.408011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:37:48.417235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:48.567662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:48.572183 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:48.609901 kubelet[1987]: E0113 20:37:48.609836 1987 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:48.613877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:48.614108 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:37:51.046292 containerd[1497]: time="2025-01-13T20:37:51.046216084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:51.081266 containerd[1497]: time="2025-01-13T20:37:51.081184140Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18652067" Jan 13 20:37:51.098068 containerd[1497]: time="2025-01-13T20:37:51.098020277Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:51.129752 containerd[1497]: time="2025-01-13T20:37:51.129678547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:51.130859 containerd[1497]: time="2025-01-13T20:37:51.130804828Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 3.128588432s" Jan 13 20:37:51.130859 containerd[1497]: time="2025-01-13T20:37:51.130842479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Jan 13 20:37:51.131508 containerd[1497]: time="2025-01-13T20:37:51.131304155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:37:52.974261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318389410.mount: Deactivated successfully. 
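The PullImage / ImageCreate / "Pulled image ... in ..." entries in this stretch are containerd's CRI plugin fetching the Kubernetes v1.31.4 control-plane images one at a time (kube-apiserver, kube-controller-manager and kube-scheduler so far; kube-proxy, coredns, pause and etcd follow), logging the resolved digest and size for each. The same pulls can be reproduced by hand through either the CRI or the containerd CLI, assuming crictl is pointed at the containerd socket (a sketch):

    crictl pull registry.k8s.io/kube-apiserver:v1.31.4
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.4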
Jan 13 20:37:53.295100 containerd[1497]: time="2025-01-13T20:37:53.294959217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:53.295803 containerd[1497]: time="2025-01-13T20:37:53.295736354Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243" Jan 13 20:37:53.296788 containerd[1497]: time="2025-01-13T20:37:53.296747380Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:53.298903 containerd[1497]: time="2025-01-13T20:37:53.298864710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:53.299628 containerd[1497]: time="2025-01-13T20:37:53.299590801Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.168202148s" Jan 13 20:37:53.299628 containerd[1497]: time="2025-01-13T20:37:53.299617592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 20:37:53.300155 containerd[1497]: time="2025-01-13T20:37:53.300121356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:37:53.827832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280559224.mount: Deactivated successfully. 
Jan 13 20:37:54.685342 containerd[1497]: time="2025-01-13T20:37:54.685293293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:54.686135 containerd[1497]: time="2025-01-13T20:37:54.686095888Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:37:54.687566 containerd[1497]: time="2025-01-13T20:37:54.687533363Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:54.691111 containerd[1497]: time="2025-01-13T20:37:54.691062120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:54.692354 containerd[1497]: time="2025-01-13T20:37:54.692324086Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.392161853s" Jan 13 20:37:54.692391 containerd[1497]: time="2025-01-13T20:37:54.692354473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:37:54.692885 containerd[1497]: time="2025-01-13T20:37:54.692735618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:37:55.596334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685175183.mount: Deactivated successfully. 
Jan 13 20:37:55.630453 containerd[1497]: time="2025-01-13T20:37:55.630404366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:55.635149 containerd[1497]: time="2025-01-13T20:37:55.635099319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 13 20:37:55.642333 containerd[1497]: time="2025-01-13T20:37:55.642281245Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:55.649223 containerd[1497]: time="2025-01-13T20:37:55.649172657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:37:55.649833 containerd[1497]: time="2025-01-13T20:37:55.649808429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 957.05069ms" Jan 13 20:37:55.649880 containerd[1497]: time="2025-01-13T20:37:55.649833015Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 13 20:37:55.650426 containerd[1497]: time="2025-01-13T20:37:55.650265726Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:37:56.713987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175571499.mount: Deactivated successfully. Jan 13 20:37:58.640946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:37:58.650299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:37:58.794592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:37:58.799211 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:37:59.061114 kubelet[2095]: E0113 20:37:59.060762 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:37:59.065557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:37:59.065783 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:38:01.366596 containerd[1497]: time="2025-01-13T20:38:01.366547769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:01.368538 containerd[1497]: time="2025-01-13T20:38:01.368503453Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 13 20:38:01.369756 containerd[1497]: time="2025-01-13T20:38:01.369719013Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:01.372681 containerd[1497]: time="2025-01-13T20:38:01.372647063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:01.373924 containerd[1497]: time="2025-01-13T20:38:01.373894925Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.723600375s" Jan 13 20:38:01.373980 containerd[1497]: time="2025-01-13T20:38:01.373923640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 13 20:38:03.983884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:03.995308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:04.020325 systemd[1]: Reloading requested from client PID 2156 ('systemctl') (unit session-9.scope)... Jan 13 20:38:04.020340 systemd[1]: Reloading... Jan 13 20:38:04.105170 zram_generator::config[2198]: No configuration found. Jan 13 20:38:04.295108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:38:04.381563 systemd[1]: Reloading finished in 360 ms. Jan 13 20:38:04.442502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:04.445523 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:38:04.445964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:04.459328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:04.608711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:04.613820 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:38:04.648784 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:38:04.648784 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
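The deprecation warnings above and below say that flags such as --container-runtime-endpoint belong in the file passed via --config. A tiny Go sketch that prints the corresponding KubeletConfiguration fragment; containerRuntimeEndpoint is the documented config-file counterpart of that flag, and the endpoint shown is the containerd default rather than a value read from this node:

package main

import "fmt"

func main() {
	// Illustrative config-file equivalent of --container-runtime-endpoint.
	fmt.Print(`apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`)
}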
Jan 13 20:38:04.648784 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:38:04.649769 kubelet[2246]: I0113 20:38:04.649718 2246 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:38:04.813016 kubelet[2246]: I0113 20:38:04.812973 2246 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:38:04.813016 kubelet[2246]: I0113 20:38:04.813005 2246 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:38:04.813291 kubelet[2246]: I0113 20:38:04.813274 2246 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:38:04.833728 kubelet[2246]: I0113 20:38:04.833679 2246 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:38:04.836331 kubelet[2246]: E0113 20:38:04.835576 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:04.841183 kubelet[2246]: E0113 20:38:04.841137 2246 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:38:04.841183 kubelet[2246]: I0113 20:38:04.841180 2246 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:38:04.847511 kubelet[2246]: I0113 20:38:04.847470 2246 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:38:04.847585 kubelet[2246]: I0113 20:38:04.847567 2246 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:38:04.847741 kubelet[2246]: I0113 20:38:04.847704 2246 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:38:04.847915 kubelet[2246]: I0113 20:38:04.847731 2246 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:38:04.847915 kubelet[2246]: I0113 20:38:04.847910 2246 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:38:04.848030 kubelet[2246]: I0113 20:38:04.847920 2246 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:38:04.848055 kubelet[2246]: I0113 20:38:04.848037 2246 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:38:04.849320 kubelet[2246]: I0113 20:38:04.849285 2246 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:38:04.849320 kubelet[2246]: I0113 20:38:04.849310 2246 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:38:04.849455 kubelet[2246]: I0113 20:38:04.849345 2246 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:38:04.849455 kubelet[2246]: I0113 20:38:04.849359 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:38:04.853194 kubelet[2246]: W0113 20:38:04.852335 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:04.853194 kubelet[2246]: E0113 20:38:04.852384 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:04.853194 kubelet[2246]: W0113 20:38:04.852519 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:04.853194 kubelet[2246]: E0113 20:38:04.852612 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:04.853627 kubelet[2246]: I0113 20:38:04.853604 2246 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:38:04.856014 kubelet[2246]: I0113 20:38:04.855995 2246 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:38:04.856145 kubelet[2246]: W0113 20:38:04.856119 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:38:04.856776 kubelet[2246]: I0113 20:38:04.856760 2246 server.go:1269] "Started kubelet" Jan 13 20:38:04.856989 kubelet[2246]: I0113 20:38:04.856849 2246 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:38:04.857023 kubelet[2246]: I0113 20:38:04.856973 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:38:04.857570 kubelet[2246]: I0113 20:38:04.857349 2246 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:38:04.858602 kubelet[2246]: I0113 20:38:04.858574 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:38:04.860118 kubelet[2246]: I0113 20:38:04.860020 2246 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:38:04.862977 kubelet[2246]: E0113 20:38:04.860143 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b055a8a80c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:38:04.856737985 +0000 UTC m=+0.238934225,LastTimestamp:2025-01-13 20:38:04.856737985 +0000 UTC m=+0.238934225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:38:04.862977 kubelet[2246]: E0113 20:38:04.862422 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:04.862977 kubelet[2246]: I0113 20:38:04.862449 2246 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:38:04.862977 kubelet[2246]: I0113 20:38:04.862570 2246 desired_state_of_world_populator.go:146] "Desired state 
populator starts to run" Jan 13 20:38:04.862977 kubelet[2246]: I0113 20:38:04.862629 2246 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:38:04.862977 kubelet[2246]: I0113 20:38:04.862656 2246 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:38:04.862977 kubelet[2246]: W0113 20:38:04.862854 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:04.863317 kubelet[2246]: E0113 20:38:04.862890 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:04.863317 kubelet[2246]: E0113 20:38:04.863036 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms" Jan 13 20:38:04.863938 kubelet[2246]: I0113 20:38:04.863914 2246 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:38:04.864066 kubelet[2246]: I0113 20:38:04.863984 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:38:04.866134 kubelet[2246]: I0113 20:38:04.865841 2246 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:38:04.868162 kubelet[2246]: E0113 20:38:04.867547 2246 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:38:04.878421 kubelet[2246]: I0113 20:38:04.878378 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:38:04.879930 kubelet[2246]: I0113 20:38:04.879715 2246 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:38:04.879930 kubelet[2246]: I0113 20:38:04.879744 2246 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:38:04.879930 kubelet[2246]: I0113 20:38:04.879760 2246 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:38:04.879930 kubelet[2246]: E0113 20:38:04.879807 2246 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:38:04.881306 kubelet[2246]: W0113 20:38:04.881272 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:04.881406 kubelet[2246]: E0113 20:38:04.881387 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:04.888246 kubelet[2246]: I0113 20:38:04.888226 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:38:04.888246 kubelet[2246]: I0113 20:38:04.888242 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:38:04.888331 kubelet[2246]: I0113 20:38:04.888260 2246 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:38:04.962899 kubelet[2246]: E0113 20:38:04.962853 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:04.980256 kubelet[2246]: E0113 20:38:04.980212 2246 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:38:05.063613 kubelet[2246]: E0113 20:38:05.063556 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:05.063971 kubelet[2246]: E0113 20:38:05.063912 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" Jan 13 20:38:05.163743 kubelet[2246]: E0113 20:38:05.163700 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:05.180930 kubelet[2246]: E0113 20:38:05.180884 2246 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:38:05.264316 kubelet[2246]: E0113 20:38:05.264243 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:05.324175 kubelet[2246]: I0113 20:38:05.324131 2246 policy_none.go:49] "None policy: Start" Jan 13 20:38:05.325379 kubelet[2246]: I0113 20:38:05.325347 2246 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:38:05.325379 kubelet[2246]: I0113 20:38:05.325381 2246 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:38:05.365412 kubelet[2246]: E0113 20:38:05.365326 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:05.373286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 13 20:38:05.386823 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:38:05.390406 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:38:05.398184 kubelet[2246]: I0113 20:38:05.398137 2246 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:38:05.398458 kubelet[2246]: I0113 20:38:05.398421 2246 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:38:05.398509 kubelet[2246]: I0113 20:38:05.398442 2246 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:38:05.399528 kubelet[2246]: I0113 20:38:05.398717 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:38:05.400102 kubelet[2246]: E0113 20:38:05.400064 2246 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:38:05.464929 kubelet[2246]: E0113 20:38:05.464797 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" Jan 13 20:38:05.500200 kubelet[2246]: I0113 20:38:05.500150 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:38:05.500536 kubelet[2246]: E0113 20:38:05.500490 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 13 20:38:05.589494 systemd[1]: Created slice kubepods-burstable-pod32c8c266b8c4ccf9cb2cc82ef1778a17.slice - libcontainer container kubepods-burstable-pod32c8c266b8c4ccf9cb2cc82ef1778a17.slice. Jan 13 20:38:05.603444 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 20:38:05.606904 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
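With the systemd cgroup driver ("CgroupDriver":"systemd" in the node config above), the kubelet asks systemd for one slice per QoS class and then one per pod, which is what the three pod slice lines above show for the static control-plane pods. A sketch of how such a slice name is assembled from QoS class and pod UID; the dash-to-underscore escaping only matters for UIDs that contain dashes, which these static-pod hashes do not:

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible above: kubepods[-<qos>]-pod<uid>.slice,
// with dashes in the UID escaped to underscores. Guaranteed pods sit directly
// under kubepods.slice, so they get no QoS segment.
func podSliceName(qos, uid string) string {
	parent := "kubepods"
	if qos == "burstable" || qos == "besteffort" {
		parent += "-" + qos
	}
	return parent + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// Reproduces the kube-apiserver static pod slice created above.
	fmt.Println(podSliceName("burstable", "32c8c266b8c4ccf9cb2cc82ef1778a17"))
}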
Jan 13 20:38:05.667618 kubelet[2246]: I0113 20:38:05.667570 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32c8c266b8c4ccf9cb2cc82ef1778a17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32c8c266b8c4ccf9cb2cc82ef1778a17\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:05.667618 kubelet[2246]: I0113 20:38:05.667599 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:05.667618 kubelet[2246]: I0113 20:38:05.667618 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:05.668026 kubelet[2246]: I0113 20:38:05.667637 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:05.668026 kubelet[2246]: I0113 20:38:05.667659 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32c8c266b8c4ccf9cb2cc82ef1778a17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32c8c266b8c4ccf9cb2cc82ef1778a17\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:05.668026 kubelet[2246]: I0113 20:38:05.667679 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32c8c266b8c4ccf9cb2cc82ef1778a17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32c8c266b8c4ccf9cb2cc82ef1778a17\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:05.668026 kubelet[2246]: I0113 20:38:05.667697 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:05.668026 kubelet[2246]: I0113 20:38:05.667718 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:05.668194 kubelet[2246]: I0113 20:38:05.667737 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " 
pod="kube-system/kube-scheduler-localhost" Jan 13 20:38:05.701559 kubelet[2246]: I0113 20:38:05.701532 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:38:05.701815 kubelet[2246]: E0113 20:38:05.701789 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 13 20:38:05.902235 kubelet[2246]: E0113 20:38:05.902210 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:05.902772 containerd[1497]: time="2025-01-13T20:38:05.902725775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32c8c266b8c4ccf9cb2cc82ef1778a17,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:05.905985 kubelet[2246]: E0113 20:38:05.905944 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:05.906308 containerd[1497]: time="2025-01-13T20:38:05.906272839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:05.909551 kubelet[2246]: E0113 20:38:05.909519 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:05.909823 containerd[1497]: time="2025-01-13T20:38:05.909791169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:06.068328 kubelet[2246]: W0113 20:38:06.068239 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:06.068328 kubelet[2246]: E0113 20:38:06.068317 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:06.103608 kubelet[2246]: I0113 20:38:06.103564 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:38:06.104006 kubelet[2246]: E0113 20:38:06.103942 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 13 20:38:06.117512 kubelet[2246]: W0113 20:38:06.117437 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:06.117512 kubelet[2246]: E0113 20:38:06.117500 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:06.123227 kubelet[2246]: W0113 20:38:06.123168 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:06.123227 kubelet[2246]: E0113 20:38:06.123210 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:06.265998 kubelet[2246]: E0113 20:38:06.265857 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="1.6s" Jan 13 20:38:06.279730 kubelet[2246]: W0113 20:38:06.279635 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Jan 13 20:38:06.279730 kubelet[2246]: E0113 20:38:06.279720 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.63:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:06.675826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588261478.mount: Deactivated successfully. 
Jan 13 20:38:06.683511 containerd[1497]: time="2025-01-13T20:38:06.683450089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:38:06.686671 containerd[1497]: time="2025-01-13T20:38:06.686606775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:38:06.687697 containerd[1497]: time="2025-01-13T20:38:06.687655728Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:38:06.689803 containerd[1497]: time="2025-01-13T20:38:06.689754966Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:38:06.690667 containerd[1497]: time="2025-01-13T20:38:06.690608698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:38:06.691876 containerd[1497]: time="2025-01-13T20:38:06.691847752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:38:06.692702 containerd[1497]: time="2025-01-13T20:38:06.692627904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:38:06.693954 containerd[1497]: time="2025-01-13T20:38:06.693899991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:38:06.694600 containerd[1497]: time="2025-01-13T20:38:06.694572708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 788.22689ms" Jan 13 20:38:06.697499 containerd[1497]: time="2025-01-13T20:38:06.697458892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 787.598251ms" Jan 13 20:38:06.700191 containerd[1497]: time="2025-01-13T20:38:06.700133723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 797.3036ms" Jan 13 20:38:06.820163 containerd[1497]: time="2025-01-13T20:38:06.820057435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:06.820873 containerd[1497]: time="2025-01-13T20:38:06.820727318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:06.820924 containerd[1497]: time="2025-01-13T20:38:06.820884176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:06.820976 containerd[1497]: time="2025-01-13T20:38:06.820946524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:06.821135 containerd[1497]: time="2025-01-13T20:38:06.821067023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:06.821987 containerd[1497]: time="2025-01-13T20:38:06.821729893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:06.821987 containerd[1497]: time="2025-01-13T20:38:06.821783194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:06.821987 containerd[1497]: time="2025-01-13T20:38:06.821800907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:06.821987 containerd[1497]: time="2025-01-13T20:38:06.821904414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:06.821987 containerd[1497]: time="2025-01-13T20:38:06.821530304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:06.821987 containerd[1497]: time="2025-01-13T20:38:06.821974508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:06.822183 containerd[1497]: time="2025-01-13T20:38:06.822057585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:06.845384 systemd[1]: Started cri-containerd-2ceee4e3c383ee45337403a49f22490e9011824458bcd46287ea115d483ad7c3.scope - libcontainer container 2ceee4e3c383ee45337403a49f22490e9011824458bcd46287ea115d483ad7c3. Jan 13 20:38:06.850698 systemd[1]: Started cri-containerd-1e5f0d2378bcb4c5545fa07bd631f12e2af5b4fc25baf999f38be9f7afdedc85.scope - libcontainer container 1e5f0d2378bcb4c5545fa07bd631f12e2af5b4fc25baf999f38be9f7afdedc85. Jan 13 20:38:06.852901 systemd[1]: Started cri-containerd-40233c1f3d0123b9086e7b4ef08067cccd01058552d76d949d2c4a161e9460ec.scope - libcontainer container 40233c1f3d0123b9086e7b4ef08067cccd01058552d76d949d2c4a161e9460ec. 
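Each RunPodSandbox request above ends with systemd starting a cri-containerd-<id>.scope unit, i.e. one runc shim scope per pod sandbox; with the systemd cgroup driver and cgroup v2 those scopes land underneath the pod slices created earlier. A stdlib sketch that walks the kubepods hierarchy and lists them; the /sys/fs/cgroup/kubepods.slice root is the usual cgroup v2 layout and is assumed here:

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

func main() {
	root := "/sys/fs/cgroup/kubepods.slice" // assumed cgroup v2 mount layout
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() && strings.HasPrefix(d.Name(), "cri-containerd-") {
			fmt.Println(path) // one scope per sandbox/container started above
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}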
Jan 13 20:38:06.893326 containerd[1497]: time="2025-01-13T20:38:06.893249179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ceee4e3c383ee45337403a49f22490e9011824458bcd46287ea115d483ad7c3\"" Jan 13 20:38:06.893625 containerd[1497]: time="2025-01-13T20:38:06.893583806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32c8c266b8c4ccf9cb2cc82ef1778a17,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e5f0d2378bcb4c5545fa07bd631f12e2af5b4fc25baf999f38be9f7afdedc85\"" Jan 13 20:38:06.894882 kubelet[2246]: E0113 20:38:06.894852 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:06.895189 kubelet[2246]: E0113 20:38:06.895070 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:06.904388 containerd[1497]: time="2025-01-13T20:38:06.904282270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"40233c1f3d0123b9086e7b4ef08067cccd01058552d76d949d2c4a161e9460ec\"" Jan 13 20:38:06.906283 kubelet[2246]: E0113 20:38:06.905578 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:06.906367 containerd[1497]: time="2025-01-13T20:38:06.905615012Z" level=info msg="CreateContainer within sandbox \"2ceee4e3c383ee45337403a49f22490e9011824458bcd46287ea115d483ad7c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:38:06.906367 containerd[1497]: time="2025-01-13T20:38:06.905921102Z" level=info msg="CreateContainer within sandbox \"1e5f0d2378bcb4c5545fa07bd631f12e2af5b4fc25baf999f38be9f7afdedc85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:38:06.907311 containerd[1497]: time="2025-01-13T20:38:06.907275105Z" level=info msg="CreateContainer within sandbox \"40233c1f3d0123b9086e7b4ef08067cccd01058552d76d949d2c4a161e9460ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:38:06.907372 kubelet[2246]: I0113 20:38:06.907315 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:38:06.908029 kubelet[2246]: E0113 20:38:06.907667 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Jan 13 20:38:06.939446 containerd[1497]: time="2025-01-13T20:38:06.939325121Z" level=info msg="CreateContainer within sandbox \"2ceee4e3c383ee45337403a49f22490e9011824458bcd46287ea115d483ad7c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ed30187154bf689135cc1eec6b6e47b33cce7bc4debec9b7a29a74d2643319d\"" Jan 13 20:38:06.940134 containerd[1497]: time="2025-01-13T20:38:06.940007939Z" level=info msg="StartContainer for \"2ed30187154bf689135cc1eec6b6e47b33cce7bc4debec9b7a29a74d2643319d\"" Jan 13 20:38:06.947924 containerd[1497]: time="2025-01-13T20:38:06.947867009Z" level=info msg="CreateContainer within sandbox 
\"40233c1f3d0123b9086e7b4ef08067cccd01058552d76d949d2c4a161e9460ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43a451d7b9b1ce3ab869c0f310b96db42f91cc27ba36866aa326d9f61e678f50\"" Jan 13 20:38:06.948434 containerd[1497]: time="2025-01-13T20:38:06.948410671Z" level=info msg="StartContainer for \"43a451d7b9b1ce3ab869c0f310b96db42f91cc27ba36866aa326d9f61e678f50\"" Jan 13 20:38:06.948852 containerd[1497]: time="2025-01-13T20:38:06.948729788Z" level=info msg="CreateContainer within sandbox \"1e5f0d2378bcb4c5545fa07bd631f12e2af5b4fc25baf999f38be9f7afdedc85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4b66d101d633ccc533afecbc363ea6f3eb34d8aa2f64a7bca8b5ef5abc0835ac\"" Jan 13 20:38:06.949638 containerd[1497]: time="2025-01-13T20:38:06.949327443Z" level=info msg="StartContainer for \"4b66d101d633ccc533afecbc363ea6f3eb34d8aa2f64a7bca8b5ef5abc0835ac\"" Jan 13 20:38:06.969275 systemd[1]: Started cri-containerd-2ed30187154bf689135cc1eec6b6e47b33cce7bc4debec9b7a29a74d2643319d.scope - libcontainer container 2ed30187154bf689135cc1eec6b6e47b33cce7bc4debec9b7a29a74d2643319d. Jan 13 20:38:06.989543 systemd[1]: Started cri-containerd-43a451d7b9b1ce3ab869c0f310b96db42f91cc27ba36866aa326d9f61e678f50.scope - libcontainer container 43a451d7b9b1ce3ab869c0f310b96db42f91cc27ba36866aa326d9f61e678f50. Jan 13 20:38:06.994186 systemd[1]: Started cri-containerd-4b66d101d633ccc533afecbc363ea6f3eb34d8aa2f64a7bca8b5ef5abc0835ac.scope - libcontainer container 4b66d101d633ccc533afecbc363ea6f3eb34d8aa2f64a7bca8b5ef5abc0835ac. Jan 13 20:38:07.030128 kubelet[2246]: E0113 20:38:07.030036 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.63:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:38:07.038868 containerd[1497]: time="2025-01-13T20:38:07.038783477Z" level=info msg="StartContainer for \"2ed30187154bf689135cc1eec6b6e47b33cce7bc4debec9b7a29a74d2643319d\" returns successfully" Jan 13 20:38:07.042840 containerd[1497]: time="2025-01-13T20:38:07.042785954Z" level=info msg="StartContainer for \"43a451d7b9b1ce3ab869c0f310b96db42f91cc27ba36866aa326d9f61e678f50\" returns successfully" Jan 13 20:38:07.043104 containerd[1497]: time="2025-01-13T20:38:07.043037862Z" level=info msg="StartContainer for \"4b66d101d633ccc533afecbc363ea6f3eb34d8aa2f64a7bca8b5ef5abc0835ac\" returns successfully" Jan 13 20:38:07.892548 kubelet[2246]: E0113 20:38:07.892497 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:07.899148 kubelet[2246]: E0113 20:38:07.899129 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:07.900121 kubelet[2246]: E0113 20:38:07.900068 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:08.017853 kubelet[2246]: E0113 20:38:08.017801 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" 
Jan 13 20:38:08.362474 kubelet[2246]: E0113 20:38:08.362352 2246 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:38:08.509760 kubelet[2246]: I0113 20:38:08.509723 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:38:08.516910 kubelet[2246]: I0113 20:38:08.516871 2246 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:38:08.516910 kubelet[2246]: E0113 20:38:08.516904 2246 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 20:38:08.524235 kubelet[2246]: E0113 20:38:08.524198 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:08.854688 kubelet[2246]: I0113 20:38:08.854600 2246 apiserver.go:52] "Watching apiserver" Jan 13 20:38:08.863474 kubelet[2246]: I0113 20:38:08.863421 2246 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:38:08.905981 kubelet[2246]: E0113 20:38:08.905949 2246 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 13 20:38:08.906405 kubelet[2246]: E0113 20:38:08.906126 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:08.906405 kubelet[2246]: E0113 20:38:08.906165 2246 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:08.906405 kubelet[2246]: E0113 20:38:08.906334 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:10.046098 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-9.scope)... Jan 13 20:38:10.046113 systemd[1]: Reloading... Jan 13 20:38:10.112115 zram_generator::config[2566]: No configuration found. Jan 13 20:38:10.220978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:38:10.313014 systemd[1]: Reloading finished in 266 ms. Jan 13 20:38:10.357273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:10.374605 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:38:10.374900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:10.386545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:10.390479 update_engine[1481]: I20250113 20:38:10.390421 1481 update_attempter.cc:509] Updating boot flags... 
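Right after the node registers, the first mirror-pod creations for the static control-plane pods are rejected because the built-in PriorityClass system-node-critical has not been created yet; the API server populates the system priority classes shortly after it becomes healthy, and the kubelet retries until then. A short client-go sketch of checking for that PriorityClass, reusing the same hypothetical kubeconfig assumption as the previous sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not there yet:", err) // the state behind the mirror-pod errors above
		return
	}
	fmt.Println("system-node-critical value:", pc.Value)
}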
Jan 13 20:38:10.419106 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2613) Jan 13 20:38:10.456298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2615) Jan 13 20:38:10.500107 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2615) Jan 13 20:38:10.555160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:10.561114 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:38:10.604737 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:38:10.604737 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:38:10.604737 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:38:10.605162 kubelet[2626]: I0113 20:38:10.604740 2626 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:38:10.612533 kubelet[2626]: I0113 20:38:10.612474 2626 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:38:10.612533 kubelet[2626]: I0113 20:38:10.612517 2626 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:38:10.612807 kubelet[2626]: I0113 20:38:10.612783 2626 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:38:10.614036 kubelet[2626]: I0113 20:38:10.614010 2626 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:38:10.616003 kubelet[2626]: I0113 20:38:10.615857 2626 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:38:10.618823 kubelet[2626]: E0113 20:38:10.618794 2626 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:38:10.618823 kubelet[2626]: I0113 20:38:10.618823 2626 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:38:10.623658 kubelet[2626]: I0113 20:38:10.623627 2626 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:38:10.623757 kubelet[2626]: I0113 20:38:10.623735 2626 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:38:10.623876 kubelet[2626]: I0113 20:38:10.623846 2626 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:38:10.624026 kubelet[2626]: I0113 20:38:10.623869 2626 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:38:10.624026 kubelet[2626]: I0113 20:38:10.624024 2626 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:38:10.624167 kubelet[2626]: I0113 20:38:10.624033 2626 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:38:10.624167 kubelet[2626]: I0113 20:38:10.624060 2626 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:38:10.624216 kubelet[2626]: I0113 20:38:10.624183 2626 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:38:10.624216 kubelet[2626]: I0113 20:38:10.624194 2626 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:38:10.624271 kubelet[2626]: I0113 20:38:10.624224 2626 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:38:10.624271 kubelet[2626]: I0113 20:38:10.624239 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:38:10.628096 kubelet[2626]: I0113 20:38:10.624641 2626 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:38:10.628096 kubelet[2626]: I0113 20:38:10.624972 2626 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:38:10.628096 kubelet[2626]: I0113 20:38:10.625327 2626 server.go:1269] "Started kubelet" Jan 13 20:38:10.628096 kubelet[2626]: I0113 20:38:10.625813 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 
20:38:10.628096 kubelet[2626]: I0113 20:38:10.626204 2626 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:38:10.628096 kubelet[2626]: I0113 20:38:10.626240 2626 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:38:10.628096 kubelet[2626]: I0113 20:38:10.627059 2626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:38:10.628921 kubelet[2626]: I0113 20:38:10.628895 2626 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:38:10.629861 kubelet[2626]: I0113 20:38:10.629823 2626 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:38:10.634208 kubelet[2626]: I0113 20:38:10.634190 2626 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:38:10.634829 kubelet[2626]: E0113 20:38:10.634812 2626 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:38:10.635608 kubelet[2626]: I0113 20:38:10.635583 2626 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:38:10.637498 kubelet[2626]: I0113 20:38:10.637469 2626 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:38:10.637590 kubelet[2626]: I0113 20:38:10.637563 2626 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:38:10.637856 kubelet[2626]: E0113 20:38:10.637834 2626 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:38:10.638452 kubelet[2626]: I0113 20:38:10.638414 2626 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:38:10.641758 kubelet[2626]: I0113 20:38:10.638417 2626 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:38:10.642712 kubelet[2626]: I0113 20:38:10.642680 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:38:10.644378 kubelet[2626]: I0113 20:38:10.644198 2626 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:38:10.644378 kubelet[2626]: I0113 20:38:10.644228 2626 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:38:10.644378 kubelet[2626]: I0113 20:38:10.644246 2626 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:38:10.644378 kubelet[2626]: E0113 20:38:10.644304 2626 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:38:10.673627 kubelet[2626]: I0113 20:38:10.673597 2626 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:38:10.673627 kubelet[2626]: I0113 20:38:10.673616 2626 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:38:10.673627 kubelet[2626]: I0113 20:38:10.673636 2626 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:38:10.673847 kubelet[2626]: I0113 20:38:10.673781 2626 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:38:10.673847 kubelet[2626]: I0113 20:38:10.673791 2626 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:38:10.673847 kubelet[2626]: I0113 20:38:10.673808 2626 policy_none.go:49] "None policy: Start" Jan 13 20:38:10.674371 kubelet[2626]: I0113 20:38:10.674355 2626 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:38:10.674418 kubelet[2626]: I0113 20:38:10.674376 2626 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:38:10.674522 kubelet[2626]: I0113 20:38:10.674502 2626 state_mem.go:75] "Updated machine memory state" Jan 13 20:38:10.678950 kubelet[2626]: I0113 20:38:10.678832 2626 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:38:10.679008 kubelet[2626]: I0113 20:38:10.678988 2626 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:38:10.679038 kubelet[2626]: I0113 20:38:10.678997 2626 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:38:10.679306 kubelet[2626]: I0113 20:38:10.679165 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:38:10.784654 kubelet[2626]: I0113 20:38:10.784605 2626 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:38:10.790685 kubelet[2626]: I0113 20:38:10.790648 2626 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 20:38:10.790841 kubelet[2626]: I0113 20:38:10.790771 2626 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:38:10.842889 kubelet[2626]: I0113 20:38:10.842841 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:10.842889 kubelet[2626]: I0113 20:38:10.842882 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:38:10.842889 kubelet[2626]: I0113 20:38:10.842905 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32c8c266b8c4ccf9cb2cc82ef1778a17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32c8c266b8c4ccf9cb2cc82ef1778a17\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:10.843114 kubelet[2626]: I0113 20:38:10.842925 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32c8c266b8c4ccf9cb2cc82ef1778a17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32c8c266b8c4ccf9cb2cc82ef1778a17\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:10.843114 kubelet[2626]: I0113 20:38:10.842947 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32c8c266b8c4ccf9cb2cc82ef1778a17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32c8c266b8c4ccf9cb2cc82ef1778a17\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:10.843114 kubelet[2626]: I0113 20:38:10.842966 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:10.843114 kubelet[2626]: I0113 20:38:10.843025 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:10.843114 kubelet[2626]: I0113 20:38:10.843059 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:10.843239 kubelet[2626]: I0113 20:38:10.843102 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:38:11.054179 sudo[2661]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:38:11.054602 sudo[2661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:38:11.058265 kubelet[2626]: E0113 20:38:11.058144 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:11.058265 kubelet[2626]: E0113 20:38:11.058173 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:11.058265 kubelet[2626]: E0113 20:38:11.058141 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:11.531710 sudo[2661]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:11.624897 kubelet[2626]: I0113 20:38:11.624863 2626 apiserver.go:52] "Watching apiserver" Jan 13 20:38:11.636242 kubelet[2626]: I0113 20:38:11.636199 2626 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:38:11.656992 kubelet[2626]: E0113 20:38:11.656649 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:11.656992 kubelet[2626]: E0113 20:38:11.656907 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:11.663359 kubelet[2626]: E0113 20:38:11.663304 2626 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:38:11.663450 kubelet[2626]: E0113 20:38:11.663430 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:11.684679 kubelet[2626]: I0113 20:38:11.684620 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.684603041 podStartE2EDuration="1.684603041s" podCreationTimestamp="2025-01-13 20:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:11.676996863 +0000 UTC m=+1.111160848" watchObservedRunningTime="2025-01-13 20:38:11.684603041 +0000 UTC m=+1.118767016" Jan 13 20:38:11.694068 kubelet[2626]: I0113 20:38:11.693579 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.693559435 podStartE2EDuration="1.693559435s" podCreationTimestamp="2025-01-13 20:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:11.69318189 +0000 UTC m=+1.127345865" watchObservedRunningTime="2025-01-13 20:38:11.693559435 +0000 UTC m=+1.127723410" Jan 13 20:38:11.694068 kubelet[2626]: I0113 20:38:11.693707 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6936896909999999 podStartE2EDuration="1.693689691s" podCreationTimestamp="2025-01-13 20:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:11.685115471 +0000 UTC m=+1.119279456" watchObservedRunningTime="2025-01-13 20:38:11.693689691 +0000 UTC m=+1.127853676" Jan 13 20:38:12.658399 kubelet[2626]: E0113 20:38:12.658356 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:12.879529 sudo[1687]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:12.880757 sshd[1686]: Connection closed by 10.0.0.1 port 55314 Jan 13 20:38:12.881178 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:12.884963 systemd[1]: 
sshd@8-10.0.0.63:22-10.0.0.1:55314.service: Deactivated successfully. Jan 13 20:38:12.887098 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:38:12.887346 systemd[1]: session-9.scope: Consumed 4.520s CPU time, 153.5M memory peak, 0B memory swap peak. Jan 13 20:38:12.887814 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:38:12.888810 systemd-logind[1480]: Removed session 9. Jan 13 20:38:14.939654 kubelet[2626]: E0113 20:38:14.939612 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:15.405236 kubelet[2626]: I0113 20:38:15.405131 2626 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:38:15.405548 containerd[1497]: time="2025-01-13T20:38:15.405503975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:38:15.405945 kubelet[2626]: I0113 20:38:15.405792 2626 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:38:15.851735 kubelet[2626]: E0113 20:38:15.851588 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:16.106942 kubelet[2626]: W0113 20:38:16.106330 2626 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:38:16.106942 kubelet[2626]: E0113 20:38:16.106380 2626 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 13 20:38:16.107664 kubelet[2626]: W0113 20:38:16.107259 2626 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:38:16.107664 kubelet[2626]: E0113 20:38:16.107304 2626 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 13 20:38:16.107664 kubelet[2626]: W0113 20:38:16.107554 2626 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:38:16.107664 kubelet[2626]: E0113 20:38:16.107572 2626 reflector.go:158] "Unhandled Error" 
err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 13 20:38:16.113044 systemd[1]: Created slice kubepods-besteffort-pod3d160b2a_b4fa_4e57_b80a_464f93d8d277.slice - libcontainer container kubepods-besteffort-pod3d160b2a_b4fa_4e57_b80a_464f93d8d277.slice. Jan 13 20:38:16.126477 systemd[1]: Created slice kubepods-burstable-pod58a465d2_8934_4385_94f7_ee2aa3ae31a0.slice - libcontainer container kubepods-burstable-pod58a465d2_8934_4385_94f7_ee2aa3ae31a0.slice. Jan 13 20:38:16.177469 kubelet[2626]: I0113 20:38:16.177408 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58a465d2-8934-4385-94f7-ee2aa3ae31a0-clustermesh-secrets\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177469 kubelet[2626]: I0113 20:38:16.177457 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-run\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177469 kubelet[2626]: I0113 20:38:16.177476 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-bpf-maps\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177666 kubelet[2626]: I0113 20:38:16.177492 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-net\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177666 kubelet[2626]: I0113 20:38:16.177510 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-kernel\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177666 kubelet[2626]: I0113 20:38:16.177525 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hubble-tls\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177666 kubelet[2626]: I0113 20:38:16.177556 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d160b2a-b4fa-4e57-b80a-464f93d8d277-kube-proxy\") pod \"kube-proxy-p4tkd\" (UID: \"3d160b2a-b4fa-4e57-b80a-464f93d8d277\") " pod="kube-system/kube-proxy-p4tkd" Jan 13 20:38:16.177666 kubelet[2626]: I0113 20:38:16.177580 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cni-path\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177666 kubelet[2626]: I0113 20:38:16.177596 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-etc-cni-netd\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177804 kubelet[2626]: I0113 20:38:16.177612 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d160b2a-b4fa-4e57-b80a-464f93d8d277-xtables-lock\") pod \"kube-proxy-p4tkd\" (UID: \"3d160b2a-b4fa-4e57-b80a-464f93d8d277\") " pod="kube-system/kube-proxy-p4tkd" Jan 13 20:38:16.177804 kubelet[2626]: I0113 20:38:16.177643 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-xtables-lock\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177804 kubelet[2626]: I0113 20:38:16.177667 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-config-path\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177804 kubelet[2626]: I0113 20:38:16.177701 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbhm6\" (UniqueName: \"kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-kube-api-access-fbhm6\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177804 kubelet[2626]: I0113 20:38:16.177725 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-lib-modules\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177912 kubelet[2626]: I0113 20:38:16.177746 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d160b2a-b4fa-4e57-b80a-464f93d8d277-lib-modules\") pod \"kube-proxy-p4tkd\" (UID: \"3d160b2a-b4fa-4e57-b80a-464f93d8d277\") " pod="kube-system/kube-proxy-p4tkd" Jan 13 20:38:16.177912 kubelet[2626]: I0113 20:38:16.177760 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pj9\" (UniqueName: \"kubernetes.io/projected/3d160b2a-b4fa-4e57-b80a-464f93d8d277-kube-api-access-h8pj9\") pod \"kube-proxy-p4tkd\" (UID: \"3d160b2a-b4fa-4e57-b80a-464f93d8d277\") " pod="kube-system/kube-proxy-p4tkd" Jan 13 20:38:16.177912 kubelet[2626]: I0113 20:38:16.177775 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hostproc\") pod \"cilium-lb9kb\" (UID: 
\"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.177912 kubelet[2626]: I0113 20:38:16.177788 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-cgroup\") pod \"cilium-lb9kb\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " pod="kube-system/cilium-lb9kb" Jan 13 20:38:16.282437 kubelet[2626]: E0113 20:38:16.282395 2626 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:38:16.282437 kubelet[2626]: E0113 20:38:16.282422 2626 projected.go:194] Error preparing data for projected volume kube-api-access-h8pj9 for pod kube-system/kube-proxy-p4tkd: configmap "kube-root-ca.crt" not found Jan 13 20:38:16.282560 kubelet[2626]: E0113 20:38:16.282469 2626 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d160b2a-b4fa-4e57-b80a-464f93d8d277-kube-api-access-h8pj9 podName:3d160b2a-b4fa-4e57-b80a-464f93d8d277 nodeName:}" failed. No retries permitted until 2025-01-13 20:38:16.782453692 +0000 UTC m=+6.216617667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h8pj9" (UniqueName: "kubernetes.io/projected/3d160b2a-b4fa-4e57-b80a-464f93d8d277-kube-api-access-h8pj9") pod "kube-proxy-p4tkd" (UID: "3d160b2a-b4fa-4e57-b80a-464f93d8d277") : configmap "kube-root-ca.crt" not found Jan 13 20:38:16.283669 kubelet[2626]: E0113 20:38:16.283538 2626 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:38:16.283669 kubelet[2626]: E0113 20:38:16.283570 2626 projected.go:194] Error preparing data for projected volume kube-api-access-fbhm6 for pod kube-system/cilium-lb9kb: configmap "kube-root-ca.crt" not found Jan 13 20:38:16.283669 kubelet[2626]: E0113 20:38:16.283622 2626 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-kube-api-access-fbhm6 podName:58a465d2-8934-4385-94f7-ee2aa3ae31a0 nodeName:}" failed. No retries permitted until 2025-01-13 20:38:16.783604746 +0000 UTC m=+6.217768791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fbhm6" (UniqueName: "kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-kube-api-access-fbhm6") pod "cilium-lb9kb" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0") : configmap "kube-root-ca.crt" not found Jan 13 20:38:16.358023 kubelet[2626]: E0113 20:38:16.357887 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:16.404943 systemd[1]: Created slice kubepods-besteffort-podf556f878_838f_42db_ae14_a2ce81aa22fc.slice - libcontainer container kubepods-besteffort-podf556f878_838f_42db_ae14_a2ce81aa22fc.slice. 
Jan 13 20:38:16.480544 kubelet[2626]: I0113 20:38:16.480479 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f556f878-838f-42db-ae14-a2ce81aa22fc-cilium-config-path\") pod \"cilium-operator-5d85765b45-fzgs8\" (UID: \"f556f878-838f-42db-ae14-a2ce81aa22fc\") " pod="kube-system/cilium-operator-5d85765b45-fzgs8" Jan 13 20:38:16.480742 kubelet[2626]: I0113 20:38:16.480619 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skt49\" (UniqueName: \"kubernetes.io/projected/f556f878-838f-42db-ae14-a2ce81aa22fc-kube-api-access-skt49\") pod \"cilium-operator-5d85765b45-fzgs8\" (UID: \"f556f878-838f-42db-ae14-a2ce81aa22fc\") " pod="kube-system/cilium-operator-5d85765b45-fzgs8" Jan 13 20:38:17.007918 kubelet[2626]: E0113 20:38:17.007854 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:17.009278 containerd[1497]: time="2025-01-13T20:38:17.008689221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fzgs8,Uid:f556f878-838f-42db-ae14-a2ce81aa22fc,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:17.025249 kubelet[2626]: E0113 20:38:17.025206 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:17.025698 containerd[1497]: time="2025-01-13T20:38:17.025656522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4tkd,Uid:3d160b2a-b4fa-4e57-b80a-464f93d8d277,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:17.164952 containerd[1497]: time="2025-01-13T20:38:17.164383182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:17.164952 containerd[1497]: time="2025-01-13T20:38:17.164969538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:17.165214 containerd[1497]: time="2025-01-13T20:38:17.165005286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.165214 containerd[1497]: time="2025-01-13T20:38:17.165117447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.165947 containerd[1497]: time="2025-01-13T20:38:17.165832076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:17.166154 containerd[1497]: time="2025-01-13T20:38:17.166050077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:17.166288 containerd[1497]: time="2025-01-13T20:38:17.166232031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.166604 containerd[1497]: time="2025-01-13T20:38:17.166570881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.189207 systemd[1]: Started cri-containerd-9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6.scope - libcontainer container 9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6. Jan 13 20:38:17.190990 systemd[1]: Started cri-containerd-a8ba5085b1c52a6ab341bdedc6271973a3abc919f0889175f5a17cefb9643f71.scope - libcontainer container a8ba5085b1c52a6ab341bdedc6271973a3abc919f0889175f5a17cefb9643f71. Jan 13 20:38:17.213013 containerd[1497]: time="2025-01-13T20:38:17.212965144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4tkd,Uid:3d160b2a-b4fa-4e57-b80a-464f93d8d277,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8ba5085b1c52a6ab341bdedc6271973a3abc919f0889175f5a17cefb9643f71\"" Jan 13 20:38:17.213844 kubelet[2626]: E0113 20:38:17.213813 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:17.215437 containerd[1497]: time="2025-01-13T20:38:17.215404447Z" level=info msg="CreateContainer within sandbox \"a8ba5085b1c52a6ab341bdedc6271973a3abc919f0889175f5a17cefb9643f71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:38:17.231274 containerd[1497]: time="2025-01-13T20:38:17.231166022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fzgs8,Uid:f556f878-838f-42db-ae14-a2ce81aa22fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6\"" Jan 13 20:38:17.231840 kubelet[2626]: E0113 20:38:17.231809 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:17.233295 containerd[1497]: time="2025-01-13T20:38:17.233269331Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:38:17.239616 containerd[1497]: time="2025-01-13T20:38:17.239576884Z" level=info msg="CreateContainer within sandbox \"a8ba5085b1c52a6ab341bdedc6271973a3abc919f0889175f5a17cefb9643f71\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"487f4d3d0d6f87d0a1cd8c155a7defddfff3cf7f43d98ba71c88d8f75b3c69ec\"" Jan 13 20:38:17.240226 containerd[1497]: time="2025-01-13T20:38:17.240037474Z" level=info msg="StartContainer for \"487f4d3d0d6f87d0a1cd8c155a7defddfff3cf7f43d98ba71c88d8f75b3c69ec\"" Jan 13 20:38:17.274222 systemd[1]: Started cri-containerd-487f4d3d0d6f87d0a1cd8c155a7defddfff3cf7f43d98ba71c88d8f75b3c69ec.scope - libcontainer container 487f4d3d0d6f87d0a1cd8c155a7defddfff3cf7f43d98ba71c88d8f75b3c69ec. Jan 13 20:38:17.279979 kubelet[2626]: E0113 20:38:17.279951 2626 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 20:38:17.279979 kubelet[2626]: E0113 20:38:17.279974 2626 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-lb9kb: failed to sync secret cache: timed out waiting for the condition Jan 13 20:38:17.280109 kubelet[2626]: E0113 20:38:17.280026 2626 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hubble-tls podName:58a465d2-8934-4385-94f7-ee2aa3ae31a0 nodeName:}" failed. 
No retries permitted until 2025-01-13 20:38:17.780010619 +0000 UTC m=+7.214174594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hubble-tls") pod "cilium-lb9kb" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:38:17.308841 containerd[1497]: time="2025-01-13T20:38:17.308802574Z" level=info msg="StartContainer for \"487f4d3d0d6f87d0a1cd8c155a7defddfff3cf7f43d98ba71c88d8f75b3c69ec\" returns successfully" Jan 13 20:38:17.666387 kubelet[2626]: E0113 20:38:17.666354 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:17.929000 kubelet[2626]: E0113 20:38:17.928839 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:17.929652 containerd[1497]: time="2025-01-13T20:38:17.929555573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lb9kb,Uid:58a465d2-8934-4385-94f7-ee2aa3ae31a0,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:17.971728 containerd[1497]: time="2025-01-13T20:38:17.971021556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:17.971728 containerd[1497]: time="2025-01-13T20:38:17.971676282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:17.971728 containerd[1497]: time="2025-01-13T20:38:17.971695678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.971939 containerd[1497]: time="2025-01-13T20:38:17.971794676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:17.997337 systemd[1]: Started cri-containerd-8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075.scope - libcontainer container 8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075. Jan 13 20:38:18.019443 containerd[1497]: time="2025-01-13T20:38:18.019391504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lb9kb,Uid:58a465d2-8934-4385-94f7-ee2aa3ae31a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\"" Jan 13 20:38:18.020167 kubelet[2626]: E0113 20:38:18.020138 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:18.955130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963194119.mount: Deactivated successfully. 
Jan 13 20:38:19.716439 containerd[1497]: time="2025-01-13T20:38:19.716381012Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:19.729457 containerd[1497]: time="2025-01-13T20:38:19.729393960Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907197" Jan 13 20:38:19.737293 containerd[1497]: time="2025-01-13T20:38:19.737248331Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:19.738487 containerd[1497]: time="2025-01-13T20:38:19.738462500Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.505159787s" Jan 13 20:38:19.738487 containerd[1497]: time="2025-01-13T20:38:19.738491204Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:38:19.739506 containerd[1497]: time="2025-01-13T20:38:19.739484317Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:38:19.740539 containerd[1497]: time="2025-01-13T20:38:19.740509631Z" level=info msg="CreateContainer within sandbox \"9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:38:20.178978 containerd[1497]: time="2025-01-13T20:38:20.178923393Z" level=info msg="CreateContainer within sandbox \"9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\"" Jan 13 20:38:20.179555 containerd[1497]: time="2025-01-13T20:38:20.179513455Z" level=info msg="StartContainer for \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\"" Jan 13 20:38:20.207462 systemd[1]: Started cri-containerd-e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7.scope - libcontainer container e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7. 
Jan 13 20:38:20.632571 containerd[1497]: time="2025-01-13T20:38:20.632274749Z" level=info msg="StartContainer for \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" returns successfully" Jan 13 20:38:20.675057 kubelet[2626]: E0113 20:38:20.675020 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:20.721122 kubelet[2626]: I0113 20:38:20.716850 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p4tkd" podStartSLOduration=4.716831139 podStartE2EDuration="4.716831139s" podCreationTimestamp="2025-01-13 20:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:17.674880802 +0000 UTC m=+7.109044777" watchObservedRunningTime="2025-01-13 20:38:20.716831139 +0000 UTC m=+10.150995114" Jan 13 20:38:21.676462 kubelet[2626]: E0113 20:38:21.676425 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:24.944321 kubelet[2626]: E0113 20:38:24.944266 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:24.981931 kubelet[2626]: I0113 20:38:24.981855 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fzgs8" podStartSLOduration=6.475429735 podStartE2EDuration="8.981838006s" podCreationTimestamp="2025-01-13 20:38:16 +0000 UTC" firstStartedPulling="2025-01-13 20:38:17.232907498 +0000 UTC m=+6.667071473" lastFinishedPulling="2025-01-13 20:38:19.739315769 +0000 UTC m=+9.173479744" observedRunningTime="2025-01-13 20:38:20.719722459 +0000 UTC m=+10.153886424" watchObservedRunningTime="2025-01-13 20:38:24.981838006 +0000 UTC m=+14.416001991" Jan 13 20:38:25.877297 kubelet[2626]: E0113 20:38:25.877262 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:26.365611 kubelet[2626]: E0113 20:38:26.365529 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:31.405787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113764591.mount: Deactivated successfully. 
Jan 13 20:38:35.907500 containerd[1497]: time="2025-01-13T20:38:35.907437291Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:35.927009 containerd[1497]: time="2025-01-13T20:38:35.926890049Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734747" Jan 13 20:38:35.940858 containerd[1497]: time="2025-01-13T20:38:35.940789914Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:35.966252 containerd[1497]: time="2025-01-13T20:38:35.966187191Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.226671505s" Jan 13 20:38:35.966252 containerd[1497]: time="2025-01-13T20:38:35.966239469Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:38:35.971394 containerd[1497]: time="2025-01-13T20:38:35.971356223Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:38:36.018527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737428613.mount: Deactivated successfully. Jan 13 20:38:36.050358 containerd[1497]: time="2025-01-13T20:38:36.050289080Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\"" Jan 13 20:38:36.051062 containerd[1497]: time="2025-01-13T20:38:36.050839314Z" level=info msg="StartContainer for \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\"" Jan 13 20:38:36.088281 systemd[1]: Started cri-containerd-a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d.scope - libcontainer container a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d. Jan 13 20:38:36.149479 systemd[1]: cri-containerd-a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d.scope: Deactivated successfully. 
Jan 13 20:38:36.497304 containerd[1497]: time="2025-01-13T20:38:36.497254314Z" level=info msg="StartContainer for \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\" returns successfully" Jan 13 20:38:36.818097 containerd[1497]: time="2025-01-13T20:38:36.817912252Z" level=info msg="shim disconnected" id=a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d namespace=k8s.io Jan 13 20:38:36.818097 containerd[1497]: time="2025-01-13T20:38:36.817976782Z" level=warning msg="cleaning up after shim disconnected" id=a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d namespace=k8s.io Jan 13 20:38:36.818097 containerd[1497]: time="2025-01-13T20:38:36.817985178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:36.942672 kubelet[2626]: E0113 20:38:36.942634 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:36.944395 containerd[1497]: time="2025-01-13T20:38:36.944354111Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:38:37.014789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d-rootfs.mount: Deactivated successfully. Jan 13 20:38:37.183652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260612512.mount: Deactivated successfully. Jan 13 20:38:37.327445 containerd[1497]: time="2025-01-13T20:38:37.327391671Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\"" Jan 13 20:38:37.329806 containerd[1497]: time="2025-01-13T20:38:37.329775118Z" level=info msg="StartContainer for \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\"" Jan 13 20:38:37.361232 systemd[1]: Started cri-containerd-9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32.scope - libcontainer container 9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32. Jan 13 20:38:37.444226 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:38:37.444552 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:38:37.444633 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:38:37.448314 containerd[1497]: time="2025-01-13T20:38:37.447874920Z" level=info msg="StartContainer for \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\" returns successfully" Jan 13 20:38:37.451492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:38:37.451747 systemd[1]: cri-containerd-9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32.scope: Deactivated successfully. Jan 13 20:38:37.473352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:38:37.562279 containerd[1497]: time="2025-01-13T20:38:37.562211303Z" level=info msg="shim disconnected" id=9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32 namespace=k8s.io Jan 13 20:38:37.562279 containerd[1497]: time="2025-01-13T20:38:37.562269322Z" level=warning msg="cleaning up after shim disconnected" id=9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32 namespace=k8s.io Jan 13 20:38:37.562279 containerd[1497]: time="2025-01-13T20:38:37.562277247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:37.945701 kubelet[2626]: E0113 20:38:37.945645 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:37.947693 containerd[1497]: time="2025-01-13T20:38:37.947609585Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:38:38.015244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32-rootfs.mount: Deactivated successfully. Jan 13 20:38:38.734259 containerd[1497]: time="2025-01-13T20:38:38.734193078Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\"" Jan 13 20:38:38.734899 containerd[1497]: time="2025-01-13T20:38:38.734829042Z" level=info msg="StartContainer for \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\"" Jan 13 20:38:38.770299 systemd[1]: Started cri-containerd-39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28.scope - libcontainer container 39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28. Jan 13 20:38:38.802227 systemd[1]: cri-containerd-39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28.scope: Deactivated successfully. Jan 13 20:38:38.817757 containerd[1497]: time="2025-01-13T20:38:38.817704912Z" level=info msg="StartContainer for \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\" returns successfully" Jan 13 20:38:38.949835 kubelet[2626]: E0113 20:38:38.949744 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:38.954869 containerd[1497]: time="2025-01-13T20:38:38.954808431Z" level=info msg="shim disconnected" id=39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28 namespace=k8s.io Jan 13 20:38:38.954869 containerd[1497]: time="2025-01-13T20:38:38.954858986Z" level=warning msg="cleaning up after shim disconnected" id=39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28 namespace=k8s.io Jan 13 20:38:38.954869 containerd[1497]: time="2025-01-13T20:38:38.954867662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:39.015862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28-rootfs.mount: Deactivated successfully. 
Jan 13 20:38:39.953620 kubelet[2626]: E0113 20:38:39.953556 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:39.955836 containerd[1497]: time="2025-01-13T20:38:39.955506721Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:38:40.348641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010687237.mount: Deactivated successfully. Jan 13 20:38:40.465768 containerd[1497]: time="2025-01-13T20:38:40.465713784Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\"" Jan 13 20:38:40.466264 containerd[1497]: time="2025-01-13T20:38:40.466243759Z" level=info msg="StartContainer for \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\"" Jan 13 20:38:40.502315 systemd[1]: Started cri-containerd-a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11.scope - libcontainer container a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11. Jan 13 20:38:40.525550 systemd[1]: cri-containerd-a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11.scope: Deactivated successfully. Jan 13 20:38:40.603608 containerd[1497]: time="2025-01-13T20:38:40.603489082Z" level=info msg="StartContainer for \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\" returns successfully" Jan 13 20:38:40.622979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11-rootfs.mount: Deactivated successfully. Jan 13 20:38:40.715102 containerd[1497]: time="2025-01-13T20:38:40.715009821Z" level=info msg="shim disconnected" id=a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11 namespace=k8s.io Jan 13 20:38:40.715102 containerd[1497]: time="2025-01-13T20:38:40.715069503Z" level=warning msg="cleaning up after shim disconnected" id=a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11 namespace=k8s.io Jan 13 20:38:40.715102 containerd[1497]: time="2025-01-13T20:38:40.715099309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:38:40.957820 kubelet[2626]: E0113 20:38:40.957768 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:40.959967 containerd[1497]: time="2025-01-13T20:38:40.959927927Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:38:41.248330 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:33678.service - OpenSSH per-connection server daemon (10.0.0.1:33678). 
Jan 13 20:38:41.419191 containerd[1497]: time="2025-01-13T20:38:41.419118332Z" level=info msg="CreateContainer within sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\"" Jan 13 20:38:41.421225 containerd[1497]: time="2025-01-13T20:38:41.421164945Z" level=info msg="StartContainer for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\"" Jan 13 20:38:41.448184 systemd[1]: run-containerd-runc-k8s.io-26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39-runc.Ga5gYC.mount: Deactivated successfully. Jan 13 20:38:41.458271 systemd[1]: Started cri-containerd-26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39.scope - libcontainer container 26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39. Jan 13 20:38:41.465349 sshd[3315]: Accepted publickey for core from 10.0.0.1 port 33678 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:41.467243 sshd-session[3315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:41.476353 systemd-logind[1480]: New session 10 of user core. Jan 13 20:38:41.488437 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:38:41.542918 containerd[1497]: time="2025-01-13T20:38:41.542758957Z" level=info msg="StartContainer for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" returns successfully" Jan 13 20:38:41.626780 kubelet[2626]: I0113 20:38:41.626728 2626 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:38:41.692747 sshd[3345]: Connection closed by 10.0.0.1 port 33678 Jan 13 20:38:41.696286 sshd-session[3315]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:41.702760 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:33678.service: Deactivated successfully. Jan 13 20:38:41.706670 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:38:41.709629 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:38:41.712477 systemd-logind[1480]: Removed session 10. Jan 13 20:38:41.717565 systemd[1]: Created slice kubepods-burstable-pod87e404c5_909b_4ff1_8aac_b6499cdce3e5.slice - libcontainer container kubepods-burstable-pod87e404c5_909b_4ff1_8aac_b6499cdce3e5.slice. Jan 13 20:38:41.723191 systemd[1]: Created slice kubepods-burstable-podfd3bcfd2_13f2_4df3_b03c_dcf1fb9ee912.slice - libcontainer container kubepods-burstable-podfd3bcfd2_13f2_4df3_b03c_dcf1fb9ee912.slice. 
Jan 13 20:38:41.835148 kubelet[2626]: I0113 20:38:41.834977 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87e404c5-909b-4ff1-8aac-b6499cdce3e5-config-volume\") pod \"coredns-6f6b679f8f-q9hnb\" (UID: \"87e404c5-909b-4ff1-8aac-b6499cdce3e5\") " pod="kube-system/coredns-6f6b679f8f-q9hnb" Jan 13 20:38:41.835148 kubelet[2626]: I0113 20:38:41.835056 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd3bcfd2-13f2-4df3-b03c-dcf1fb9ee912-config-volume\") pod \"coredns-6f6b679f8f-6zlsb\" (UID: \"fd3bcfd2-13f2-4df3-b03c-dcf1fb9ee912\") " pod="kube-system/coredns-6f6b679f8f-6zlsb" Jan 13 20:38:41.835148 kubelet[2626]: I0113 20:38:41.835114 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdsmt\" (UniqueName: \"kubernetes.io/projected/87e404c5-909b-4ff1-8aac-b6499cdce3e5-kube-api-access-bdsmt\") pod \"coredns-6f6b679f8f-q9hnb\" (UID: \"87e404c5-909b-4ff1-8aac-b6499cdce3e5\") " pod="kube-system/coredns-6f6b679f8f-q9hnb" Jan 13 20:38:41.835148 kubelet[2626]: I0113 20:38:41.835131 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9n78\" (UniqueName: \"kubernetes.io/projected/fd3bcfd2-13f2-4df3-b03c-dcf1fb9ee912-kube-api-access-m9n78\") pod \"coredns-6f6b679f8f-6zlsb\" (UID: \"fd3bcfd2-13f2-4df3-b03c-dcf1fb9ee912\") " pod="kube-system/coredns-6f6b679f8f-6zlsb" Jan 13 20:38:41.962104 kubelet[2626]: E0113 20:38:41.962011 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:42.020505 kubelet[2626]: E0113 20:38:42.020456 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:42.021207 containerd[1497]: time="2025-01-13T20:38:42.021039576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9hnb,Uid:87e404c5-909b-4ff1-8aac-b6499cdce3e5,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:42.026370 kubelet[2626]: E0113 20:38:42.026346 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:42.026766 containerd[1497]: time="2025-01-13T20:38:42.026723848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6zlsb,Uid:fd3bcfd2-13f2-4df3-b03c-dcf1fb9ee912,Namespace:kube-system,Attempt:0,}" Jan 13 20:38:42.041937 kubelet[2626]: I0113 20:38:42.041847 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lb9kb" podStartSLOduration=8.095824106 podStartE2EDuration="26.04183077s" podCreationTimestamp="2025-01-13 20:38:16 +0000 UTC" firstStartedPulling="2025-01-13 20:38:18.020864492 +0000 UTC m=+7.455028467" lastFinishedPulling="2025-01-13 20:38:35.966871156 +0000 UTC m=+25.401035131" observedRunningTime="2025-01-13 20:38:42.041280787 +0000 UTC m=+31.475444762" watchObservedRunningTime="2025-01-13 20:38:42.04183077 +0000 UTC m=+31.475994745" Jan 13 20:38:42.963561 kubelet[2626]: E0113 20:38:42.963525 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:43.788549 systemd-networkd[1407]: cilium_host: Link UP Jan 13 20:38:43.788722 systemd-networkd[1407]: cilium_net: Link UP Jan 13 20:38:43.788926 systemd-networkd[1407]: cilium_net: Gained carrier Jan 13 20:38:43.789123 systemd-networkd[1407]: cilium_host: Gained carrier Jan 13 20:38:43.789287 systemd-networkd[1407]: cilium_net: Gained IPv6LL Jan 13 20:38:43.789514 systemd-networkd[1407]: cilium_host: Gained IPv6LL Jan 13 20:38:43.903696 systemd-networkd[1407]: cilium_vxlan: Link UP Jan 13 20:38:43.903711 systemd-networkd[1407]: cilium_vxlan: Gained carrier Jan 13 20:38:43.965103 kubelet[2626]: E0113 20:38:43.965047 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:44.140114 kernel: NET: Registered PF_ALG protocol family Jan 13 20:38:44.855734 systemd-networkd[1407]: lxc_health: Link UP Jan 13 20:38:44.866614 systemd-networkd[1407]: lxc_health: Gained carrier Jan 13 20:38:45.014450 systemd-networkd[1407]: lxc96d652ab8da4: Link UP Jan 13 20:38:45.025130 kernel: eth0: renamed from tmp8bc48 Jan 13 20:38:45.033664 systemd-networkd[1407]: lxc96d652ab8da4: Gained carrier Jan 13 20:38:45.037800 systemd-networkd[1407]: lxc645dfdea3295: Link UP Jan 13 20:38:45.048109 kernel: eth0: renamed from tmp33746 Jan 13 20:38:45.057787 systemd-networkd[1407]: lxc645dfdea3295: Gained carrier Jan 13 20:38:45.408326 systemd-networkd[1407]: cilium_vxlan: Gained IPv6LL Jan 13 20:38:45.930855 kubelet[2626]: E0113 20:38:45.930803 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:45.968689 kubelet[2626]: E0113 20:38:45.968649 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:45.984242 systemd-networkd[1407]: lxc_health: Gained IPv6LL Jan 13 20:38:46.304241 systemd-networkd[1407]: lxc96d652ab8da4: Gained IPv6LL Jan 13 20:38:46.705688 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:33694.service - OpenSSH per-connection server daemon (10.0.0.1:33694). Jan 13 20:38:46.752465 sshd[3848]: Accepted publickey for core from 10.0.0.1 port 33694 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:46.754434 sshd-session[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:46.758183 systemd-logind[1480]: New session 11 of user core. Jan 13 20:38:46.767211 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:38:46.884454 sshd[3850]: Connection closed by 10.0.0.1 port 33694 Jan 13 20:38:46.884820 sshd-session[3848]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:46.888550 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:33694.service: Deactivated successfully. Jan 13 20:38:46.890557 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:38:46.891239 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:38:46.892071 systemd-logind[1480]: Removed session 11. 
Jan 13 20:38:46.970818 kubelet[2626]: E0113 20:38:46.970667 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:47.074257 systemd-networkd[1407]: lxc645dfdea3295: Gained IPv6LL Jan 13 20:38:48.921238 containerd[1497]: time="2025-01-13T20:38:48.921012649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:48.921238 containerd[1497]: time="2025-01-13T20:38:48.921102418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:48.921238 containerd[1497]: time="2025-01-13T20:38:48.921118067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:48.922452 containerd[1497]: time="2025-01-13T20:38:48.922140957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:38:48.922452 containerd[1497]: time="2025-01-13T20:38:48.922219785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:38:48.922452 containerd[1497]: time="2025-01-13T20:38:48.922242377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:48.922452 containerd[1497]: time="2025-01-13T20:38:48.922346122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:48.922742 containerd[1497]: time="2025-01-13T20:38:48.922648680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:38:48.948566 systemd[1]: run-containerd-runc-k8s.io-8bc488b12ec4ca25b2878e73bcb72009c2b107e370028bced4eac3920f7c43c6-runc.OqudgB.mount: Deactivated successfully. Jan 13 20:38:48.966222 systemd[1]: Started cri-containerd-3374657c90ab199676427a87a9500307887ef0dbb9d01d181e9aa0e52b1ff50a.scope - libcontainer container 3374657c90ab199676427a87a9500307887ef0dbb9d01d181e9aa0e52b1ff50a. Jan 13 20:38:48.967763 systemd[1]: Started cri-containerd-8bc488b12ec4ca25b2878e73bcb72009c2b107e370028bced4eac3920f7c43c6.scope - libcontainer container 8bc488b12ec4ca25b2878e73bcb72009c2b107e370028bced4eac3920f7c43c6. 
Jan 13 20:38:48.980153 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:38:48.982693 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:38:49.010278 containerd[1497]: time="2025-01-13T20:38:49.010228294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6zlsb,Uid:fd3bcfd2-13f2-4df3-b03c-dcf1fb9ee912,Namespace:kube-system,Attempt:0,} returns sandbox id \"3374657c90ab199676427a87a9500307887ef0dbb9d01d181e9aa0e52b1ff50a\"" Jan 13 20:38:49.012324 kubelet[2626]: E0113 20:38:49.012045 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:49.014284 containerd[1497]: time="2025-01-13T20:38:49.014230715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q9hnb,Uid:87e404c5-909b-4ff1-8aac-b6499cdce3e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bc488b12ec4ca25b2878e73bcb72009c2b107e370028bced4eac3920f7c43c6\"" Jan 13 20:38:49.015025 kubelet[2626]: E0113 20:38:49.014988 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:49.016301 containerd[1497]: time="2025-01-13T20:38:49.016224376Z" level=info msg="CreateContainer within sandbox \"3374657c90ab199676427a87a9500307887ef0dbb9d01d181e9aa0e52b1ff50a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:38:49.016554 containerd[1497]: time="2025-01-13T20:38:49.016511766Z" level=info msg="CreateContainer within sandbox \"8bc488b12ec4ca25b2878e73bcb72009c2b107e370028bced4eac3920f7c43c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:38:49.344105 containerd[1497]: time="2025-01-13T20:38:49.343952481Z" level=info msg="CreateContainer within sandbox \"8bc488b12ec4ca25b2878e73bcb72009c2b107e370028bced4eac3920f7c43c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"553da7de69769fb24635b8119a11f14989968da09a3e3daa843107568393b465\"" Jan 13 20:38:49.344850 containerd[1497]: time="2025-01-13T20:38:49.344804340Z" level=info msg="StartContainer for \"553da7de69769fb24635b8119a11f14989968da09a3e3daa843107568393b465\"" Jan 13 20:38:49.349106 containerd[1497]: time="2025-01-13T20:38:49.349040169Z" level=info msg="CreateContainer within sandbox \"3374657c90ab199676427a87a9500307887ef0dbb9d01d181e9aa0e52b1ff50a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3189f49d4309b45884a498eec43c9e20ae631003cb4449d1096005ba68ac68cd\"" Jan 13 20:38:49.349694 containerd[1497]: time="2025-01-13T20:38:49.349567909Z" level=info msg="StartContainer for \"3189f49d4309b45884a498eec43c9e20ae631003cb4449d1096005ba68ac68cd\"" Jan 13 20:38:49.375248 systemd[1]: Started cri-containerd-553da7de69769fb24635b8119a11f14989968da09a3e3daa843107568393b465.scope - libcontainer container 553da7de69769fb24635b8119a11f14989968da09a3e3daa843107568393b465. Jan 13 20:38:49.378536 systemd[1]: Started cri-containerd-3189f49d4309b45884a498eec43c9e20ae631003cb4449d1096005ba68ac68cd.scope - libcontainer container 3189f49d4309b45884a498eec43c9e20ae631003cb4449d1096005ba68ac68cd. 
Jan 13 20:38:49.424566 containerd[1497]: time="2025-01-13T20:38:49.416662305Z" level=info msg="StartContainer for \"3189f49d4309b45884a498eec43c9e20ae631003cb4449d1096005ba68ac68cd\" returns successfully" Jan 13 20:38:49.424758 containerd[1497]: time="2025-01-13T20:38:49.416663708Z" level=info msg="StartContainer for \"553da7de69769fb24635b8119a11f14989968da09a3e3daa843107568393b465\" returns successfully" Jan 13 20:38:49.978595 kubelet[2626]: E0113 20:38:49.978562 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:49.980562 kubelet[2626]: E0113 20:38:49.980481 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:50.000349 kubelet[2626]: I0113 20:38:49.998601 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q9hnb" podStartSLOduration=33.998579836 podStartE2EDuration="33.998579836s" podCreationTimestamp="2025-01-13 20:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:49.998259205 +0000 UTC m=+39.432423180" watchObservedRunningTime="2025-01-13 20:38:49.998579836 +0000 UTC m=+39.432743811" Jan 13 20:38:50.000349 kubelet[2626]: I0113 20:38:49.998710 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6zlsb" podStartSLOduration=33.998704641 podStartE2EDuration="33.998704641s" podCreationTimestamp="2025-01-13 20:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:38:49.988969463 +0000 UTC m=+39.423133438" watchObservedRunningTime="2025-01-13 20:38:49.998704641 +0000 UTC m=+39.432868617" Jan 13 20:38:50.983427 kubelet[2626]: E0113 20:38:50.983388 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:50.983427 kubelet[2626]: E0113 20:38:50.983448 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:51.899452 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:39762.service - OpenSSH per-connection server daemon (10.0.0.1:39762). Jan 13 20:38:51.950217 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 39762 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:51.952302 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:51.957051 systemd-logind[1480]: New session 12 of user core. Jan 13 20:38:51.968332 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 20:38:51.984871 kubelet[2626]: E0113 20:38:51.984839 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:51.985369 kubelet[2626]: E0113 20:38:51.984987 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:38:52.130952 sshd[4045]: Connection closed by 10.0.0.1 port 39762 Jan 13 20:38:52.131325 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:52.135062 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:39762.service: Deactivated successfully. Jan 13 20:38:52.137060 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:38:52.137668 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:38:52.138614 systemd-logind[1480]: Removed session 12. Jan 13 20:38:57.147496 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:39768.service - OpenSSH per-connection server daemon (10.0.0.1:39768). Jan 13 20:38:57.189776 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 39768 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:57.191226 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:57.195020 systemd-logind[1480]: New session 13 of user core. Jan 13 20:38:57.205274 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:38:57.349122 sshd[4060]: Connection closed by 10.0.0.1 port 39768 Jan 13 20:38:57.349622 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:57.364405 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:39768.service: Deactivated successfully. Jan 13 20:38:57.366467 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:38:57.368060 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:38:57.385428 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780). Jan 13 20:38:57.386494 systemd-logind[1480]: Removed session 13. Jan 13 20:38:57.426052 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:57.427546 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:57.431356 systemd-logind[1480]: New session 14 of user core. Jan 13 20:38:57.439223 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:38:57.671517 sshd[4075]: Connection closed by 10.0.0.1 port 39780 Jan 13 20:38:57.671985 sshd-session[4073]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:57.680509 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:39780.service: Deactivated successfully. Jan 13 20:38:57.682603 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:38:57.684520 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:38:57.691590 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:39790.service - OpenSSH per-connection server daemon (10.0.0.1:39790). Jan 13 20:38:57.692661 systemd-logind[1480]: Removed session 14. 
Jan 13 20:38:57.741736 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 39790 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:57.743687 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:57.747831 systemd-logind[1480]: New session 15 of user core. Jan 13 20:38:57.758216 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:38:57.884893 sshd[4087]: Connection closed by 10.0.0.1 port 39790 Jan 13 20:38:57.885337 sshd-session[4085]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:57.890642 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:39790.service: Deactivated successfully. Jan 13 20:38:57.893457 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:38:57.894623 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:38:57.896805 systemd-logind[1480]: Removed session 15. Jan 13 20:39:02.898609 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:54976.service - OpenSSH per-connection server daemon (10.0.0.1:54976). Jan 13 20:39:02.945726 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:02.947492 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:02.952212 systemd-logind[1480]: New session 16 of user core. Jan 13 20:39:02.966374 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:39:03.085513 sshd[4102]: Connection closed by 10.0.0.1 port 54976 Jan 13 20:39:03.085878 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:03.089675 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:54976.service: Deactivated successfully. Jan 13 20:39:03.091668 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:39:03.092341 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:39:03.093301 systemd-logind[1480]: Removed session 16. Jan 13 20:39:08.099421 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:54978.service - OpenSSH per-connection server daemon (10.0.0.1:54978). Jan 13 20:39:08.145649 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 54978 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:08.147648 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:08.152360 systemd-logind[1480]: New session 17 of user core. Jan 13 20:39:08.166392 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:39:08.301255 sshd[4118]: Connection closed by 10.0.0.1 port 54978 Jan 13 20:39:08.301618 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:08.304515 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:54978.service: Deactivated successfully. Jan 13 20:39:08.306414 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:39:08.308187 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:39:08.309176 systemd-logind[1480]: Removed session 17. Jan 13 20:39:13.313374 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136). 
Jan 13 20:39:13.355919 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:13.357387 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:13.361695 systemd-logind[1480]: New session 18 of user core. Jan 13 20:39:13.370235 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:39:13.489395 sshd[4134]: Connection closed by 10.0.0.1 port 39136 Jan 13 20:39:13.489744 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:13.498224 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:39136.service: Deactivated successfully. Jan 13 20:39:13.500632 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:39:13.502282 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:39:13.508318 systemd[1]: Started sshd@18-10.0.0.63:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150). Jan 13 20:39:13.509281 systemd-logind[1480]: Removed session 18. Jan 13 20:39:13.548454 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:13.550451 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:13.554986 systemd-logind[1480]: New session 19 of user core. Jan 13 20:39:13.566255 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:39:13.802658 sshd[4148]: Connection closed by 10.0.0.1 port 39150 Jan 13 20:39:13.803168 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:13.811200 systemd[1]: sshd@18-10.0.0.63:22-10.0.0.1:39150.service: Deactivated successfully. Jan 13 20:39:13.813131 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:39:13.814672 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:39:13.821466 systemd[1]: Started sshd@19-10.0.0.63:22-10.0.0.1:39154.service - OpenSSH per-connection server daemon (10.0.0.1:39154). Jan 13 20:39:13.822430 systemd-logind[1480]: Removed session 19. Jan 13 20:39:13.863407 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 39154 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:13.864829 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:13.869314 systemd-logind[1480]: New session 20 of user core. Jan 13 20:39:13.880252 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:39:15.709148 sshd[4160]: Connection closed by 10.0.0.1 port 39154 Jan 13 20:39:15.709520 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:15.719353 systemd[1]: sshd@19-10.0.0.63:22-10.0.0.1:39154.service: Deactivated successfully. Jan 13 20:39:15.721588 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:39:15.724359 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:39:15.734520 systemd[1]: Started sshd@20-10.0.0.63:22-10.0.0.1:39162.service - OpenSSH per-connection server daemon (10.0.0.1:39162). Jan 13 20:39:15.736284 systemd-logind[1480]: Removed session 20. 
Jan 13 20:39:15.774044 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 39162 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:15.775624 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:15.779752 systemd-logind[1480]: New session 21 of user core. Jan 13 20:39:15.787200 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:39:16.039804 sshd[4186]: Connection closed by 10.0.0.1 port 39162 Jan 13 20:39:16.040363 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:16.051752 systemd[1]: sshd@20-10.0.0.63:22-10.0.0.1:39162.service: Deactivated successfully. Jan 13 20:39:16.054112 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:39:16.057073 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:39:16.075577 systemd[1]: Started sshd@21-10.0.0.63:22-10.0.0.1:39178.service - OpenSSH per-connection server daemon (10.0.0.1:39178). Jan 13 20:39:16.076776 systemd-logind[1480]: Removed session 21. Jan 13 20:39:16.114880 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 39178 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:16.116816 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:16.123236 systemd-logind[1480]: New session 22 of user core. Jan 13 20:39:16.133420 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:39:16.260881 sshd[4198]: Connection closed by 10.0.0.1 port 39178 Jan 13 20:39:16.261303 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:16.266346 systemd[1]: sshd@21-10.0.0.63:22-10.0.0.1:39178.service: Deactivated successfully. Jan 13 20:39:16.268424 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:39:16.269243 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:39:16.270377 systemd-logind[1480]: Removed session 22. Jan 13 20:39:21.277823 systemd[1]: Started sshd@22-10.0.0.63:22-10.0.0.1:40782.service - OpenSSH per-connection server daemon (10.0.0.1:40782). Jan 13 20:39:21.320710 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 40782 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:21.322348 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:21.326553 systemd-logind[1480]: New session 23 of user core. Jan 13 20:39:21.343299 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:39:21.458524 sshd[4216]: Connection closed by 10.0.0.1 port 40782 Jan 13 20:39:21.458926 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:21.462827 systemd[1]: sshd@22-10.0.0.63:22-10.0.0.1:40782.service: Deactivated successfully. Jan 13 20:39:21.465112 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:39:21.465823 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:39:21.466670 systemd-logind[1480]: Removed session 23. Jan 13 20:39:25.645391 kubelet[2626]: E0113 20:39:25.645331 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:26.470542 systemd[1]: Started sshd@23-10.0.0.63:22-10.0.0.1:40794.service - OpenSSH per-connection server daemon (10.0.0.1:40794). 
Jan 13 20:39:26.518834 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:26.520648 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:26.525441 systemd-logind[1480]: New session 24 of user core. Jan 13 20:39:26.536347 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:39:26.646505 kubelet[2626]: E0113 20:39:26.645414 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:26.658181 sshd[4233]: Connection closed by 10.0.0.1 port 40794 Jan 13 20:39:26.658545 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:26.663452 systemd[1]: sshd@23-10.0.0.63:22-10.0.0.1:40794.service: Deactivated successfully. Jan 13 20:39:26.665894 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:39:26.666578 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:39:26.667659 systemd-logind[1480]: Removed session 24. Jan 13 20:39:31.670572 systemd[1]: Started sshd@24-10.0.0.63:22-10.0.0.1:54584.service - OpenSSH per-connection server daemon (10.0.0.1:54584). Jan 13 20:39:31.716468 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 54584 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:31.718182 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:31.722344 systemd-logind[1480]: New session 25 of user core. Jan 13 20:39:31.732351 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:39:31.840407 sshd[4248]: Connection closed by 10.0.0.1 port 54584 Jan 13 20:39:31.840786 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:31.844170 systemd[1]: sshd@24-10.0.0.63:22-10.0.0.1:54584.service: Deactivated successfully. Jan 13 20:39:31.845885 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:39:31.846546 systemd-logind[1480]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:39:31.847510 systemd-logind[1480]: Removed session 25. Jan 13 20:39:36.852376 systemd[1]: Started sshd@25-10.0.0.63:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588). Jan 13 20:39:36.899758 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:36.901578 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:36.906045 systemd-logind[1480]: New session 26 of user core. Jan 13 20:39:36.915279 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:39:37.025758 sshd[4263]: Connection closed by 10.0.0.1 port 54588 Jan 13 20:39:37.026250 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:37.041885 systemd[1]: sshd@25-10.0.0.63:22-10.0.0.1:54588.service: Deactivated successfully. Jan 13 20:39:37.044211 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:39:37.045971 systemd-logind[1480]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:39:37.051333 systemd[1]: Started sshd@26-10.0.0.63:22-10.0.0.1:54602.service - OpenSSH per-connection server daemon (10.0.0.1:54602). Jan 13 20:39:37.052325 systemd-logind[1480]: Removed session 26. 
Jan 13 20:39:37.091095 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 54602 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:37.093034 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:37.097291 systemd-logind[1480]: New session 27 of user core. Jan 13 20:39:37.106258 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:39:38.523457 containerd[1497]: time="2025-01-13T20:39:38.523408817Z" level=info msg="StopContainer for \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" with timeout 30 (s)" Jan 13 20:39:38.541044 containerd[1497]: time="2025-01-13T20:39:38.540981871Z" level=info msg="StopContainer for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" with timeout 2 (s)" Jan 13 20:39:38.541560 containerd[1497]: time="2025-01-13T20:39:38.541507408Z" level=info msg="Stop container \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" with signal terminated" Jan 13 20:39:38.542877 containerd[1497]: time="2025-01-13T20:39:38.542837299Z" level=info msg="Stop container \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" with signal terminated" Jan 13 20:39:38.546139 containerd[1497]: time="2025-01-13T20:39:38.546069058Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:39:38.551113 systemd-networkd[1407]: lxc_health: Link DOWN Jan 13 20:39:38.551129 systemd-networkd[1407]: lxc_health: Lost carrier Jan 13 20:39:38.559301 systemd[1]: cri-containerd-e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7.scope: Deactivated successfully. Jan 13 20:39:38.584036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7-rootfs.mount: Deactivated successfully. Jan 13 20:39:38.584905 systemd[1]: cri-containerd-26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39.scope: Deactivated successfully. Jan 13 20:39:38.586098 systemd[1]: cri-containerd-26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39.scope: Consumed 7.603s CPU time. Jan 13 20:39:38.591043 containerd[1497]: time="2025-01-13T20:39:38.590960506Z" level=info msg="shim disconnected" id=e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7 namespace=k8s.io Jan 13 20:39:38.591043 containerd[1497]: time="2025-01-13T20:39:38.591024648Z" level=warning msg="cleaning up after shim disconnected" id=e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7 namespace=k8s.io Jan 13 20:39:38.591043 containerd[1497]: time="2025-01-13T20:39:38.591033885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:38.608425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:38.614432 containerd[1497]: time="2025-01-13T20:39:38.614341173Z" level=info msg="StopContainer for \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" returns successfully" Jan 13 20:39:38.616979 containerd[1497]: time="2025-01-13T20:39:38.616461144Z" level=info msg="shim disconnected" id=26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39 namespace=k8s.io Jan 13 20:39:38.616979 containerd[1497]: time="2025-01-13T20:39:38.616516579Z" level=warning msg="cleaning up after shim disconnected" id=26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39 namespace=k8s.io Jan 13 20:39:38.616979 containerd[1497]: time="2025-01-13T20:39:38.616528201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:38.618534 containerd[1497]: time="2025-01-13T20:39:38.618490742Z" level=info msg="StopPodSandbox for \"9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6\"" Jan 13 20:39:38.633978 containerd[1497]: time="2025-01-13T20:39:38.618551006Z" level=info msg="Container to stop \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:38.636503 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6-shm.mount: Deactivated successfully. Jan 13 20:39:38.641835 systemd[1]: cri-containerd-9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6.scope: Deactivated successfully. Jan 13 20:39:38.652696 containerd[1497]: time="2025-01-13T20:39:38.652656652Z" level=info msg="StopContainer for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" returns successfully" Jan 13 20:39:38.653053 containerd[1497]: time="2025-01-13T20:39:38.653029890Z" level=info msg="StopPodSandbox for \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\"" Jan 13 20:39:38.653158 containerd[1497]: time="2025-01-13T20:39:38.653096195Z" level=info msg="Container to stop \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:38.653158 containerd[1497]: time="2025-01-13T20:39:38.653135570Z" level=info msg="Container to stop \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:38.653158 containerd[1497]: time="2025-01-13T20:39:38.653145980Z" level=info msg="Container to stop \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:38.653158 containerd[1497]: time="2025-01-13T20:39:38.653157441Z" level=info msg="Container to stop \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:38.653315 containerd[1497]: time="2025-01-13T20:39:38.653168292Z" level=info msg="Container to stop \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:39:38.655901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075-shm.mount: Deactivated successfully. Jan 13 20:39:38.660984 systemd[1]: cri-containerd-8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075.scope: Deactivated successfully. 
Jan 13 20:39:38.770788 containerd[1497]: time="2025-01-13T20:39:38.770561106Z" level=info msg="shim disconnected" id=9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6 namespace=k8s.io Jan 13 20:39:38.770788 containerd[1497]: time="2025-01-13T20:39:38.770630598Z" level=warning msg="cleaning up after shim disconnected" id=9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6 namespace=k8s.io Jan 13 20:39:38.770788 containerd[1497]: time="2025-01-13T20:39:38.770642421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:38.771142 containerd[1497]: time="2025-01-13T20:39:38.770904948Z" level=info msg="shim disconnected" id=8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075 namespace=k8s.io Jan 13 20:39:38.771142 containerd[1497]: time="2025-01-13T20:39:38.770933793Z" level=warning msg="cleaning up after shim disconnected" id=8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075 namespace=k8s.io Jan 13 20:39:38.771142 containerd[1497]: time="2025-01-13T20:39:38.770945315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:38.786905 containerd[1497]: time="2025-01-13T20:39:38.786668902Z" level=info msg="TearDown network for sandbox \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" successfully" Jan 13 20:39:38.786905 containerd[1497]: time="2025-01-13T20:39:38.786704068Z" level=info msg="StopPodSandbox for \"8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075\" returns successfully" Jan 13 20:39:38.786905 containerd[1497]: time="2025-01-13T20:39:38.786864492Z" level=info msg="TearDown network for sandbox \"9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6\" successfully" Jan 13 20:39:38.786905 containerd[1497]: time="2025-01-13T20:39:38.786882306Z" level=info msg="StopPodSandbox for \"9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6\" returns successfully" Jan 13 20:39:38.868547 kubelet[2626]: I0113 20:39:38.868482 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-xtables-lock\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.868547 kubelet[2626]: I0113 20:39:38.868542 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbhm6\" (UniqueName: \"kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-kube-api-access-fbhm6\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.868547 kubelet[2626]: I0113 20:39:38.868561 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-run\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869176 kubelet[2626]: I0113 20:39:38.868576 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-kernel\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869176 kubelet[2626]: I0113 20:39:38.868592 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-etc-cni-netd\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869176 kubelet[2626]: I0113 20:39:38.868610 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-lib-modules\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869176 kubelet[2626]: I0113 20:39:38.868625 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-bpf-maps\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869176 kubelet[2626]: I0113 20:39:38.868641 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-net\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869176 kubelet[2626]: I0113 20:39:38.868655 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cni-path\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869427 kubelet[2626]: I0113 20:39:38.868670 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-cgroup\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869427 kubelet[2626]: I0113 20:39:38.868657 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.869427 kubelet[2626]: I0113 20:39:38.868691 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skt49\" (UniqueName: \"kubernetes.io/projected/f556f878-838f-42db-ae14-a2ce81aa22fc-kube-api-access-skt49\") pod \"f556f878-838f-42db-ae14-a2ce81aa22fc\" (UID: \"f556f878-838f-42db-ae14-a2ce81aa22fc\") " Jan 13 20:39:38.869427 kubelet[2626]: I0113 20:39:38.868774 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hostproc\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.869427 kubelet[2626]: I0113 20:39:38.868820 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f556f878-838f-42db-ae14-a2ce81aa22fc-cilium-config-path\") pod \"f556f878-838f-42db-ae14-a2ce81aa22fc\" (UID: \"f556f878-838f-42db-ae14-a2ce81aa22fc\") " Jan 13 20:39:38.869427 kubelet[2626]: I0113 20:39:38.868847 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hubble-tls\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.871963 kubelet[2626]: I0113 20:39:38.868871 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-config-path\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.871963 kubelet[2626]: I0113 20:39:38.868927 2626 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.871963 kubelet[2626]: I0113 20:39:38.869654 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.871963 kubelet[2626]: I0113 20:39:38.869688 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.871963 kubelet[2626]: I0113 20:39:38.869701 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.872124 kubelet[2626]: I0113 20:39:38.869718 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.872273 kubelet[2626]: I0113 20:39:38.872210 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.872345 kubelet[2626]: I0113 20:39:38.872273 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.872345 kubelet[2626]: I0113 20:39:38.872295 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.872345 kubelet[2626]: I0113 20:39:38.872314 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.872345 kubelet[2626]: I0113 20:39:38.872336 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:39:38.874106 kubelet[2626]: I0113 20:39:38.873946 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-kube-api-access-fbhm6" (OuterVolumeSpecName: "kube-api-access-fbhm6") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "kube-api-access-fbhm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:39:38.874106 kubelet[2626]: I0113 20:39:38.874056 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f556f878-838f-42db-ae14-a2ce81aa22fc-kube-api-access-skt49" (OuterVolumeSpecName: "kube-api-access-skt49") pod "f556f878-838f-42db-ae14-a2ce81aa22fc" (UID: "f556f878-838f-42db-ae14-a2ce81aa22fc"). InnerVolumeSpecName "kube-api-access-skt49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:39:38.874222 kubelet[2626]: I0113 20:39:38.874135 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f556f878-838f-42db-ae14-a2ce81aa22fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f556f878-838f-42db-ae14-a2ce81aa22fc" (UID: "f556f878-838f-42db-ae14-a2ce81aa22fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:39:38.874402 kubelet[2626]: I0113 20:39:38.874362 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:39:38.875001 kubelet[2626]: I0113 20:39:38.874947 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:39:38.969478 kubelet[2626]: I0113 20:39:38.969426 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58a465d2-8934-4385-94f7-ee2aa3ae31a0-clustermesh-secrets\") pod \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\" (UID: \"58a465d2-8934-4385-94f7-ee2aa3ae31a0\") " Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969504 2626 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969516 2626 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969529 2626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fbhm6\" (UniqueName: \"kubernetes.io/projected/58a465d2-8934-4385-94f7-ee2aa3ae31a0-kube-api-access-fbhm6\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969538 2626 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969559 2626 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969568 2626 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969577 2626 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969673 kubelet[2626]: I0113 20:39:38.969586 2626 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969878 kubelet[2626]: I0113 20:39:38.969594 2626 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969878 kubelet[2626]: I0113 20:39:38.969602 2626 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969878 kubelet[2626]: I0113 20:39:38.969611 2626 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969878 kubelet[2626]: I0113 20:39:38.969619 2626 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-skt49\" (UniqueName: \"kubernetes.io/projected/f556f878-838f-42db-ae14-a2ce81aa22fc-kube-api-access-skt49\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969878 kubelet[2626]: I0113 20:39:38.969626 2626 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58a465d2-8934-4385-94f7-ee2aa3ae31a0-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.969878 kubelet[2626]: I0113 20:39:38.969635 2626 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f556f878-838f-42db-ae14-a2ce81aa22fc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:38.972808 kubelet[2626]: I0113 20:39:38.972752 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a465d2-8934-4385-94f7-ee2aa3ae31a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "58a465d2-8934-4385-94f7-ee2aa3ae31a0" (UID: "58a465d2-8934-4385-94f7-ee2aa3ae31a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:39:39.070148 kubelet[2626]: I0113 20:39:39.069943 2626 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58a465d2-8934-4385-94f7-ee2aa3ae31a0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 20:39:39.080072 kubelet[2626]: I0113 20:39:39.080037 2626 scope.go:117] "RemoveContainer" containerID="26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39" Jan 13 20:39:39.082437 containerd[1497]: time="2025-01-13T20:39:39.082385765Z" level=info msg="RemoveContainer for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\"" Jan 13 20:39:39.087964 systemd[1]: Removed slice kubepods-burstable-pod58a465d2_8934_4385_94f7_ee2aa3ae31a0.slice - libcontainer container kubepods-burstable-pod58a465d2_8934_4385_94f7_ee2aa3ae31a0.slice. Jan 13 20:39:39.088089 systemd[1]: kubepods-burstable-pod58a465d2_8934_4385_94f7_ee2aa3ae31a0.slice: Consumed 7.708s CPU time. 
Jan 13 20:39:39.089320 systemd[1]: Removed slice kubepods-besteffort-podf556f878_838f_42db_ae14_a2ce81aa22fc.slice - libcontainer container kubepods-besteffort-podf556f878_838f_42db_ae14_a2ce81aa22fc.slice. Jan 13 20:39:39.180921 containerd[1497]: time="2025-01-13T20:39:39.180784302Z" level=info msg="RemoveContainer for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" returns successfully" Jan 13 20:39:39.181217 kubelet[2626]: I0113 20:39:39.181163 2626 scope.go:117] "RemoveContainer" containerID="a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11" Jan 13 20:39:39.182214 containerd[1497]: time="2025-01-13T20:39:39.182152396Z" level=info msg="RemoveContainer for \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\"" Jan 13 20:39:39.224878 containerd[1497]: time="2025-01-13T20:39:39.224819867Z" level=info msg="RemoveContainer for \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\" returns successfully" Jan 13 20:39:39.225130 kubelet[2626]: I0113 20:39:39.225099 2626 scope.go:117] "RemoveContainer" containerID="39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28" Jan 13 20:39:39.226604 containerd[1497]: time="2025-01-13T20:39:39.226576006Z" level=info msg="RemoveContainer for \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\"" Jan 13 20:39:39.232939 containerd[1497]: time="2025-01-13T20:39:39.232876221Z" level=info msg="RemoveContainer for \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\" returns successfully" Jan 13 20:39:39.233475 kubelet[2626]: I0113 20:39:39.233305 2626 scope.go:117] "RemoveContainer" containerID="9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32" Jan 13 20:39:39.234860 containerd[1497]: time="2025-01-13T20:39:39.234826969Z" level=info msg="RemoveContainer for \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\"" Jan 13 20:39:39.239447 containerd[1497]: time="2025-01-13T20:39:39.239387555Z" level=info msg="RemoveContainer for \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\" returns successfully" Jan 13 20:39:39.239700 kubelet[2626]: I0113 20:39:39.239654 2626 scope.go:117] "RemoveContainer" containerID="a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d" Jan 13 20:39:39.241032 containerd[1497]: time="2025-01-13T20:39:39.240982448Z" level=info msg="RemoveContainer for \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\"" Jan 13 20:39:39.272356 containerd[1497]: time="2025-01-13T20:39:39.272285545Z" level=info msg="RemoveContainer for \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\" returns successfully" Jan 13 20:39:39.272610 kubelet[2626]: I0113 20:39:39.272561 2626 scope.go:117] "RemoveContainer" containerID="26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39" Jan 13 20:39:39.272936 containerd[1497]: time="2025-01-13T20:39:39.272881073Z" level=error msg="ContainerStatus for \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\": not found" Jan 13 20:39:39.280696 kubelet[2626]: E0113 20:39:39.280630 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\": not found" 
containerID="26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39" Jan 13 20:39:39.280899 kubelet[2626]: I0113 20:39:39.280688 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39"} err="failed to get container status \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\": rpc error: code = NotFound desc = an error occurred when try to find container \"26686c23c52b1d56a116bda91c57ca35e451c5aae3d84f88a426ec0493aa0f39\": not found" Jan 13 20:39:39.280899 kubelet[2626]: I0113 20:39:39.280807 2626 scope.go:117] "RemoveContainer" containerID="a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11" Jan 13 20:39:39.281612 containerd[1497]: time="2025-01-13T20:39:39.281123109Z" level=error msg="ContainerStatus for \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\": not found" Jan 13 20:39:39.281808 kubelet[2626]: E0113 20:39:39.281353 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\": not found" containerID="a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11" Jan 13 20:39:39.281808 kubelet[2626]: I0113 20:39:39.281403 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11"} err="failed to get container status \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3679834232571fab6e26b7b7ba8a14530a95c7a54973c75570d592757122e11\": not found" Jan 13 20:39:39.281808 kubelet[2626]: I0113 20:39:39.281438 2626 scope.go:117] "RemoveContainer" containerID="39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28" Jan 13 20:39:39.281808 kubelet[2626]: E0113 20:39:39.281732 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\": not found" containerID="39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28" Jan 13 20:39:39.281808 kubelet[2626]: I0113 20:39:39.281751 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28"} err="failed to get container status \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\": rpc error: code = NotFound desc = an error occurred when try to find container \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\": not found" Jan 13 20:39:39.281808 kubelet[2626]: I0113 20:39:39.281769 2626 scope.go:117] "RemoveContainer" containerID="9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32" Jan 13 20:39:39.282023 containerd[1497]: time="2025-01-13T20:39:39.281644828Z" level=error msg="ContainerStatus for \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39bac8e51757da5f92894525604b54f8fac893df439b97c59941aaf6c6349a28\": not 
found" Jan 13 20:39:39.282167 containerd[1497]: time="2025-01-13T20:39:39.282067950Z" level=error msg="ContainerStatus for \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\": not found" Jan 13 20:39:39.282306 kubelet[2626]: E0113 20:39:39.282277 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\": not found" containerID="9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32" Jan 13 20:39:39.282348 kubelet[2626]: I0113 20:39:39.282302 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32"} err="failed to get container status \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ffd35b1d7a51f573416d5a4888e7fc7dedbead985f51886c6139774a9629c32\": not found" Jan 13 20:39:39.282348 kubelet[2626]: I0113 20:39:39.282327 2626 scope.go:117] "RemoveContainer" containerID="a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d" Jan 13 20:39:39.282501 containerd[1497]: time="2025-01-13T20:39:39.282470914Z" level=error msg="ContainerStatus for \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\": not found" Jan 13 20:39:39.282599 kubelet[2626]: E0113 20:39:39.282582 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\": not found" containerID="a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d" Jan 13 20:39:39.282653 kubelet[2626]: I0113 20:39:39.282600 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d"} err="failed to get container status \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4bb3395eeaaa5378cdb36cdd03d5dd7023b5058ffccf3e18c4b46777046359d\": not found" Jan 13 20:39:39.282653 kubelet[2626]: I0113 20:39:39.282613 2626 scope.go:117] "RemoveContainer" containerID="e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7" Jan 13 20:39:39.283825 containerd[1497]: time="2025-01-13T20:39:39.283763705Z" level=info msg="RemoveContainer for \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\"" Jan 13 20:39:39.288064 containerd[1497]: time="2025-01-13T20:39:39.288007951Z" level=info msg="RemoveContainer for \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" returns successfully" Jan 13 20:39:39.288286 kubelet[2626]: I0113 20:39:39.288251 2626 scope.go:117] "RemoveContainer" containerID="e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7" Jan 13 20:39:39.288614 containerd[1497]: time="2025-01-13T20:39:39.288565147Z" level=error msg="ContainerStatus for 
\"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\": not found" Jan 13 20:39:39.288733 kubelet[2626]: E0113 20:39:39.288707 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\": not found" containerID="e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7" Jan 13 20:39:39.288785 kubelet[2626]: I0113 20:39:39.288736 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7"} err="failed to get container status \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9e29de2bd6918f3ac1ddda86415af1220987c8e9e5711c3a5db13c33a7f61b7\": not found" Jan 13 20:39:39.518255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8453947d006ff1ca9b9a86b8efc95e9735b8d469e6d3c6ad1b595a5ed741e075-rootfs.mount: Deactivated successfully. Jan 13 20:39:39.518378 systemd[1]: var-lib-kubelet-pods-58a465d2\x2d8934\x2d4385\x2d94f7\x2dee2aa3ae31a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:39:39.518479 systemd[1]: var-lib-kubelet-pods-58a465d2\x2d8934\x2d4385\x2d94f7\x2dee2aa3ae31a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:39:39.518576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee23ba358ad6ee883c74a1bb0c34689ece6b4c9d3fe539ad7aba4dea9e0a4f6-rootfs.mount: Deactivated successfully. Jan 13 20:39:39.518669 systemd[1]: var-lib-kubelet-pods-58a465d2\x2d8934\x2d4385\x2d94f7\x2dee2aa3ae31a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbhm6.mount: Deactivated successfully. Jan 13 20:39:39.518765 systemd[1]: var-lib-kubelet-pods-f556f878\x2d838f\x2d42db\x2dae14\x2da2ce81aa22fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskt49.mount: Deactivated successfully. Jan 13 20:39:39.645905 kubelet[2626]: E0113 20:39:39.645454 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:40.426361 sshd[4277]: Connection closed by 10.0.0.1 port 54602 Jan 13 20:39:40.426906 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:40.439917 systemd[1]: sshd@26-10.0.0.63:22-10.0.0.1:54602.service: Deactivated successfully. Jan 13 20:39:40.441958 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:39:40.443523 systemd-logind[1480]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:39:40.450419 systemd[1]: Started sshd@27-10.0.0.63:22-10.0.0.1:54614.service - OpenSSH per-connection server daemon (10.0.0.1:54614). Jan 13 20:39:40.451617 systemd-logind[1480]: Removed session 27. Jan 13 20:39:40.496927 sshd[4433]: Accepted publickey for core from 10.0.0.1 port 54614 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:40.498803 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:40.504245 systemd-logind[1480]: New session 28 of user core. 
Jan 13 20:39:40.513230 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:39:40.648205 kubelet[2626]: I0113 20:39:40.648158 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" path="/var/lib/kubelet/pods/58a465d2-8934-4385-94f7-ee2aa3ae31a0/volumes" Jan 13 20:39:40.649200 kubelet[2626]: I0113 20:39:40.649173 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f556f878-838f-42db-ae14-a2ce81aa22fc" path="/var/lib/kubelet/pods/f556f878-838f-42db-ae14-a2ce81aa22fc/volumes" Jan 13 20:39:40.696571 kubelet[2626]: E0113 20:39:40.696421 2626 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:39:41.208907 sshd[4435]: Connection closed by 10.0.0.1 port 54614 Jan 13 20:39:41.209311 sshd-session[4433]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:41.217894 systemd[1]: sshd@27-10.0.0.63:22-10.0.0.1:54614.service: Deactivated successfully. Jan 13 20:39:41.220487 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:39:41.222585 systemd-logind[1480]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:39:41.231405 systemd[1]: Started sshd@28-10.0.0.63:22-10.0.0.1:40624.service - OpenSSH per-connection server daemon (10.0.0.1:40624). Jan 13 20:39:41.232545 systemd-logind[1480]: Removed session 28. Jan 13 20:39:41.270235 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 40624 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:41.272373 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:41.277964 systemd-logind[1480]: New session 29 of user core. Jan 13 20:39:41.282296 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:39:41.335003 sshd[4451]: Connection closed by 10.0.0.1 port 40624 Jan 13 20:39:41.335372 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:41.348907 systemd[1]: sshd@28-10.0.0.63:22-10.0.0.1:40624.service: Deactivated successfully. Jan 13 20:39:41.350618 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:39:41.352021 systemd-logind[1480]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:39:41.359321 systemd[1]: Started sshd@29-10.0.0.63:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640). Jan 13 20:39:41.360439 systemd-logind[1480]: Removed session 29. Jan 13 20:39:41.399437 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:41.401076 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:41.405602 systemd-logind[1480]: New session 30 of user core. Jan 13 20:39:41.420196 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 13 20:39:41.557767 kubelet[2626]: E0113 20:39:41.556443 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f556f878-838f-42db-ae14-a2ce81aa22fc" containerName="cilium-operator" Jan 13 20:39:41.557767 kubelet[2626]: E0113 20:39:41.556523 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" containerName="mount-cgroup" Jan 13 20:39:41.557767 kubelet[2626]: E0113 20:39:41.556531 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" containerName="clean-cilium-state" Jan 13 20:39:41.557767 kubelet[2626]: E0113 20:39:41.556539 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" containerName="cilium-agent" Jan 13 20:39:41.557767 kubelet[2626]: E0113 20:39:41.556549 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" containerName="apply-sysctl-overwrites" Jan 13 20:39:41.557767 kubelet[2626]: E0113 20:39:41.556558 2626 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" containerName="mount-bpf-fs" Jan 13 20:39:41.557767 kubelet[2626]: I0113 20:39:41.556587 2626 memory_manager.go:354] "RemoveStaleState removing state" podUID="f556f878-838f-42db-ae14-a2ce81aa22fc" containerName="cilium-operator" Jan 13 20:39:41.557767 kubelet[2626]: I0113 20:39:41.556596 2626 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a465d2-8934-4385-94f7-ee2aa3ae31a0" containerName="cilium-agent" Jan 13 20:39:41.568394 systemd[1]: Created slice kubepods-burstable-poda37f1561_67d8_4404_907a_07c7a27ecf84.slice - libcontainer container kubepods-burstable-poda37f1561_67d8_4404_907a_07c7a27ecf84.slice. 
Jan 13 20:39:41.685589 kubelet[2626]: I0113 20:39:41.685490 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-hostproc\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.685589 kubelet[2626]: I0113 20:39:41.685566 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-host-proc-sys-kernel\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.685589 kubelet[2626]: I0113 20:39:41.685600 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-bpf-maps\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686283 kubelet[2626]: I0113 20:39:41.685622 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-cni-path\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686283 kubelet[2626]: I0113 20:39:41.685641 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-etc-cni-netd\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686283 kubelet[2626]: I0113 20:39:41.685668 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a37f1561-67d8-4404-907a-07c7a27ecf84-cilium-config-path\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686283 kubelet[2626]: I0113 20:39:41.685712 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a37f1561-67d8-4404-907a-07c7a27ecf84-clustermesh-secrets\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686283 kubelet[2626]: I0113 20:39:41.685736 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a37f1561-67d8-4404-907a-07c7a27ecf84-hubble-tls\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686283 kubelet[2626]: I0113 20:39:41.685761 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlhv\" (UniqueName: \"kubernetes.io/projected/a37f1561-67d8-4404-907a-07c7a27ecf84-kube-api-access-2dlhv\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686469 kubelet[2626]: I0113 20:39:41.685817 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-host-proc-sys-net\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686469 kubelet[2626]: I0113 20:39:41.685844 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-xtables-lock\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686469 kubelet[2626]: I0113 20:39:41.685868 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-cilium-cgroup\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686469 kubelet[2626]: I0113 20:39:41.685888 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-lib-modules\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686469 kubelet[2626]: I0113 20:39:41.685911 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a37f1561-67d8-4404-907a-07c7a27ecf84-cilium-ipsec-secrets\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.686469 kubelet[2626]: I0113 20:39:41.685933 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a37f1561-67d8-4404-907a-07c7a27ecf84-cilium-run\") pod \"cilium-k9pdz\" (UID: \"a37f1561-67d8-4404-907a-07c7a27ecf84\") " pod="kube-system/cilium-k9pdz" Jan 13 20:39:41.872555 kubelet[2626]: E0113 20:39:41.872432 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:41.873146 containerd[1497]: time="2025-01-13T20:39:41.872978266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9pdz,Uid:a37f1561-67d8-4404-907a-07c7a27ecf84,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:41.996235 containerd[1497]: time="2025-01-13T20:39:41.995669717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:41.996235 containerd[1497]: time="2025-01-13T20:39:41.995922817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:41.996235 containerd[1497]: time="2025-01-13T20:39:41.995940841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:41.996235 containerd[1497]: time="2025-01-13T20:39:41.996062061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:42.026372 systemd[1]: Started cri-containerd-2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a.scope - libcontainer container 2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a. Jan 13 20:39:42.053718 containerd[1497]: time="2025-01-13T20:39:42.053663248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9pdz,Uid:a37f1561-67d8-4404-907a-07c7a27ecf84,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\"" Jan 13 20:39:42.054643 kubelet[2626]: E0113 20:39:42.054611 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:42.056435 containerd[1497]: time="2025-01-13T20:39:42.056399471Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:39:42.078852 containerd[1497]: time="2025-01-13T20:39:42.078762658Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364\"" Jan 13 20:39:42.079459 containerd[1497]: time="2025-01-13T20:39:42.079410335Z" level=info msg="StartContainer for \"2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364\"" Jan 13 20:39:42.108261 systemd[1]: Started cri-containerd-2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364.scope - libcontainer container 2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364. Jan 13 20:39:42.133967 containerd[1497]: time="2025-01-13T20:39:42.133857445Z" level=info msg="StartContainer for \"2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364\" returns successfully" Jan 13 20:39:42.144440 systemd[1]: cri-containerd-2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364.scope: Deactivated successfully. 
Jan 13 20:39:42.179283 containerd[1497]: time="2025-01-13T20:39:42.179191500Z" level=info msg="shim disconnected" id=2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364 namespace=k8s.io Jan 13 20:39:42.179283 containerd[1497]: time="2025-01-13T20:39:42.179253066Z" level=warning msg="cleaning up after shim disconnected" id=2a36837aaa17792a58fc7957c4d3962e9d9f65494681205741fabec69d7ab364 namespace=k8s.io Jan 13 20:39:42.179283 containerd[1497]: time="2025-01-13T20:39:42.179261251Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:42.286230 kubelet[2626]: I0113 20:39:42.286158 2626 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:39:42Z","lastTransitionTime":"2025-01-13T20:39:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:39:43.093698 kubelet[2626]: E0113 20:39:43.093661 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:43.095712 containerd[1497]: time="2025-01-13T20:39:43.095642678Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:39:43.320486 containerd[1497]: time="2025-01-13T20:39:43.320402849Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185\"" Jan 13 20:39:43.321167 containerd[1497]: time="2025-01-13T20:39:43.321058161Z" level=info msg="StartContainer for \"8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185\"" Jan 13 20:39:43.353250 systemd[1]: Started cri-containerd-8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185.scope - libcontainer container 8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185. Jan 13 20:39:43.383482 containerd[1497]: time="2025-01-13T20:39:43.383437853Z" level=info msg="StartContainer for \"8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185\" returns successfully" Jan 13 20:39:43.389114 systemd[1]: cri-containerd-8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185.scope: Deactivated successfully. Jan 13 20:39:43.416417 containerd[1497]: time="2025-01-13T20:39:43.416351353Z" level=info msg="shim disconnected" id=8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185 namespace=k8s.io Jan 13 20:39:43.416417 containerd[1497]: time="2025-01-13T20:39:43.416411126Z" level=warning msg="cleaning up after shim disconnected" id=8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185 namespace=k8s.io Jan 13 20:39:43.416417 containerd[1497]: time="2025-01-13T20:39:43.416420454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:43.791573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8790cbc1f379f7122df14e639653900ac52761f601a7784f1e8ac00e0464f185-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:44.096757 kubelet[2626]: E0113 20:39:44.096625 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:44.098710 containerd[1497]: time="2025-01-13T20:39:44.098556858Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:39:44.127279 containerd[1497]: time="2025-01-13T20:39:44.127222721Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500\"" Jan 13 20:39:44.127807 containerd[1497]: time="2025-01-13T20:39:44.127759798Z" level=info msg="StartContainer for \"b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500\"" Jan 13 20:39:44.162222 systemd[1]: Started cri-containerd-b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500.scope - libcontainer container b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500. Jan 13 20:39:44.196844 containerd[1497]: time="2025-01-13T20:39:44.196752638Z" level=info msg="StartContainer for \"b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500\" returns successfully" Jan 13 20:39:44.197036 systemd[1]: cri-containerd-b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500.scope: Deactivated successfully. Jan 13 20:39:44.224289 containerd[1497]: time="2025-01-13T20:39:44.224195856Z" level=info msg="shim disconnected" id=b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500 namespace=k8s.io Jan 13 20:39:44.224289 containerd[1497]: time="2025-01-13T20:39:44.224262522Z" level=warning msg="cleaning up after shim disconnected" id=b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500 namespace=k8s.io Jan 13 20:39:44.224289 containerd[1497]: time="2025-01-13T20:39:44.224273863Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:44.791765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8e770afcc04854850615a80367ba457286d96e8e192d3f4862dde1db35e4500-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:45.101290 kubelet[2626]: E0113 20:39:45.101134 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:45.102485 containerd[1497]: time="2025-01-13T20:39:45.102455671Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:39:45.320567 containerd[1497]: time="2025-01-13T20:39:45.320484496Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e\"" Jan 13 20:39:45.321347 containerd[1497]: time="2025-01-13T20:39:45.321299700Z" level=info msg="StartContainer for \"7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e\"" Jan 13 20:39:45.363413 systemd[1]: Started cri-containerd-7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e.scope - libcontainer container 7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e. Jan 13 20:39:45.391544 systemd[1]: cri-containerd-7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e.scope: Deactivated successfully. Jan 13 20:39:45.400583 containerd[1497]: time="2025-01-13T20:39:45.400509776Z" level=info msg="StartContainer for \"7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e\" returns successfully" Jan 13 20:39:45.447712 containerd[1497]: time="2025-01-13T20:39:45.447631630Z" level=info msg="shim disconnected" id=7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e namespace=k8s.io Jan 13 20:39:45.447712 containerd[1497]: time="2025-01-13T20:39:45.447691113Z" level=warning msg="cleaning up after shim disconnected" id=7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e namespace=k8s.io Jan 13 20:39:45.447712 containerd[1497]: time="2025-01-13T20:39:45.447699639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:45.698247 kubelet[2626]: E0113 20:39:45.698178 2626 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:39:45.791863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b333614af60974889fd872d677b6908c5e24d48580ccf2bccaec9863001eb8e-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:46.105300 kubelet[2626]: E0113 20:39:46.105173 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:46.107354 containerd[1497]: time="2025-01-13T20:39:46.107311871Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:39:46.243596 containerd[1497]: time="2025-01-13T20:39:46.243509937Z" level=info msg="CreateContainer within sandbox \"2aec2a7e8f0693264763c804e42a0b375eb725f5ac9e9c4e0c1f9b9e8f6b709a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9566b49fd561b6439c4da2876baca9df03336078d4a90b6de81f1f3521b014a3\"" Jan 13 20:39:46.244378 containerd[1497]: time="2025-01-13T20:39:46.244319340Z" level=info msg="StartContainer for \"9566b49fd561b6439c4da2876baca9df03336078d4a90b6de81f1f3521b014a3\"" Jan 13 20:39:46.275294 systemd[1]: Started cri-containerd-9566b49fd561b6439c4da2876baca9df03336078d4a90b6de81f1f3521b014a3.scope - libcontainer container 9566b49fd561b6439c4da2876baca9df03336078d4a90b6de81f1f3521b014a3. Jan 13 20:39:46.309980 containerd[1497]: time="2025-01-13T20:39:46.309798334Z" level=info msg="StartContainer for \"9566b49fd561b6439c4da2876baca9df03336078d4a90b6de81f1f3521b014a3\" returns successfully" Jan 13 20:39:46.645142 kubelet[2626]: E0113 20:39:46.645103 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:46.751127 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 20:39:47.109649 kubelet[2626]: E0113 20:39:47.109527 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:47.123441 kubelet[2626]: I0113 20:39:47.123387 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k9pdz" podStartSLOduration=6.123367589 podStartE2EDuration="6.123367589s" podCreationTimestamp="2025-01-13 20:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:47.1232091 +0000 UTC m=+96.557373095" watchObservedRunningTime="2025-01-13 20:39:47.123367589 +0000 UTC m=+96.557531564" Jan 13 20:39:48.111263 kubelet[2626]: E0113 20:39:48.111223 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:49.113181 kubelet[2626]: E0113 20:39:49.113128 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:50.035171 systemd-networkd[1407]: lxc_health: Link UP Jan 13 20:39:50.049552 systemd-networkd[1407]: lxc_health: Gained carrier Jan 13 20:39:51.715167 systemd-networkd[1407]: lxc_health: Gained IPv6LL Jan 13 20:39:51.875315 kubelet[2626]: E0113 20:39:51.874595 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:52.119942 kubelet[2626]: E0113 20:39:52.119819 2626 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:52.645167 kubelet[2626]: E0113 20:39:52.645115 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:53.121755 kubelet[2626]: E0113 20:39:53.121639 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:55.644868 kubelet[2626]: E0113 20:39:55.644803 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:56.433048 sshd[4459]: Connection closed by 10.0.0.1 port 40640 Jan 13 20:39:56.435307 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:56.438915 systemd[1]: sshd@29-10.0.0.63:22-10.0.0.1:40640.service: Deactivated successfully. Jan 13 20:39:56.441451 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:39:56.442164 systemd-logind[1480]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:39:56.443125 systemd-logind[1480]: Removed session 30.