Feb 13 15:22:29.989450 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 15:22:29.989478 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:22:29.989493 kernel: BIOS-provided physical RAM map: Feb 13 15:22:29.989502 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:22:29.989510 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 15:22:29.989518 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:22:29.989529 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:22:29.989538 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:22:29.989547 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:22:29.989556 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:22:29.989567 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Feb 13 15:22:29.989576 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:22:29.989585 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:22:29.989594 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:22:29.989605 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:22:29.989615 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:22:29.989627 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:22:29.989637 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:22:29.989646 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:22:29.989656 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:22:29.989665 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:22:29.989675 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:22:29.989684 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:22:29.989694 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:22:29.989703 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:22:29.989713 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:22:29.989722 kernel: NX (Execute Disable) protection: active Feb 13 15:22:29.989767 kernel: APIC: Static calls initialized Feb 13 15:22:29.989777 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:22:29.989786 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:22:29.989796 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:22:29.989805 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:22:29.989814 kernel: extended physical RAM map: Feb 13 15:22:29.989824 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:22:29.989859 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Feb 13 15:22:29.989869 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:22:29.989878 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:22:29.989887 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:22:29.989900 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:22:29.989909 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:22:29.989923 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Feb 13 15:22:29.989932 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Feb 13 15:22:29.989943 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Feb 13 15:22:29.989953 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Feb 13 15:22:29.989963 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Feb 13 15:22:29.989976 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:22:29.989986 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:22:29.989996 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:22:29.990006 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:22:29.990017 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:22:29.990027 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:22:29.990037 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:22:29.990047 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:22:29.990057 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:22:29.990070 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:22:29.990080 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:22:29.990090 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:22:29.990100 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:22:29.990111 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:22:29.990121 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:22:29.990131 kernel: efi: EFI v2.7 by EDK II Feb 13 15:22:29.990141 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Feb 13 15:22:29.990151 kernel: random: crng init done Feb 13 15:22:29.990162 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Feb 13 15:22:29.990172 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Feb 13 15:22:29.990184 kernel: secureboot: Secure boot disabled Feb 13 15:22:29.990194 kernel: SMBIOS 2.8 present. 
Feb 13 15:22:29.990205 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Feb 13 15:22:29.990215 kernel: Hypervisor detected: KVM Feb 13 15:22:29.990225 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:22:29.990235 kernel: kvm-clock: using sched offset of 2793852036 cycles Feb 13 15:22:29.990246 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:22:29.990256 kernel: tsc: Detected 2794.748 MHz processor Feb 13 15:22:29.990267 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:22:29.990278 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:22:29.990288 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Feb 13 15:22:29.990302 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:22:29.990312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:22:29.990323 kernel: Using GB pages for direct mapping Feb 13 15:22:29.990333 kernel: ACPI: Early table checksum verification disabled Feb 13 15:22:29.990344 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 15:22:29.990354 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:22:29.990365 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:22:29.990375 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:22:29.990385 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 15:22:29.990398 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:22:29.990409 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:22:29.990419 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:22:29.990430 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:22:29.990440 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:22:29.990450 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 15:22:29.990461 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 15:22:29.990471 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 15:22:29.990484 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 15:22:29.990495 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 15:22:29.990505 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 15:22:29.990515 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 15:22:29.990525 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 15:22:29.990536 kernel: No NUMA configuration found Feb 13 15:22:29.990546 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Feb 13 15:22:29.990557 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Feb 13 15:22:29.990569 kernel: Zone ranges: Feb 13 15:22:29.990580 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:22:29.990595 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Feb 13 15:22:29.990606 kernel: Normal empty Feb 13 15:22:29.990616 kernel: Movable zone start for each node Feb 13 15:22:29.990626 kernel: Early memory node ranges Feb 13 15:22:29.990636 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Feb 13 15:22:29.990647 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 15:22:29.990657 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 15:22:29.990667 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Feb 13 15:22:29.990677 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Feb 13 15:22:29.990691 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Feb 13 15:22:29.990701 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Feb 13 15:22:29.990711 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Feb 13 15:22:29.990721 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Feb 13 15:22:29.990756 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:22:29.990766 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:22:29.990786 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 15:22:29.990799 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:22:29.990810 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Feb 13 15:22:29.990821 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Feb 13 15:22:29.990832 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:22:29.990851 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Feb 13 15:22:29.990865 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Feb 13 15:22:29.990876 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:22:29.990887 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:22:29.990898 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:22:29.990909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:22:29.990922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:22:29.990933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:22:29.990944 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:22:29.990955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:22:29.990967 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:22:29.990978 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:22:29.990988 kernel: TSC deadline timer available Feb 13 15:22:29.990999 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:22:29.991010 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:22:29.991024 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:22:29.991035 kernel: kvm-guest: setup PV sched yield Feb 13 15:22:29.991046 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Feb 13 15:22:29.991057 kernel: Booting paravirtualized kernel on KVM Feb 13 15:22:29.991068 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:22:29.991079 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:22:29.991090 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:22:29.991101 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:22:29.991112 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:22:29.991122 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:22:29.991136 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:22:29.991148 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:22:29.991160 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:22:29.991171 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:22:29.991182 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:22:29.991193 kernel: Fallback order for Node 0: 0 Feb 13 15:22:29.991204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Feb 13 15:22:29.991214 kernel: Policy zone: DMA32 Feb 13 15:22:29.991228 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:22:29.991239 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved) Feb 13 15:22:29.991250 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:22:29.991261 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 15:22:29.991272 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:22:29.991283 kernel: Dynamic Preempt: voluntary Feb 13 15:22:29.991294 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:22:29.991306 kernel: rcu: RCU event tracing is enabled. Feb 13 15:22:29.991317 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:22:29.991331 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:22:29.991342 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:22:29.991353 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:22:29.991364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:22:29.991375 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:22:29.991386 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:22:29.991397 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:22:29.991408 kernel: Console: colour dummy device 80x25 Feb 13 15:22:29.991418 kernel: printk: console [ttyS0] enabled Feb 13 15:22:29.991432 kernel: ACPI: Core revision 20230628 Feb 13 15:22:29.991443 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:22:29.991454 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:22:29.991465 kernel: x2apic enabled Feb 13 15:22:29.991475 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:22:29.991486 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:22:29.991496 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:22:29.991507 kernel: kvm-guest: setup PV IPIs Feb 13 15:22:29.991517 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:22:29.991530 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:22:29.991541 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 15:22:29.991552 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:22:29.991565 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:22:29.991577 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:22:29.991590 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:22:29.991601 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:22:29.991612 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:22:29.991623 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:22:29.991637 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:22:29.991648 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:22:29.991659 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:22:29.991670 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:22:29.991681 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:22:29.991693 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:22:29.991704 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:22:29.991715 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:22:29.991742 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:22:29.991753 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:22:29.991764 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:22:29.991776 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 15:22:29.991787 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:22:29.991798 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:22:29.991809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:22:29.991819 kernel: landlock: Up and running. Feb 13 15:22:29.991830 kernel: SELinux: Initializing. Feb 13 15:22:29.991853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:22:29.991864 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:22:29.991875 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:22:29.991886 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:22:29.991897 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:22:29.991908 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:22:29.991919 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:22:29.991930 kernel: ... version: 0 Feb 13 15:22:29.991941 kernel: ... bit width: 48 Feb 13 15:22:29.991955 kernel: ... generic registers: 6 Feb 13 15:22:29.991966 kernel: ... value mask: 0000ffffffffffff Feb 13 15:22:29.991977 kernel: ... max period: 00007fffffffffff Feb 13 15:22:29.991988 kernel: ... fixed-purpose events: 0 Feb 13 15:22:29.991999 kernel: ... 
event mask: 000000000000003f Feb 13 15:22:29.992009 kernel: signal: max sigframe size: 1776 Feb 13 15:22:29.992020 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:22:29.992031 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:22:29.992042 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:22:29.992056 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:22:29.992067 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 15:22:29.992078 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:22:29.992089 kernel: smpboot: Max logical packages: 1 Feb 13 15:22:29.992099 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 15:22:29.992110 kernel: devtmpfs: initialized Feb 13 15:22:29.992121 kernel: x86/mm: Memory block size: 128MB Feb 13 15:22:29.992132 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 15:22:29.992143 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 15:22:29.992157 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Feb 13 15:22:29.992168 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 15:22:29.992179 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Feb 13 15:22:29.992190 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 15:22:29.992202 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:22:29.992213 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:22:29.992223 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:22:29.992234 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:22:29.992245 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:22:29.992259 kernel: audit: type=2000 audit(1739460149.277:1): state=initialized audit_enabled=0 res=1 Feb 13 15:22:29.992270 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:22:29.992281 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:22:29.992292 kernel: cpuidle: using governor menu Feb 13 15:22:29.992303 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:22:29.992314 kernel: dca service started, version 1.12.1 Feb 13 15:22:29.992325 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 15:22:29.992336 kernel: PCI: Using configuration type 1 for base access Feb 13 15:22:29.992347 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:22:29.992360 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:22:29.992371 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:22:29.992382 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:22:29.992393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:22:29.992404 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:22:29.992415 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:22:29.992426 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:22:29.992437 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:22:29.992448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:22:29.992462 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:22:29.992473 kernel: ACPI: Interpreter enabled Feb 13 15:22:29.992484 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:22:29.992494 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:22:29.992506 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:22:29.992516 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:22:29.992527 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:22:29.992538 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:22:29.992777 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:22:29.992954 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:22:29.993101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:22:29.993114 kernel: PCI host bridge to bus 0000:00 Feb 13 15:22:29.993273 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:22:29.993407 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:22:29.993541 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:22:29.993683 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Feb 13 15:22:29.993910 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Feb 13 15:22:29.994046 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:22:29.994185 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:22:29.994356 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:22:29.994520 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:22:29.994670 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 15:22:29.994856 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 15:22:29.995008 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 15:22:29.995157 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 15:22:29.995309 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:22:29.995477 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:22:29.995628 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 15:22:29.995805 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 15:22:29.995968 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Feb 13 15:22:29.996135 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:22:29.996287 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 
13 15:22:29.996436 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 15:22:29.996586 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Feb 13 15:22:29.996767 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:22:29.996933 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 15:22:29.997083 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 15:22:29.997230 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Feb 13 15:22:29.997426 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 15:22:29.997639 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:22:29.997884 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:22:29.998043 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:22:29.998196 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 15:22:29.998339 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 15:22:29.998492 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:22:29.998634 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 15:22:29.998649 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:22:29.998660 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:22:29.998670 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:22:29.998685 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:22:29.998695 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:22:29.998705 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:22:29.998716 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:22:29.998741 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:22:29.998751 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:22:29.998761 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:22:29.998772 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:22:29.998782 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:22:29.998796 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:22:29.998806 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:22:29.998817 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:22:29.998827 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 15:22:29.998845 kernel: iommu: Default domain type: Translated Feb 13 15:22:29.998856 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:22:29.998867 kernel: efivars: Registered efivars operations Feb 13 15:22:29.998877 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:22:29.998887 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:22:29.998901 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 15:22:29.998911 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Feb 13 15:22:29.998921 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Feb 13 15:22:29.998931 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Feb 13 15:22:29.998942 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Feb 13 15:22:29.998952 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Feb 13 15:22:29.998962 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Feb 13 15:22:29.998972 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Feb 13 15:22:29.999117 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:22:29.999263 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:22:29.999409 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:22:29.999422 kernel: vgaarb: loaded Feb 13 15:22:29.999433 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:22:29.999444 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:22:29.999461 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:22:29.999484 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:22:29.999501 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:22:29.999527 kernel: pnp: PnP ACPI init Feb 13 15:22:29.999769 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Feb 13 15:22:29.999785 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:22:29.999796 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:22:29.999807 kernel: NET: Registered PF_INET protocol family Feb 13 15:22:29.999852 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:22:29.999867 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:22:29.999878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:22:29.999892 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:22:29.999902 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:22:29.999913 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:22:29.999924 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:22:29.999935 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:22:29.999946 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:22:29.999956 kernel: NET: Registered PF_XDP protocol family Feb 13 15:22:30.000180 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 15:22:30.000332 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 15:22:30.000481 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:22:30.000616 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:22:30.000767 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:22:30.000915 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Feb 13 15:22:30.001050 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Feb 13 15:22:30.001186 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:22:30.001200 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:22:30.001212 kernel: Initialise system trusted keyrings Feb 13 15:22:30.001227 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:22:30.001238 kernel: Key type asymmetric registered Feb 13 15:22:30.001249 kernel: Asymmetric key parser 'x509' registered Feb 13 15:22:30.001260 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:22:30.001271 kernel: io scheduler mq-deadline registered Feb 13 15:22:30.001281 kernel: io scheduler kyber registered Feb 13 15:22:30.001292 kernel: io scheduler bfq registered Feb 13 
15:22:30.001304 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:22:30.001315 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:22:30.001330 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:22:30.001344 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:22:30.001355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:22:30.001366 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:22:30.001378 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:22:30.001389 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:22:30.001403 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:22:30.001560 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:22:30.001700 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 15:22:30.001715 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Feb 13 15:22:30.001984 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:22:29 UTC (1739460149) Feb 13 15:22:30.002144 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 13 15:22:30.002159 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:22:30.002175 kernel: efifb: probing for efifb Feb 13 15:22:30.002187 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 13 15:22:30.002198 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 13 15:22:30.002209 kernel: efifb: scrolling: redraw Feb 13 15:22:30.002220 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:22:30.002234 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:22:30.002245 kernel: fb0: EFI VGA frame buffer device Feb 13 15:22:30.002256 kernel: pstore: Using crash dump compression: deflate Feb 13 15:22:30.002267 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:22:30.002279 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:22:30.002293 kernel: Segment Routing with IPv6 Feb 13 15:22:30.002305 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:22:30.002316 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:22:30.002327 kernel: Key type dns_resolver registered Feb 13 15:22:30.002338 kernel: IPI shorthand broadcast: enabled Feb 13 15:22:30.002350 kernel: sched_clock: Marking stable (613003796, 155061092)->(818158629, -50093741) Feb 13 15:22:30.002361 kernel: registered taskstats version 1 Feb 13 15:22:30.002372 kernel: Loading compiled-in X.509 certificates Feb 13 15:22:30.002384 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 15:22:30.002400 kernel: Key type .fscrypt registered Feb 13 15:22:30.002411 kernel: Key type fscrypt-provisioning registered Feb 13 15:22:30.002422 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:22:30.002433 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:22:30.002445 kernel: ima: No architecture policies found Feb 13 15:22:30.002457 kernel: clk: Disabling unused clocks Feb 13 15:22:30.002469 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:22:30.002481 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:22:30.002497 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:22:30.002510 kernel: Run /init as init process Feb 13 15:22:30.002522 kernel: with arguments: Feb 13 15:22:30.002534 kernel: /init Feb 13 15:22:30.002546 kernel: with environment: Feb 13 15:22:30.002558 kernel: HOME=/ Feb 13 15:22:30.002570 kernel: TERM=linux Feb 13 15:22:30.002582 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:22:30.002600 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:22:30.002622 systemd[1]: Detected virtualization kvm. Feb 13 15:22:30.002635 systemd[1]: Detected architecture x86-64. Feb 13 15:22:30.002648 systemd[1]: Running in initrd. Feb 13 15:22:30.002660 systemd[1]: No hostname configured, using default hostname. Feb 13 15:22:30.002672 systemd[1]: Hostname set to . Feb 13 15:22:30.002686 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:22:30.002698 kernel: hrtimer: interrupt took 4199886 ns Feb 13 15:22:30.002711 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:22:30.002746 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:22:30.002760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:22:30.002774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:22:30.002787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:22:30.002800 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:22:30.002813 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:22:30.002829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:22:30.002858 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:22:30.002872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:22:30.002885 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:22:30.002898 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:22:30.002911 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:22:30.002925 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:22:30.002938 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:22:30.002950 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:22:30.002968 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:22:30.002981 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 15:22:30.002993 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:22:30.003003 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:22:30.003014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:22:30.003025 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:22:30.003036 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:22:30.003046 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:22:30.003060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:22:30.003070 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:22:30.003081 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:22:30.003092 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:22:30.003102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:22:30.003113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:30.003123 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:22:30.003134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:22:30.003145 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:22:30.003189 systemd-journald[193]: Collecting audit messages is disabled. Feb 13 15:22:30.003219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:22:30.003230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:22:30.003242 systemd-journald[193]: Journal started Feb 13 15:22:30.003265 systemd-journald[193]: Runtime Journal (/run/log/journal/6eb8baee90ad4a68b34d4b1e617968c0) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:22:30.000244 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:22:30.005147 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:22:30.020939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:22:30.023201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:22:30.025412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:30.029906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:22:30.036685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:22:30.056771 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:22:30.057875 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:30.062365 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:22:30.063507 kernel: Bridge firewalling registered Feb 13 15:22:30.074181 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:22:30.075720 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:22:30.078250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:22:30.088967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 15:22:30.092041 dracut-cmdline[225]: dracut-dracut-053 Feb 13 15:22:30.098065 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:22:30.114539 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:30.125027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:22:30.168024 systemd-resolved[256]: Positive Trust Anchors: Feb 13 15:22:30.168054 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:22:30.168125 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:22:30.171486 systemd-resolved[256]: Defaulting to hostname 'linux'. Feb 13 15:22:30.172760 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:22:30.186147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:22:30.226803 kernel: SCSI subsystem initialized Feb 13 15:22:30.244909 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:22:30.260771 kernel: iscsi: registered transport (tcp) Feb 13 15:22:30.295039 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:22:30.295122 kernel: QLogic iSCSI HBA Driver Feb 13 15:22:30.383319 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:22:30.397135 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:22:30.428774 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:22:30.428879 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:22:30.428895 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:22:30.476954 kernel: raid6: avx2x4 gen() 27007 MB/s Feb 13 15:22:30.493774 kernel: raid6: avx2x2 gen() 22643 MB/s Feb 13 15:22:30.511080 kernel: raid6: avx2x1 gen() 18711 MB/s Feb 13 15:22:30.511161 kernel: raid6: using algorithm avx2x4 gen() 27007 MB/s Feb 13 15:22:30.530868 kernel: raid6: .... xor() 6218 MB/s, rmw enabled Feb 13 15:22:30.530945 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:22:30.559776 kernel: xor: automatically using best checksumming function avx Feb 13 15:22:30.882771 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:22:30.904953 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:22:30.920043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:22:30.948150 systemd-udevd[415]: Using default interface naming scheme 'v255'. Feb 13 15:22:30.954122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:22:30.979218 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:22:31.005549 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Feb 13 15:22:31.093644 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:22:31.123984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:22:31.205614 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:22:31.224040 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:22:31.250899 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:22:31.261955 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:22:31.268318 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:22:31.271861 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:22:31.297209 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:22:31.302857 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:22:31.330049 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:22:31.331524 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:22:31.331545 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:22:31.331562 kernel: GPT:9289727 != 19775487 Feb 13 15:22:31.331589 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:22:31.331604 kernel: GPT:9289727 != 19775487 Feb 13 15:22:31.331618 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:22:31.331633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:22:31.312288 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:22:31.320851 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:22:31.321015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:31.327186 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:22:31.328491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:22:31.328698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:31.330063 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:31.354778 kernel: libata version 3.00 loaded. Feb 13 15:22:31.358115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:31.363212 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:22:31.363251 kernel: AES CTR mode by8 optimization enabled Feb 13 15:22:31.365836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:22:31.366448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:22:31.377778 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:22:31.401507 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:22:31.401534 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:22:31.401785 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:22:31.401992 kernel: scsi host0: ahci Feb 13 15:22:31.402203 kernel: scsi host1: ahci Feb 13 15:22:31.402419 kernel: scsi host2: ahci Feb 13 15:22:31.402620 kernel: scsi host3: ahci Feb 13 15:22:31.402842 kernel: scsi host4: ahci Feb 13 15:22:31.405913 kernel: scsi host5: ahci Feb 13 15:22:31.406092 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:22:31.406107 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (479) Feb 13 15:22:31.406121 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:22:31.406134 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:22:31.406147 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:22:31.406160 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:22:31.406180 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:22:31.378337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:31.415108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:22:31.419757 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (466) Feb 13 15:22:31.421879 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:22:31.424940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:31.438393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:22:31.451206 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:22:31.452713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:22:31.472966 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:22:31.475021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:22:31.484980 disk-uuid[574]: Primary Header is updated. Feb 13 15:22:31.484980 disk-uuid[574]: Secondary Entries is updated. Feb 13 15:22:31.484980 disk-uuid[574]: Secondary Header is updated. Feb 13 15:22:31.488775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:22:31.493511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:22:31.708786 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:22:31.708878 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:22:31.716763 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:22:31.716853 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:22:31.717760 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:22:31.717780 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:22:31.719124 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:22:31.719140 kernel: ata3.00: applying bridge limits Feb 13 15:22:31.720161 kernel: ata3.00: configured for UDMA/100 Feb 13 15:22:31.720750 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:22:31.770264 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:22:31.792363 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:22:31.792377 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:22:32.498786 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:22:32.499023 disk-uuid[580]: The operation has completed successfully. Feb 13 15:22:32.529896 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:22:32.530026 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:22:32.556107 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:22:32.560204 sh[599]: Success Feb 13 15:22:32.573757 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:22:32.608887 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:22:32.625130 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:22:32.628607 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:22:32.639551 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:22:32.639615 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:22:32.639646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:22:32.640597 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:22:32.641430 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:22:32.646554 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:22:32.647301 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:22:32.658897 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:22:32.661226 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:22:32.671866 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:22:32.671910 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:22:32.671921 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:22:32.675776 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:22:32.685040 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:22:32.686843 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:22:32.696099 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 15:22:32.703905 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:22:32.762819 ignition[693]: Ignition 2.20.0 Feb 13 15:22:32.762832 ignition[693]: Stage: fetch-offline Feb 13 15:22:32.762872 ignition[693]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:32.762882 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:22:32.762973 ignition[693]: parsed url from cmdline: "" Feb 13 15:22:32.762977 ignition[693]: no config URL provided Feb 13 15:22:32.762983 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:22:32.762992 ignition[693]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:22:32.763022 ignition[693]: op(1): [started] loading QEMU firmware config module Feb 13 15:22:32.763027 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:22:32.770660 ignition[693]: op(1): [finished] loading QEMU firmware config module Feb 13 15:22:32.785089 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:22:32.793909 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:22:32.816271 ignition[693]: parsing config with SHA512: d0a1f238447af2f5df3a7fbaef89f756868355c905c5681915e1176733f0cc78495d39b4f0fb62bbae0fe8e80ac9640dbdc1768511162a69386de2ee0cc54cd6 Feb 13 15:22:32.819814 systemd-networkd[787]: lo: Link UP Feb 13 15:22:32.819823 systemd-networkd[787]: lo: Gained carrier Feb 13 15:22:32.822603 unknown[693]: fetched base config from "system" Feb 13 15:22:32.822683 systemd-networkd[787]: Enumeration completed Feb 13 15:22:32.823361 ignition[693]: fetch-offline: fetch-offline passed Feb 13 15:22:32.822861 unknown[693]: fetched user config from "qemu" Feb 13 15:22:32.823466 ignition[693]: Ignition finished successfully Feb 13 15:22:32.822910 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:22:32.825002 systemd[1]: Reached target network.target - Network. Feb 13 15:22:32.826745 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:22:32.827926 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:32.827931 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:22:32.828818 systemd-networkd[787]: eth0: Link UP Feb 13 15:22:32.828822 systemd-networkd[787]: eth0: Gained carrier Feb 13 15:22:32.828829 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:32.829851 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:22:32.837922 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:22:32.841794 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:22:32.852415 ignition[790]: Ignition 2.20.0 Feb 13 15:22:32.852428 ignition[790]: Stage: kargs Feb 13 15:22:32.852614 ignition[790]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:32.852626 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:22:32.853677 ignition[790]: kargs: kargs passed Feb 13 15:22:32.853740 ignition[790]: Ignition finished successfully Feb 13 15:22:32.860468 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:22:32.882933 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:22:32.895805 ignition[800]: Ignition 2.20.0 Feb 13 15:22:32.895816 ignition[800]: Stage: disks Feb 13 15:22:32.895974 ignition[800]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:32.895985 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:22:32.896786 ignition[800]: disks: disks passed Feb 13 15:22:32.896831 ignition[800]: Ignition finished successfully Feb 13 15:22:32.902598 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:22:32.904972 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:22:32.905050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:22:32.907135 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:22:32.909484 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:22:32.911588 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:22:32.927927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:22:32.941471 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:22:32.948118 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:22:32.961851 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:22:33.061774 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:22:33.062316 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:22:33.063163 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:22:33.073814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:22:33.075810 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:22:33.077491 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:22:33.085519 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (819) Feb 13 15:22:33.085539 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:22:33.085550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:22:33.085560 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:22:33.077538 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:22:33.089788 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:22:33.077565 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Feb 13 15:22:33.085955 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:22:33.091053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:22:33.100884 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:22:33.134872 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:22:33.139154 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:22:33.144432 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:22:33.148871 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:22:33.237836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:22:33.246119 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:22:33.250655 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:22:33.257752 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:22:33.276922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:22:33.278982 ignition[933]: INFO : Ignition 2.20.0 Feb 13 15:22:33.278982 ignition[933]: INFO : Stage: mount Feb 13 15:22:33.278982 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:33.278982 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:22:33.278982 ignition[933]: INFO : mount: mount passed Feb 13 15:22:33.278982 ignition[933]: INFO : Ignition finished successfully Feb 13 15:22:33.281303 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:22:33.294944 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:22:33.638941 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:22:33.650917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:22:33.658303 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (946) Feb 13 15:22:33.658359 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:22:33.658370 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:22:33.659161 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:22:33.662778 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:22:33.663599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:22:33.700865 ignition[963]: INFO : Ignition 2.20.0 Feb 13 15:22:33.702152 ignition[963]: INFO : Stage: files Feb 13 15:22:33.702152 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:33.702152 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:22:33.705342 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:22:33.705342 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:22:33.705342 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:22:33.709902 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:22:33.709902 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:22:33.709902 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:22:33.709902 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:22:33.709902 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:22:33.707122 unknown[963]: wrote ssh authorized keys file for user: core Feb 13 15:22:33.771966 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:22:34.134614 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:22:34.134614 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:22:34.138594 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:22:34.486120 systemd-networkd[787]: eth0: Gained IPv6LL Feb 13 15:22:34.715188 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:22:34.953789 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:22:34.953789 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:22:34.958198 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 15:22:35.240979 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:22:35.856639 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:22:35.856639 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 15:22:35.861222 ignition[963]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:22:35.891059 ignition[963]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:22:35.896595 ignition[963]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:22:35.898596 ignition[963]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:22:35.898596 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:22:35.898596 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:22:35.898596 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:22:35.898596 
ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:22:35.898596 ignition[963]: INFO : files: files passed Feb 13 15:22:35.898596 ignition[963]: INFO : Ignition finished successfully Feb 13 15:22:35.900072 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:22:35.914877 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:22:35.918004 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:22:35.920023 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:22:35.920171 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:22:35.928314 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:22:35.930951 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:22:35.930951 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:22:35.934025 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:22:35.937987 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:22:35.940887 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:22:35.955876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:22:35.979173 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:22:35.979312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:22:35.981652 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:22:35.983757 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:22:35.985808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:22:35.986603 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:22:36.004136 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:22:36.006989 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:22:36.020389 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:22:36.021885 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:22:36.024363 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:22:36.026622 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:22:36.026831 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:22:36.029263 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:22:36.030909 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:22:36.033050 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:22:36.035205 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:22:36.037375 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:22:36.039594 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Feb 13 15:22:36.041894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:22:36.044315 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:22:36.046707 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:22:36.049013 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:22:36.050883 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:22:36.051030 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:22:36.053430 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:22:36.054952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:22:36.057247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:22:36.057363 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:22:36.059665 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:22:36.059875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:22:36.062556 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:22:36.062668 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:22:36.064782 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:22:36.066719 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:22:36.066889 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:22:36.069496 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:22:36.071397 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:22:36.073490 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:22:36.073644 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:22:36.075594 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:22:36.075753 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:22:36.077831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:22:36.078001 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:22:36.079967 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:22:36.080128 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:22:36.095177 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:22:36.098412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:22:36.098525 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:22:36.098697 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:22:36.099268 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:22:36.099402 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:22:36.107201 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:22:36.107340 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 15:22:36.111138 ignition[1017]: INFO : Ignition 2.20.0 Feb 13 15:22:36.111138 ignition[1017]: INFO : Stage: umount Feb 13 15:22:36.111138 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:22:36.111138 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:22:36.111138 ignition[1017]: INFO : umount: umount passed Feb 13 15:22:36.111138 ignition[1017]: INFO : Ignition finished successfully Feb 13 15:22:36.112856 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:22:36.113020 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:22:36.115956 systemd[1]: Stopped target network.target - Network. Feb 13 15:22:36.117376 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:22:36.117455 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:22:36.119325 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:22:36.119374 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:22:36.121288 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:22:36.121337 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:22:36.123423 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:22:36.123489 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:22:36.125678 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:22:36.127790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:22:36.129790 systemd-networkd[787]: eth0: DHCPv6 lease lost Feb 13 15:22:36.131233 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:22:36.131846 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:22:36.131998 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:22:36.135529 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:22:36.135638 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:22:36.151968 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:22:36.153981 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:22:36.154057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:22:36.158402 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:22:36.161827 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:22:36.162011 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:22:36.177048 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:22:36.177228 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:22:36.385772 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:22:36.386751 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:22:36.388973 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:22:36.389967 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:22:36.394426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:22:36.394488 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:22:36.400206 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 15:22:36.400255 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:22:36.403671 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:22:36.403788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:22:36.406917 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:22:36.406978 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:22:36.409881 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:22:36.409943 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:22:36.413367 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:22:36.413433 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:22:36.427874 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:22:36.429051 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:22:36.429106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:36.429348 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:22:36.429391 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:22:36.429675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:22:36.429743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:22:36.430205 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:22:36.430248 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:22:36.430552 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:22:36.430595 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:22:36.431097 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:22:36.431138 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:22:36.431434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:22:36.431483 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:36.445222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:22:36.445339 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:22:36.447756 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:22:36.452938 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:22:36.470653 systemd[1]: Switching root. Feb 13 15:22:36.503328 systemd-journald[193]: Journal stopped Feb 13 15:22:37.630061 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:22:37.630138 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:22:37.630156 kernel: SELinux: policy capability open_perms=1 Feb 13 15:22:37.630172 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:22:37.630186 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:22:37.630201 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:22:37.630225 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:22:37.630239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:22:37.630259 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:22:37.630282 kernel: audit: type=1403 audit(1739460156.880:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:22:37.630300 systemd[1]: Successfully loaded SELinux policy in 44.491ms. Feb 13 15:22:37.630325 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.843ms. Feb 13 15:22:37.630344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:22:37.630382 systemd[1]: Detected virtualization kvm. Feb 13 15:22:37.630399 systemd[1]: Detected architecture x86-64. Feb 13 15:22:37.630432 systemd[1]: Detected first boot. Feb 13 15:22:37.630455 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:22:37.630471 zram_generator::config[1061]: No configuration found. Feb 13 15:22:37.630489 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:22:37.630506 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:22:37.630522 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:22:37.630539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:22:37.630557 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:22:37.630574 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:22:37.630592 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:22:37.630611 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:22:37.630629 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:22:37.630663 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:22:37.630681 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:22:37.630697 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:22:37.630716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:22:37.630811 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:22:37.630829 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:22:37.630851 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:22:37.630868 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 15:22:37.630884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:22:37.630900 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:22:37.630916 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:22:37.630932 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:22:37.630949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:22:37.630965 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:22:37.630985 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:22:37.631001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:22:37.631019 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:22:37.631035 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:22:37.631053 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:22:37.631069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:22:37.631087 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:22:37.631103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:22:37.631119 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:22:37.631138 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:22:37.631155 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:22:37.631172 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:22:37.631188 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:22:37.631205 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:22:37.631222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:22:37.631238 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:22:37.631253 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:22:37.631268 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:22:37.631290 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:22:37.631307 systemd[1]: Reached target machines.target - Containers. Feb 13 15:22:37.631322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:22:37.631338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:22:37.631356 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:22:37.631372 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:22:37.631388 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:22:37.631407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:22:37.631427 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:22:37.631443 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Feb 13 15:22:37.631459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:22:37.631477 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:22:37.631493 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:22:37.631510 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:22:37.631526 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:22:37.631544 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:22:37.631567 kernel: loop: module loaded Feb 13 15:22:37.631586 kernel: fuse: init (API version 7.39) Feb 13 15:22:37.631602 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:22:37.631617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:22:37.631632 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:22:37.631657 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:22:37.631698 systemd-journald[1128]: Collecting audit messages is disabled. Feb 13 15:22:37.631738 kernel: ACPI: bus type drm_connector registered Feb 13 15:22:37.631754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:22:37.631773 systemd-journald[1128]: Journal started Feb 13 15:22:37.631801 systemd-journald[1128]: Runtime Journal (/run/log/journal/6eb8baee90ad4a68b34d4b1e617968c0) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:22:37.402616 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:22:37.419588 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:22:37.420083 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:22:37.638968 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:22:37.639029 systemd[1]: Stopped verity-setup.service. Feb 13 15:22:37.639051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:22:37.643760 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:22:37.646389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:22:37.648159 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:22:37.649968 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:22:37.651441 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:22:37.652907 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:22:37.654374 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:22:37.656100 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:22:37.657992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:22:37.659919 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:22:37.660151 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:22:37.662076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:22:37.662301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:22:37.664053 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 15:22:37.664231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:22:37.665779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:22:37.666001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:22:37.668162 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:22:37.668459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:22:37.670414 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:22:37.670737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:22:37.672823 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:22:37.714256 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:22:37.717238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:22:37.727271 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:22:37.738817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:22:37.741328 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:22:37.742606 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:22:37.742660 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:22:37.745435 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:22:37.748451 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:22:37.753430 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:22:37.755023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:22:37.758715 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:22:37.761422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:22:37.762890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:22:37.765907 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:22:37.767460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:22:37.769013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:22:37.776503 systemd-journald[1128]: Time spent on flushing to /var/log/journal/6eb8baee90ad4a68b34d4b1e617968c0 is 35.322ms for 1045 entries. Feb 13 15:22:37.776503 systemd-journald[1128]: System Journal (/var/log/journal/6eb8baee90ad4a68b34d4b1e617968c0) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:22:37.888488 systemd-journald[1128]: Received client request to flush runtime journal. Feb 13 15:22:37.888547 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 15:22:37.888581 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:22:37.780717 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Feb 13 15:22:37.784933 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:22:37.789148 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:22:37.808862 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:22:37.811181 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:22:37.834888 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:22:37.842128 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:22:37.852985 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:22:37.864974 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:22:37.874975 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:22:37.888141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:37.890144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:22:37.895004 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 15:22:37.895026 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 15:22:37.895765 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:22:37.904307 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 15:22:37.904133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:22:37.910924 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:22:37.912966 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:22:37.913575 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:22:37.941537 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:22:37.944746 kernel: loop2: detected capacity change from 0 to 140992 Feb 13 15:22:37.949924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:22:37.968928 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Feb 13 15:22:37.968949 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Feb 13 15:22:37.974834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:22:38.029815 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 15:22:38.051794 kernel: loop4: detected capacity change from 0 to 205544 Feb 13 15:22:38.058767 kernel: loop5: detected capacity change from 0 to 140992 Feb 13 15:22:38.069155 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:22:38.070819 (sd-merge)[1203]: Merged extensions into '/usr'. Feb 13 15:22:38.075511 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:22:38.075530 systemd[1]: Reloading... Feb 13 15:22:38.145777 zram_generator::config[1226]: No configuration found. Feb 13 15:22:38.392886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 15:22:38.458108 systemd[1]: Reloading finished in 382 ms. Feb 13 15:22:38.485298 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:22:38.491024 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:22:38.507883 systemd[1]: Starting ensure-sysext.service... Feb 13 15:22:38.512911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:22:38.540268 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:22:38.540640 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:22:38.541613 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:22:38.541950 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Feb 13 15:22:38.542031 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Feb 13 15:22:38.545770 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:22:38.545783 systemd-tmpfiles[1266]: Skipping /boot Feb 13 15:22:38.549949 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:22:38.549967 systemd[1]: Reloading... Feb 13 15:22:38.566919 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:22:38.567067 systemd-tmpfiles[1266]: Skipping /boot Feb 13 15:22:38.625193 zram_generator::config[1297]: No configuration found. Feb 13 15:22:38.742604 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:22:38.800526 systemd[1]: Reloading finished in 250 ms. Feb 13 15:22:38.824142 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:22:38.836218 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:22:38.837846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:22:38.860043 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:22:38.862815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:22:38.865179 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:22:38.869065 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:22:38.877911 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:22:38.879950 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:22:38.884867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:22:38.885088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:22:38.886826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:22:38.896045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:22:38.903025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 15:22:38.904277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:22:38.908992 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:22:38.910044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:22:38.911494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:22:38.912401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:22:38.914339 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:22:38.914540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:22:38.916522 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:22:38.916933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:22:38.921414 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Feb 13 15:22:38.927211 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:22:38.928460 augenrules[1363]: No rules Feb 13 15:22:38.929427 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:22:38.929644 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:22:38.941882 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:22:38.947071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:22:38.949011 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:22:38.954500 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:22:38.962001 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:22:38.963428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:22:38.965867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:22:38.969931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:22:38.975127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:22:38.992985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:22:38.996095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:22:38.999880 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:22:39.012580 augenrules[1382]: /sbin/augenrules: No change Feb 13 15:22:39.004408 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:22:39.005662 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:22:39.005692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:22:39.006123 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:22:39.015412 systemd[1]: Finished ensure-sysext.service. 
Feb 13 15:22:39.018055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:22:39.018264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:22:39.019980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:22:39.020145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:22:39.022261 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:22:39.022430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:22:39.025233 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:22:39.025404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:22:39.027087 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:22:39.047479 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:22:39.048695 augenrules[1429]: No rules Feb 13 15:22:39.080606 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1377) Feb 13 15:22:39.049358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:22:39.049420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:22:39.109900 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:22:39.111534 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:22:39.111867 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:22:39.119045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:22:39.127941 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:22:39.139283 systemd-resolved[1338]: Positive Trust Anchors: Feb 13 15:22:39.139307 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:22:39.139337 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:22:39.141682 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:22:39.147741 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:22:39.149663 systemd-resolved[1338]: Defaulting to hostname 'linux'. Feb 13 15:22:39.153746 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:22:39.155395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:22:39.156940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 15:22:39.183440 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 15:22:39.183929 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:22:39.184111 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 15:22:39.184129 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:22:39.184334 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:22:39.245905 systemd-networkd[1409]: lo: Link UP Feb 13 15:22:39.245916 systemd-networkd[1409]: lo: Gained carrier Feb 13 15:22:39.247651 systemd-networkd[1409]: Enumeration completed Feb 13 15:22:39.247766 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:22:39.249028 systemd[1]: Reached target network.target - Network. Feb 13 15:22:39.249697 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:39.249702 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:22:39.252349 systemd-networkd[1409]: eth0: Link UP Feb 13 15:22:39.252362 systemd-networkd[1409]: eth0: Gained carrier Feb 13 15:22:39.252374 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:22:39.267134 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:22:39.281876 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.18/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:22:39.290321 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:22:39.291681 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:22:39.745663 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:22:39.745707 systemd-timesyncd[1437]: Initial clock synchronization to Thu 2025-02-13 15:22:39.745567 UTC. Feb 13 15:22:39.746058 systemd-resolved[1338]: Clock change detected. Flushing caches. Feb 13 15:22:39.792356 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:22:39.793985 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:39.802766 kernel: kvm_amd: TSC scaling supported Feb 13 15:22:39.802854 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:22:39.802869 kernel: kvm_amd: Nested Paging enabled Feb 13 15:22:39.802892 kernel: kvm_amd: LBR virtualization supported Feb 13 15:22:39.804527 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:22:39.804587 kernel: kvm_amd: Virtual GIF supported Feb 13 15:22:39.806228 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:22:39.806714 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:39.822531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:22:39.850364 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:22:39.889747 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:22:39.897577 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:22:39.899297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:22:39.906805 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 15:22:39.945460 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:22:39.953506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:22:39.954788 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:22:39.956039 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:22:39.957388 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:22:39.958821 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:22:39.960017 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:22:39.961263 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:22:39.962494 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:22:39.962525 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:22:39.963457 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:22:39.973806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:22:39.976619 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:22:39.986167 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:22:39.988550 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:22:39.990146 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:22:39.991392 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:22:39.992544 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:22:39.993582 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:22:39.993611 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:22:39.994603 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:22:39.996813 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:22:39.999423 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:22:40.001467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:22:40.005276 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:22:40.007789 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:22:40.009362 jq[1471]: false Feb 13 15:22:40.010492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:22:40.013432 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:22:40.025474 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:22:40.029779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:22:40.037535 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:22:40.039400 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 15:22:40.040034 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:22:40.041458 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:22:40.047361 extend-filesystems[1472]: Found loop3 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found loop4 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found loop5 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found sr0 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda1 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda2 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda3 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found usr Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda4 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda6 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda7 Feb 13 15:22:40.047361 extend-filesystems[1472]: Found vda9 Feb 13 15:22:40.047361 extend-filesystems[1472]: Checking size of /dev/vda9 Feb 13 15:22:40.046425 dbus-daemon[1470]: [system] SELinux support is enabled Feb 13 15:22:40.048730 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:22:40.107676 extend-filesystems[1472]: Resized partition /dev/vda9 Feb 13 15:22:40.108834 update_engine[1485]: I20250213 15:22:40.059541 1485 main.cc:92] Flatcar Update Engine starting Feb 13 15:22:40.108834 update_engine[1485]: I20250213 15:22:40.060802 1485 update_check_scheduler.cc:74] Next update check in 5m39s Feb 13 15:22:40.052810 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:22:40.062214 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:22:40.109307 jq[1486]: true Feb 13 15:22:40.091527 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:22:40.091985 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:22:40.093432 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:22:40.093650 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:22:40.104749 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:22:40.104971 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:22:40.116240 systemd-logind[1481]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:22:40.116261 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:22:40.116462 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:22:40.120506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1380) Feb 13 15:22:40.121537 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:22:40.135145 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:22:40.138031 tar[1494]: linux-amd64/helm Feb 13 15:22:40.137910 systemd-logind[1481]: New seat seat0. Feb 13 15:22:40.142765 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 15:22:40.159366 jq[1496]: true Feb 13 15:22:40.179773 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:22:40.179918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:22:40.181303 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:22:40.181428 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:22:40.184535 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:22:40.224670 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:22:40.237813 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:22:40.258933 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:22:40.268553 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:22:40.276540 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:22:40.276778 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:22:40.291697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:22:40.294395 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:22:40.391905 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:22:40.401794 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:22:40.404790 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:22:40.406381 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:22:40.538355 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:22:40.567716 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:22:40.567716 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:22:40.567716 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:22:40.573611 extend-filesystems[1472]: Resized filesystem in /dev/vda9 Feb 13 15:22:40.569382 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:22:40.569691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:22:40.580298 bash[1528]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:22:40.582745 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:22:40.592357 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:22:40.812055 containerd[1497]: time="2025-02-13T15:22:40.811967950Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:22:40.821044 tar[1494]: linux-amd64/LICENSE Feb 13 15:22:40.821044 tar[1494]: linux-amd64/README.md Feb 13 15:22:40.836933 containerd[1497]: time="2025-02-13T15:22:40.836856184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.838985 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
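A quick size check on the ext4 online resize recorded above, assuming the 4 KiB block size that the resize2fs output itself reports; the block counts come from the "resizing filesystem from 553472 to 1864699 blocks" message, and the GiB figures are computed here rather than taken from the journal. A minimal Python sketch:

# Size check for the /dev/vda9 online resize logged above.
# Block counts come from the kernel message "resizing filesystem from
# 553472 to 1864699 blocks"; the 4 KiB block size comes from the
# resize2fs line "now 1864699 (4k) blocks long". GiB values are derived here.

BLOCK_SIZE = 4096        # bytes per ext4 block ("(4k)")
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_864_699

def gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")  # ~2.11 GiB
print(f"after resize:  {gib(NEW_BLOCKS):.2f} GiB")  # ~7.11 GiB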
Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839136891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839191003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839212142Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839506384Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839537442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839634655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839651536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839917946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839944987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839965425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840224 containerd[1497]: time="2025-02-13T15:22:40.839978580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840541 containerd[1497]: time="2025-02-13T15:22:40.840121438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840541 containerd[1497]: time="2025-02-13T15:22:40.840430437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840604 containerd[1497]: time="2025-02-13T15:22:40.840556764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:22:40.840604 containerd[1497]: time="2025-02-13T15:22:40.840569478Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:22:40.840693 containerd[1497]: time="2025-02-13T15:22:40.840672351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 15:22:40.840750 containerd[1497]: time="2025-02-13T15:22:40.840732343Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:22:40.847809 containerd[1497]: time="2025-02-13T15:22:40.847774339Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:22:40.847854 containerd[1497]: time="2025-02-13T15:22:40.847824984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:22:40.847854 containerd[1497]: time="2025-02-13T15:22:40.847840874Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:22:40.847894 containerd[1497]: time="2025-02-13T15:22:40.847860000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:22:40.847894 containerd[1497]: time="2025-02-13T15:22:40.847875048Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:22:40.848051 containerd[1497]: time="2025-02-13T15:22:40.848022795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:22:40.850250 containerd[1497]: time="2025-02-13T15:22:40.850203886Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:22:40.850432 containerd[1497]: time="2025-02-13T15:22:40.850403791Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:22:40.850472 containerd[1497]: time="2025-02-13T15:22:40.850429238Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:22:40.850472 containerd[1497]: time="2025-02-13T15:22:40.850450258Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:22:40.850523 containerd[1497]: time="2025-02-13T15:22:40.850470155Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850523 containerd[1497]: time="2025-02-13T15:22:40.850487748Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850523 containerd[1497]: time="2025-02-13T15:22:40.850504109Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850609 containerd[1497]: time="2025-02-13T15:22:40.850521852Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850609 containerd[1497]: time="2025-02-13T15:22:40.850540737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850609 containerd[1497]: time="2025-02-13T15:22:40.850559122Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850609 containerd[1497]: time="2025-02-13T15:22:40.850577216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:22:40.850609 containerd[1497]: time="2025-02-13T15:22:40.850593737Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 15:22:40.850729 containerd[1497]: time="2025-02-13T15:22:40.850636888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850729 containerd[1497]: time="2025-02-13T15:22:40.850657396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850729 containerd[1497]: time="2025-02-13T15:22:40.850676813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850729 containerd[1497]: time="2025-02-13T15:22:40.850693233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850729 containerd[1497]: time="2025-02-13T15:22:40.850709674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850729 containerd[1497]: time="2025-02-13T15:22:40.850726897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850742496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850760409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850777772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850797058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850821253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850837845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850853504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850871758Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:22:40.850896 containerd[1497]: time="2025-02-13T15:22:40.850897506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.850916713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.850931831Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851003345Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851027460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851042789Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851059851Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851072665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851093945Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:22:40.851114 containerd[1497]: time="2025-02-13T15:22:40.851108973Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:22:40.851386 containerd[1497]: time="2025-02-13T15:22:40.851125644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:22:40.851584 containerd[1497]: time="2025-02-13T15:22:40.851530003Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:22:40.851771 containerd[1497]: time="2025-02-13T15:22:40.851582541Z" level=info msg="Connect containerd service" Feb 13 15:22:40.851771 containerd[1497]: time="2025-02-13T15:22:40.851623969Z" level=info msg="using legacy CRI server" Feb 13 15:22:40.851771 containerd[1497]: time="2025-02-13T15:22:40.851636442Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:22:40.851861 containerd[1497]: time="2025-02-13T15:22:40.851776916Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:22:40.852433 containerd[1497]: time="2025-02-13T15:22:40.852401727Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:22:40.852680 containerd[1497]: time="2025-02-13T15:22:40.852576676Z" level=info msg="Start subscribing containerd event" Feb 13 15:22:40.852680 containerd[1497]: time="2025-02-13T15:22:40.852656796Z" level=info msg="Start recovering state" Feb 13 15:22:40.852868 containerd[1497]: time="2025-02-13T15:22:40.852836463Z" level=info msg="Start event monitor" Feb 13 15:22:40.852923 containerd[1497]: time="2025-02-13T15:22:40.852869946Z" level=info msg="Start snapshots syncer" Feb 13 15:22:40.852923 containerd[1497]: time="2025-02-13T15:22:40.852840440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:22:40.852923 containerd[1497]: time="2025-02-13T15:22:40.852883090Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:22:40.852997 containerd[1497]: time="2025-02-13T15:22:40.852926512Z" level=info msg="Start streaming server" Feb 13 15:22:40.852997 containerd[1497]: time="2025-02-13T15:22:40.852946850Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:22:40.853074 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:22:40.854343 containerd[1497]: time="2025-02-13T15:22:40.853175989Z" level=info msg="containerd successfully booted in 0.044518s" Feb 13 15:22:41.525537 systemd-networkd[1409]: eth0: Gained IPv6LL Feb 13 15:22:41.528996 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:22:41.531185 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:22:41.547634 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:22:41.550689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:22:41.553260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:22:41.570953 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:22:41.571252 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:22:41.573022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:22:41.578133 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:22:42.672347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:22:42.673953 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:22:42.675233 systemd[1]: Startup finished in 775ms (kernel) + 7.140s (initrd) + 5.390s (userspace) = 13.305s. Feb 13 15:22:42.677254 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:22:43.283939 kubelet[1583]: E0213 15:22:43.283858 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:22:43.287895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:22:43.288097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:22:43.288448 systemd[1]: kubelet.service: Consumed 1.609s CPU time. Feb 13 15:22:46.338535 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:22:46.339836 systemd[1]: Started sshd@0-10.0.0.18:22-10.0.0.1:47232.service - OpenSSH per-connection server daemon (10.0.0.1:47232). Feb 13 15:22:46.390291 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 47232 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:46.392226 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:46.402322 systemd-logind[1481]: New session 1 of user core. Feb 13 15:22:46.404010 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:22:46.417593 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:22:46.430042 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:22:46.440640 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:22:46.443619 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:22:46.545239 systemd[1600]: Queued start job for default target default.target. Feb 13 15:22:46.556860 systemd[1600]: Created slice app.slice - User Application Slice. Feb 13 15:22:46.556890 systemd[1600]: Reached target paths.target - Paths. Feb 13 15:22:46.556906 systemd[1600]: Reached target timers.target - Timers. Feb 13 15:22:46.558678 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:22:46.570603 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:22:46.570761 systemd[1600]: Reached target sockets.target - Sockets. Feb 13 15:22:46.570786 systemd[1600]: Reached target basic.target - Basic System. Feb 13 15:22:46.570842 systemd[1600]: Reached target default.target - Main User Target. Feb 13 15:22:46.570899 systemd[1600]: Startup finished in 120ms. Feb 13 15:22:46.571312 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:22:46.572782 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:22:46.637479 systemd[1]: Started sshd@1-10.0.0.18:22-10.0.0.1:47238.service - OpenSSH per-connection server daemon (10.0.0.1:47238). 
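The kubelet exit above is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is only written when kubeadm init/join runs, so the unit keeps exiting until then. As an illustration only (the path is taken from the error message; the polling loop and the systemctl query are assumptions, not anything the log shows), a watcher could look like this:

# Wait for the kubelet config file whose absence causes the exit above,
# then report the systemd state of kubelet.service. Illustrative sketch:
# the file path is from the journal error; the 10 s polling interval and
# the use of `systemctl is-active` are assumptions made for this example.
import subprocess
import time
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_state() -> str:
    """Return the active-state of kubelet.service ('active', 'failed', ...)."""
    out = subprocess.run(
        ["systemctl", "is-active", "kubelet.service"],
        capture_output=True, text=True,
    )
    return out.stdout.strip() or "unknown"

while not KUBELET_CONFIG.exists():
    print(f"{KUBELET_CONFIG} missing; kubelet is {kubelet_state()}")
    time.sleep(10)
print(f"{KUBELET_CONFIG} present; kubelet is {kubelet_state()}")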
Feb 13 15:22:46.682856 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 47238 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:46.684643 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:46.688944 systemd-logind[1481]: New session 2 of user core. Feb 13 15:22:46.702462 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:22:46.755578 sshd[1613]: Connection closed by 10.0.0.1 port 47238 Feb 13 15:22:46.755881 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:46.770121 systemd[1]: sshd@1-10.0.0.18:22-10.0.0.1:47238.service: Deactivated successfully. Feb 13 15:22:46.771883 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:22:46.773258 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:22:46.774462 systemd[1]: Started sshd@2-10.0.0.18:22-10.0.0.1:47252.service - OpenSSH per-connection server daemon (10.0.0.1:47252). Feb 13 15:22:46.775194 systemd-logind[1481]: Removed session 2. Feb 13 15:22:46.821384 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 47252 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:46.823199 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:46.827457 systemd-logind[1481]: New session 3 of user core. Feb 13 15:22:46.837550 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:22:46.886900 sshd[1620]: Connection closed by 10.0.0.1 port 47252 Feb 13 15:22:46.887370 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:46.900251 systemd[1]: sshd@2-10.0.0.18:22-10.0.0.1:47252.service: Deactivated successfully. Feb 13 15:22:46.902013 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:22:46.903779 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:22:46.905141 systemd[1]: Started sshd@3-10.0.0.18:22-10.0.0.1:47254.service - OpenSSH per-connection server daemon (10.0.0.1:47254). Feb 13 15:22:46.906098 systemd-logind[1481]: Removed session 3. Feb 13 15:22:46.949540 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 47254 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:46.951381 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:46.955507 systemd-logind[1481]: New session 4 of user core. Feb 13 15:22:46.966521 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:22:47.021546 sshd[1627]: Connection closed by 10.0.0.1 port 47254 Feb 13 15:22:47.021895 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:47.032554 systemd[1]: sshd@3-10.0.0.18:22-10.0.0.1:47254.service: Deactivated successfully. Feb 13 15:22:47.034414 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:22:47.035704 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:22:47.036958 systemd[1]: Started sshd@4-10.0.0.18:22-10.0.0.1:47268.service - OpenSSH per-connection server daemon (10.0.0.1:47268). Feb 13 15:22:47.037761 systemd-logind[1481]: Removed session 4. 
Feb 13 15:22:47.091901 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 47268 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:47.093571 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:47.097707 systemd-logind[1481]: New session 5 of user core. Feb 13 15:22:47.107542 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:22:47.168739 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:22:47.169108 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:22:47.185208 sudo[1635]: pam_unix(sudo:session): session closed for user root Feb 13 15:22:47.187058 sshd[1634]: Connection closed by 10.0.0.1 port 47268 Feb 13 15:22:47.187514 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:47.201493 systemd[1]: sshd@4-10.0.0.18:22-10.0.0.1:47268.service: Deactivated successfully. Feb 13 15:22:47.203197 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:22:47.204907 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:22:47.206432 systemd[1]: Started sshd@5-10.0.0.18:22-10.0.0.1:47284.service - OpenSSH per-connection server daemon (10.0.0.1:47284). Feb 13 15:22:47.207367 systemd-logind[1481]: Removed session 5. Feb 13 15:22:47.249854 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 47284 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:47.251748 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:47.256453 systemd-logind[1481]: New session 6 of user core. Feb 13 15:22:47.267486 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:22:47.323425 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:22:47.323758 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:22:47.327720 sudo[1644]: pam_unix(sudo:session): session closed for user root Feb 13 15:22:47.336304 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:22:47.337140 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:22:47.359743 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:22:47.391060 augenrules[1666]: No rules Feb 13 15:22:47.393133 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:22:47.393519 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:22:47.395106 sudo[1643]: pam_unix(sudo:session): session closed for user root Feb 13 15:22:47.396897 sshd[1642]: Connection closed by 10.0.0.1 port 47284 Feb 13 15:22:47.397254 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:47.411109 systemd[1]: sshd@5-10.0.0.18:22-10.0.0.1:47284.service: Deactivated successfully. Feb 13 15:22:47.413473 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:22:47.415307 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:22:47.416730 systemd[1]: Started sshd@6-10.0.0.18:22-10.0.0.1:47288.service - OpenSSH per-connection server daemon (10.0.0.1:47288). Feb 13 15:22:47.417507 systemd-logind[1481]: Removed session 6. 
Feb 13 15:22:47.461604 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 47288 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:22:47.463044 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:47.467218 systemd-logind[1481]: New session 7 of user core. Feb 13 15:22:47.484529 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:22:47.539122 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:22:47.539487 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:22:48.098868 (dockerd)[1697]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:22:48.098891 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:22:48.596400 dockerd[1697]: time="2025-02-13T15:22:48.596308196Z" level=info msg="Starting up" Feb 13 15:22:48.813703 systemd[1]: var-lib-docker-metacopy\x2dcheck1898411299-merged.mount: Deactivated successfully. Feb 13 15:22:48.842352 dockerd[1697]: time="2025-02-13T15:22:48.842281554Z" level=info msg="Loading containers: start." Feb 13 15:22:49.033424 kernel: Initializing XFRM netlink socket Feb 13 15:22:49.125161 systemd-networkd[1409]: docker0: Link UP Feb 13 15:22:49.159010 dockerd[1697]: time="2025-02-13T15:22:49.158935764Z" level=info msg="Loading containers: done." Feb 13 15:22:49.178439 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1685609813-merged.mount: Deactivated successfully. Feb 13 15:22:49.182432 dockerd[1697]: time="2025-02-13T15:22:49.182295271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:22:49.182628 dockerd[1697]: time="2025-02-13T15:22:49.182530372Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:22:49.182728 dockerd[1697]: time="2025-02-13T15:22:49.182697726Z" level=info msg="Daemon has completed initialization" Feb 13 15:22:49.420304 dockerd[1697]: time="2025-02-13T15:22:49.420208423Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:22:49.420532 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:22:51.038020 containerd[1497]: time="2025-02-13T15:22:51.037974723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:22:51.982545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109700454.mount: Deactivated successfully. 
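Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A minimal stdlib-only sketch of talking to it (the socket path is taken from the log line; it assumes the caller has permission to open the socket, i.e. root or the docker group):

# Query the Docker Engine API over the unix socket announced in the log.
# Stdlib only; assumes read/write access to /run/docker.sock.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix-domain socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(json.loads(conn.getresponse().read()))  # daemon version, API version, etc.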
Feb 13 15:22:52.975537 containerd[1497]: time="2025-02-13T15:22:52.975460650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:52.976423 containerd[1497]: time="2025-02-13T15:22:52.976352643Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 15:22:52.978380 containerd[1497]: time="2025-02-13T15:22:52.978309423Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:52.981412 containerd[1497]: time="2025-02-13T15:22:52.981342811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:52.982346 containerd[1497]: time="2025-02-13T15:22:52.982303293Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.944279347s" Feb 13 15:22:52.982396 containerd[1497]: time="2025-02-13T15:22:52.982369397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 15:22:52.984072 containerd[1497]: time="2025-02-13T15:22:52.984046552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:22:53.413099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:22:53.425592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:22:53.597196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:22:53.601778 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:22:53.644377 kubelet[1956]: E0213 15:22:53.644215 1956 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:22:53.651264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:22:53.651497 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:22:54.289879 containerd[1497]: time="2025-02-13T15:22:54.289789466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:54.290626 containerd[1497]: time="2025-02-13T15:22:54.290587653Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 15:22:54.292019 containerd[1497]: time="2025-02-13T15:22:54.291952683Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:54.294712 containerd[1497]: time="2025-02-13T15:22:54.294684406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:54.295788 containerd[1497]: time="2025-02-13T15:22:54.295756797Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.31168098s" Feb 13 15:22:54.295829 containerd[1497]: time="2025-02-13T15:22:54.295788647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 15:22:54.296409 containerd[1497]: time="2025-02-13T15:22:54.296377962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:22:56.113719 containerd[1497]: time="2025-02-13T15:22:56.113648453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:56.114545 containerd[1497]: time="2025-02-13T15:22:56.114497415Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 15:22:56.115969 containerd[1497]: time="2025-02-13T15:22:56.115937185Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:56.119480 containerd[1497]: time="2025-02-13T15:22:56.119447117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:56.120444 containerd[1497]: time="2025-02-13T15:22:56.120403561Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.823988379s" Feb 13 15:22:56.120444 containerd[1497]: time="2025-02-13T15:22:56.120436743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 15:22:56.121073 
containerd[1497]: time="2025-02-13T15:22:56.121026800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:22:57.591863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156423210.mount: Deactivated successfully. Feb 13 15:22:58.043486 containerd[1497]: time="2025-02-13T15:22:58.043412399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:58.045414 containerd[1497]: time="2025-02-13T15:22:58.045365251Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 15:22:58.047512 containerd[1497]: time="2025-02-13T15:22:58.047468976Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:58.050088 containerd[1497]: time="2025-02-13T15:22:58.050042382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:58.050889 containerd[1497]: time="2025-02-13T15:22:58.050854966Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.92978765s" Feb 13 15:22:58.050933 containerd[1497]: time="2025-02-13T15:22:58.050891495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 15:22:58.051734 containerd[1497]: time="2025-02-13T15:22:58.051694871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:22:58.624877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266716568.mount: Deactivated successfully. 
Feb 13 15:22:59.728082 containerd[1497]: time="2025-02-13T15:22:59.728026902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:59.728978 containerd[1497]: time="2025-02-13T15:22:59.728935947Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:22:59.730296 containerd[1497]: time="2025-02-13T15:22:59.730262054Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:59.734681 containerd[1497]: time="2025-02-13T15:22:59.733403425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:59.738841 containerd[1497]: time="2025-02-13T15:22:59.738777494Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.687044952s" Feb 13 15:22:59.738841 containerd[1497]: time="2025-02-13T15:22:59.738831916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:22:59.739503 containerd[1497]: time="2025-02-13T15:22:59.739460044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:23:00.246648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770133214.mount: Deactivated successfully. 
Feb 13 15:23:00.251252 containerd[1497]: time="2025-02-13T15:23:00.251215307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:00.251905 containerd[1497]: time="2025-02-13T15:23:00.251864224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 15:23:00.252989 containerd[1497]: time="2025-02-13T15:23:00.252948738Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:00.255104 containerd[1497]: time="2025-02-13T15:23:00.255060388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:00.255810 containerd[1497]: time="2025-02-13T15:23:00.255767755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 516.270411ms" Feb 13 15:23:00.255810 containerd[1497]: time="2025-02-13T15:23:00.255797811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 15:23:00.256272 containerd[1497]: time="2025-02-13T15:23:00.256241613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:23:00.796646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954308423.mount: Deactivated successfully. Feb 13 15:23:02.432597 containerd[1497]: time="2025-02-13T15:23:02.432542434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:02.433232 containerd[1497]: time="2025-02-13T15:23:02.433196471Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 15:23:02.434513 containerd[1497]: time="2025-02-13T15:23:02.434458157Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:02.438470 containerd[1497]: time="2025-02-13T15:23:02.438441287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:02.439616 containerd[1497]: time="2025-02-13T15:23:02.439577197Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.183308182s" Feb 13 15:23:02.439674 containerd[1497]: time="2025-02-13T15:23:02.439612423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 15:23:03.663082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
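The "Pulled image ... in <duration>" messages above give both a size and a wall-clock duration for each image, which is enough for a rough throughput estimate; note the duration covers unpacking as well as the download, so this is a lower bound on network speed, not a precise measurement. A small sketch using two of the figures from the log (the kube-apiserver and etcd pulls):

# Rough pull-throughput estimate from the "Pulled image ... in <duration>"
# messages above. Sizes and durations are copied from the journal; treating
# the whole duration as download time is a simplification (it also covers
# unpacking), so these figures are lower bounds.

def mib_per_s(size_bytes: int, seconds: float) -> float:
    return size_bytes / seconds / 2**20

pulls = {
    "kube-apiserver:v1.31.6": (27_973_388, 1.944279347),
    "etcd:3.5.15-0":          (56_909_194, 2.183308182),
}
for image, (size, dur) in pulls.items():
    print(f"{image}: {mib_per_s(size, dur):.1f} MiB/s")
# kube-apiserver ~13.7 MiB/s, etcd ~24.9 MiB/s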
Feb 13 15:23:03.676508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:03.832490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:03.836893 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:23:03.874257 kubelet[2111]: E0213 15:23:03.874190 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:23:03.878375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:23:03.878636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:23:04.389147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:04.401722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:04.428792 systemd[1]: Reloading requested from client PID 2127 ('systemctl') (unit session-7.scope)... Feb 13 15:23:04.428809 systemd[1]: Reloading... Feb 13 15:23:04.516357 zram_generator::config[2169]: No configuration found. Feb 13 15:23:04.891149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:23:04.972541 systemd[1]: Reloading finished in 543 ms. Feb 13 15:23:05.022728 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:23:05.022839 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:23:05.023149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:05.025614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:05.174000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:05.179312 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:23:05.214316 kubelet[2215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:23:05.214316 kubelet[2215]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:23:05.214316 kubelet[2215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:23:05.215586 kubelet[2215]: I0213 15:23:05.215543 2215 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:23:05.335583 kubelet[2215]: I0213 15:23:05.335536 2215 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:23:05.335583 kubelet[2215]: I0213 15:23:05.335568 2215 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:23:05.335896 kubelet[2215]: I0213 15:23:05.335872 2215 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:23:05.360689 kubelet[2215]: I0213 15:23:05.360597 2215 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:23:05.360689 kubelet[2215]: E0213 15:23:05.360635 2215 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:05.368522 kubelet[2215]: E0213 15:23:05.368477 2215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:23:05.368522 kubelet[2215]: I0213 15:23:05.368517 2215 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:23:05.379215 kubelet[2215]: I0213 15:23:05.379171 2215 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:23:05.380782 kubelet[2215]: I0213 15:23:05.380748 2215 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:23:05.380994 kubelet[2215]: I0213 15:23:05.380947 2215 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:23:05.381184 kubelet[2215]: I0213 15:23:05.380984 2215 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:23:05.381278 kubelet[2215]: I0213 15:23:05.381191 2215 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:23:05.381278 kubelet[2215]: I0213 15:23:05.381200 2215 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:23:05.381355 kubelet[2215]: I0213 15:23:05.381322 2215 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:23:05.383776 kubelet[2215]: I0213 15:23:05.383735 2215 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:23:05.383776 kubelet[2215]: I0213 15:23:05.383766 2215 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:23:05.383856 kubelet[2215]: I0213 15:23:05.383817 2215 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:23:05.383856 kubelet[2215]: I0213 15:23:05.383846 2215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:23:05.431263 kubelet[2215]: W0213 15:23:05.430216 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:05.431263 kubelet[2215]: E0213 15:23:05.430303 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:05.432268 kubelet[2215]: I0213 15:23:05.432242 2215 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:23:05.433990 kubelet[2215]: I0213 15:23:05.433969 2215 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:23:05.434364 kubelet[2215]: W0213 15:23:05.434306 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:05.434417 kubelet[2215]: E0213 15:23:05.434376 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:05.434870 kubelet[2215]: W0213 15:23:05.434843 2215 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:23:05.436106 kubelet[2215]: I0213 15:23:05.435657 2215 server.go:1269] "Started kubelet" Feb 13 15:23:05.436106 kubelet[2215]: I0213 15:23:05.435750 2215 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:23:05.436516 kubelet[2215]: I0213 15:23:05.436442 2215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:23:05.436986 kubelet[2215]: I0213 15:23:05.436967 2215 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:23:05.437070 kubelet[2215]: I0213 15:23:05.437049 2215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:23:05.437208 kubelet[2215]: I0213 15:23:05.436999 2215 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:23:05.438442 kubelet[2215]: I0213 15:23:05.438140 2215 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:23:05.439283 kubelet[2215]: I0213 15:23:05.439136 2215 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:23:05.439283 kubelet[2215]: I0213 15:23:05.439231 2215 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:23:05.439388 kubelet[2215]: I0213 15:23:05.439306 2215 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:23:05.439670 kubelet[2215]: W0213 15:23:05.439625 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:05.439706 kubelet[2215]: E0213 15:23:05.439678 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 
15:23:05.440333 kubelet[2215]: I0213 15:23:05.440293 2215 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:23:05.440398 kubelet[2215]: I0213 15:23:05.440386 2215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:23:05.442918 kubelet[2215]: E0213 15:23:05.442450 2215 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:23:05.444087 kubelet[2215]: E0213 15:23:05.444062 2215 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:23:05.446211 kubelet[2215]: E0213 15:23:05.445520 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="200ms" Feb 13 15:23:05.446211 kubelet[2215]: I0213 15:23:05.445760 2215 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:23:05.446551 kubelet[2215]: E0213 15:23:05.443749 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cdd295d3582d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:23:05.435625517 +0000 UTC m=+0.252342122,LastTimestamp:2025-02-13 15:23:05.435625517 +0000 UTC m=+0.252342122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:23:05.462875 kubelet[2215]: I0213 15:23:05.462850 2215 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:23:05.463243 kubelet[2215]: I0213 15:23:05.463015 2215 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:23:05.463243 kubelet[2215]: I0213 15:23:05.463036 2215 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:23:05.464113 kubelet[2215]: I0213 15:23:05.464074 2215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:23:05.465433 kubelet[2215]: I0213 15:23:05.465406 2215 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:23:05.465487 kubelet[2215]: I0213 15:23:05.465456 2215 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:23:05.465487 kubelet[2215]: I0213 15:23:05.465485 2215 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:23:05.465542 kubelet[2215]: E0213 15:23:05.465528 2215 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:23:05.466032 kubelet[2215]: W0213 15:23:05.465988 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:05.466075 kubelet[2215]: E0213 15:23:05.466029 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:05.544343 kubelet[2215]: E0213 15:23:05.544251 2215 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:23:05.566598 kubelet[2215]: E0213 15:23:05.566498 2215 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:23:05.645118 kubelet[2215]: E0213 15:23:05.645046 2215 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:23:05.646531 kubelet[2215]: E0213 15:23:05.646495 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="400ms" Feb 13 15:23:05.733033 kubelet[2215]: I0213 15:23:05.732908 2215 policy_none.go:49] "None policy: Start" Feb 13 15:23:05.734004 kubelet[2215]: I0213 15:23:05.733982 2215 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:23:05.734165 kubelet[2215]: I0213 15:23:05.734011 2215 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:23:05.745227 kubelet[2215]: E0213 15:23:05.745190 2215 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:23:05.745399 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:23:05.765109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:23:05.767623 kubelet[2215]: E0213 15:23:05.767541 2215 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:23:05.768615 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:23:05.778157 kubelet[2215]: I0213 15:23:05.778133 2215 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:23:05.778372 kubelet[2215]: I0213 15:23:05.778358 2215 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:23:05.778436 kubelet[2215]: I0213 15:23:05.778374 2215 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:23:05.778904 kubelet[2215]: I0213 15:23:05.778616 2215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:23:05.779618 kubelet[2215]: E0213 15:23:05.779600 2215 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:23:05.879953 kubelet[2215]: I0213 15:23:05.879883 2215 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:23:05.880450 kubelet[2215]: E0213 15:23:05.880417 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 15:23:05.935302 kubelet[2215]: E0213 15:23:05.935164 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cdd295d3582d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:23:05.435625517 +0000 UTC m=+0.252342122,LastTimestamp:2025-02-13 15:23:05.435625517 +0000 UTC m=+0.252342122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:23:06.047965 kubelet[2215]: E0213 15:23:06.047787 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="800ms" Feb 13 15:23:06.082298 kubelet[2215]: I0213 15:23:06.082257 2215 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:23:06.082790 kubelet[2215]: E0213 15:23:06.082747 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 15:23:06.177967 systemd[1]: Created slice kubepods-burstable-pod1125230356a8747ee798bb4b6ccbaf0c.slice - libcontainer container kubepods-burstable-pod1125230356a8747ee798bb4b6ccbaf0c.slice. Feb 13 15:23:06.192187 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 15:23:06.195822 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
Feb 13 15:23:06.244148 kubelet[2215]: I0213 15:23:06.244068 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1125230356a8747ee798bb4b6ccbaf0c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1125230356a8747ee798bb4b6ccbaf0c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:06.244148 kubelet[2215]: I0213 15:23:06.244122 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1125230356a8747ee798bb4b6ccbaf0c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1125230356a8747ee798bb4b6ccbaf0c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:06.244700 kubelet[2215]: I0213 15:23:06.244186 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:06.244700 kubelet[2215]: I0213 15:23:06.244205 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:06.244700 kubelet[2215]: I0213 15:23:06.244222 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:06.244700 kubelet[2215]: I0213 15:23:06.244239 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:06.244700 kubelet[2215]: I0213 15:23:06.244253 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1125230356a8747ee798bb4b6ccbaf0c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1125230356a8747ee798bb4b6ccbaf0c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:06.244862 kubelet[2215]: I0213 15:23:06.244273 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:23:06.244862 kubelet[2215]: I0213 15:23:06.244386 2215 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:06.268834 kubelet[2215]: W0213 15:23:06.268762 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:06.268834 kubelet[2215]: E0213 15:23:06.268824 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:06.484741 kubelet[2215]: I0213 15:23:06.484697 2215 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:23:06.485155 kubelet[2215]: E0213 15:23:06.485098 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 15:23:06.490300 kubelet[2215]: E0213 15:23:06.490281 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:06.490904 containerd[1497]: time="2025-02-13T15:23:06.490864547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1125230356a8747ee798bb4b6ccbaf0c,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:06.495141 kubelet[2215]: E0213 15:23:06.495108 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:06.495580 containerd[1497]: time="2025-02-13T15:23:06.495535156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:06.501802 kubelet[2215]: E0213 15:23:06.501771 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:06.502111 containerd[1497]: time="2025-02-13T15:23:06.502083537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:06.510696 kubelet[2215]: W0213 15:23:06.510652 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:06.510784 kubelet[2215]: E0213 15:23:06.510706 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:06.721641 kubelet[2215]: W0213 15:23:06.721541 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.18:6443: connect: connection refused Feb 13 15:23:06.721641 kubelet[2215]: E0213 15:23:06.721638 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:06.848504 kubelet[2215]: E0213 15:23:06.848361 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.18:6443: connect: connection refused" interval="1.6s" Feb 13 15:23:06.982946 kubelet[2215]: W0213 15:23:06.982897 2215 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.18:6443: connect: connection refused Feb 13 15:23:06.983065 kubelet[2215]: E0213 15:23:06.982952 2215 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:07.027718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457562321.mount: Deactivated successfully. Feb 13 15:23:07.033691 containerd[1497]: time="2025-02-13T15:23:07.033615208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:07.036501 containerd[1497]: time="2025-02-13T15:23:07.036443262Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:23:07.037417 containerd[1497]: time="2025-02-13T15:23:07.037382022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:07.039176 containerd[1497]: time="2025-02-13T15:23:07.039136202Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:07.040037 containerd[1497]: time="2025-02-13T15:23:07.039998669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:23:07.040958 containerd[1497]: time="2025-02-13T15:23:07.040925257Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:23:07.043421 containerd[1497]: time="2025-02-13T15:23:07.043299590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:23:07.044471 containerd[1497]: time="2025-02-13T15:23:07.044403710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 
13 15:23:07.045158 containerd[1497]: time="2025-02-13T15:23:07.045129381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.486323ms" Feb 13 15:23:07.050057 containerd[1497]: time="2025-02-13T15:23:07.050004614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.863941ms" Feb 13 15:23:07.052694 containerd[1497]: time="2025-02-13T15:23:07.052648643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.687334ms" Feb 13 15:23:07.286856 kubelet[2215]: I0213 15:23:07.286821 2215 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:23:07.287440 kubelet[2215]: E0213 15:23:07.287394 2215 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.18:6443/api/v1/nodes\": dial tcp 10.0.0.18:6443: connect: connection refused" node="localhost" Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.362116425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.364346848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.364363890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.364465030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.364234036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.364310350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:07.367493 containerd[1497]: time="2025-02-13T15:23:07.364367497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:07.369302 containerd[1497]: time="2025-02-13T15:23:07.367790135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:07.379694 containerd[1497]: time="2025-02-13T15:23:07.379386312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:07.379694 containerd[1497]: time="2025-02-13T15:23:07.379459650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:07.379694 containerd[1497]: time="2025-02-13T15:23:07.379472795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:07.379694 containerd[1497]: time="2025-02-13T15:23:07.379587379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:07.398583 systemd[1]: Started cri-containerd-5f3a7edbdb6a6b446dd9e6fda181421528d37ffbb81a3f4f521553282ec39dbb.scope - libcontainer container 5f3a7edbdb6a6b446dd9e6fda181421528d37ffbb81a3f4f521553282ec39dbb. Feb 13 15:23:07.402889 systemd[1]: Started cri-containerd-fb54d25e2dc7f80d43125e98ece0436b316472dd762f00386c87cc50ac613958.scope - libcontainer container fb54d25e2dc7f80d43125e98ece0436b316472dd762f00386c87cc50ac613958. Feb 13 15:23:07.412350 systemd[1]: Started cri-containerd-a5d008bf530f00f2e31afd9378e62475a800dcff6da82a47aa91bac5bbdfa65b.scope - libcontainer container a5d008bf530f00f2e31afd9378e62475a800dcff6da82a47aa91bac5bbdfa65b. Feb 13 15:23:07.515652 containerd[1497]: time="2025-02-13T15:23:07.515593660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1125230356a8747ee798bb4b6ccbaf0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb54d25e2dc7f80d43125e98ece0436b316472dd762f00386c87cc50ac613958\"" Feb 13 15:23:07.519223 kubelet[2215]: E0213 15:23:07.518794 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:07.520055 kubelet[2215]: E0213 15:23:07.520021 2215 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.18:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:23:07.525354 containerd[1497]: time="2025-02-13T15:23:07.525293470Z" level=info msg="CreateContainer within sandbox \"fb54d25e2dc7f80d43125e98ece0436b316472dd762f00386c87cc50ac613958\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:23:07.537891 containerd[1497]: time="2025-02-13T15:23:07.537765320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5d008bf530f00f2e31afd9378e62475a800dcff6da82a47aa91bac5bbdfa65b\"" Feb 13 15:23:07.539001 kubelet[2215]: E0213 15:23:07.538948 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:07.540975 containerd[1497]: time="2025-02-13T15:23:07.540928812Z" level=info msg="CreateContainer within sandbox \"a5d008bf530f00f2e31afd9378e62475a800dcff6da82a47aa91bac5bbdfa65b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:23:07.556126 containerd[1497]: time="2025-02-13T15:23:07.556090917Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f3a7edbdb6a6b446dd9e6fda181421528d37ffbb81a3f4f521553282ec39dbb\"" Feb 13 15:23:07.556770 kubelet[2215]: E0213 15:23:07.556749 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:07.558273 containerd[1497]: time="2025-02-13T15:23:07.558248994Z" level=info msg="CreateContainer within sandbox \"5f3a7edbdb6a6b446dd9e6fda181421528d37ffbb81a3f4f521553282ec39dbb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:23:07.820390 containerd[1497]: time="2025-02-13T15:23:07.820212479Z" level=info msg="CreateContainer within sandbox \"fb54d25e2dc7f80d43125e98ece0436b316472dd762f00386c87cc50ac613958\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"207939a3e475646165609d020eb130636148d25e31836d1f33b0b1fc9120205d\"" Feb 13 15:23:07.821096 containerd[1497]: time="2025-02-13T15:23:07.821061772Z" level=info msg="StartContainer for \"207939a3e475646165609d020eb130636148d25e31836d1f33b0b1fc9120205d\"" Feb 13 15:23:07.830254 containerd[1497]: time="2025-02-13T15:23:07.830201963Z" level=info msg="CreateContainer within sandbox \"5f3a7edbdb6a6b446dd9e6fda181421528d37ffbb81a3f4f521553282ec39dbb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da1a966d9f284b213e59a21227f4df9fb1065bfe4efbc0380cd4e0365715f719\"" Feb 13 15:23:07.831101 containerd[1497]: time="2025-02-13T15:23:07.831056295Z" level=info msg="StartContainer for \"da1a966d9f284b213e59a21227f4df9fb1065bfe4efbc0380cd4e0365715f719\"" Feb 13 15:23:07.831189 containerd[1497]: time="2025-02-13T15:23:07.831160711Z" level=info msg="CreateContainer within sandbox \"a5d008bf530f00f2e31afd9378e62475a800dcff6da82a47aa91bac5bbdfa65b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e6a272d17b12f90418b3d1260933a2496687675be87182db36411c32d72ed99\"" Feb 13 15:23:07.832187 containerd[1497]: time="2025-02-13T15:23:07.832156558Z" level=info msg="StartContainer for \"6e6a272d17b12f90418b3d1260933a2496687675be87182db36411c32d72ed99\"" Feb 13 15:23:07.853693 systemd[1]: Started cri-containerd-207939a3e475646165609d020eb130636148d25e31836d1f33b0b1fc9120205d.scope - libcontainer container 207939a3e475646165609d020eb130636148d25e31836d1f33b0b1fc9120205d. Feb 13 15:23:07.869544 systemd[1]: Started cri-containerd-6e6a272d17b12f90418b3d1260933a2496687675be87182db36411c32d72ed99.scope - libcontainer container 6e6a272d17b12f90418b3d1260933a2496687675be87182db36411c32d72ed99. Feb 13 15:23:07.871923 systemd[1]: Started cri-containerd-da1a966d9f284b213e59a21227f4df9fb1065bfe4efbc0380cd4e0365715f719.scope - libcontainer container da1a966d9f284b213e59a21227f4df9fb1065bfe4efbc0380cd4e0365715f719. 
Feb 13 15:23:07.928977 containerd[1497]: time="2025-02-13T15:23:07.928917690Z" level=info msg="StartContainer for \"6e6a272d17b12f90418b3d1260933a2496687675be87182db36411c32d72ed99\" returns successfully" Feb 13 15:23:07.929108 containerd[1497]: time="2025-02-13T15:23:07.929076317Z" level=info msg="StartContainer for \"207939a3e475646165609d020eb130636148d25e31836d1f33b0b1fc9120205d\" returns successfully" Feb 13 15:23:07.929108 containerd[1497]: time="2025-02-13T15:23:07.929101374Z" level=info msg="StartContainer for \"da1a966d9f284b213e59a21227f4df9fb1065bfe4efbc0380cd4e0365715f719\" returns successfully" Feb 13 15:23:08.476619 kubelet[2215]: E0213 15:23:08.476589 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:08.478689 kubelet[2215]: E0213 15:23:08.478664 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:08.481128 kubelet[2215]: E0213 15:23:08.481097 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:08.889651 kubelet[2215]: I0213 15:23:08.889509 2215 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:23:09.264102 kubelet[2215]: E0213 15:23:09.264063 2215 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:23:09.348562 kubelet[2215]: I0213 15:23:09.348523 2215 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:23:09.348562 kubelet[2215]: E0213 15:23:09.348559 2215 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 15:23:09.387017 kubelet[2215]: I0213 15:23:09.386992 2215 apiserver.go:52] "Watching apiserver" Feb 13 15:23:09.440075 kubelet[2215]: I0213 15:23:09.439989 2215 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:23:09.486233 kubelet[2215]: E0213 15:23:09.486197 2215 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:09.486679 kubelet[2215]: E0213 15:23:09.486359 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:11.090738 kubelet[2215]: E0213 15:23:11.090697 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:11.128407 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-7.scope)... Feb 13 15:23:11.128426 systemd[1]: Reloading... Feb 13 15:23:11.206387 zram_generator::config[2530]: No configuration found. Feb 13 15:23:11.316539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 15:23:11.406684 systemd[1]: Reloading finished in 277 ms. Feb 13 15:23:11.451835 kubelet[2215]: I0213 15:23:11.451738 2215 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:23:11.451911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:11.471165 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:23:11.471515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:11.479555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:23:11.629996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:23:11.647775 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:23:11.689079 kubelet[2575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:23:11.689079 kubelet[2575]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:23:11.689079 kubelet[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:23:11.689562 kubelet[2575]: I0213 15:23:11.689120 2575 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:23:11.694420 kubelet[2575]: I0213 15:23:11.694382 2575 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:23:11.694420 kubelet[2575]: I0213 15:23:11.694401 2575 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:23:11.694813 kubelet[2575]: I0213 15:23:11.694788 2575 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:23:11.696919 kubelet[2575]: I0213 15:23:11.696896 2575 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:23:11.698584 kubelet[2575]: I0213 15:23:11.698566 2575 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:23:11.701121 kubelet[2575]: E0213 15:23:11.701090 2575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:23:11.701121 kubelet[2575]: I0213 15:23:11.701118 2575 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:23:11.705487 kubelet[2575]: I0213 15:23:11.705465 2575 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:23:11.705618 kubelet[2575]: I0213 15:23:11.705593 2575 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:23:11.705789 kubelet[2575]: I0213 15:23:11.705728 2575 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:23:11.705983 kubelet[2575]: I0213 15:23:11.705779 2575 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:23:11.705983 kubelet[2575]: I0213 15:23:11.705976 2575 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:23:11.706115 kubelet[2575]: I0213 15:23:11.705986 2575 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:23:11.706115 kubelet[2575]: I0213 15:23:11.706021 2575 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:23:11.706174 kubelet[2575]: I0213 15:23:11.706139 2575 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:23:11.706174 kubelet[2575]: I0213 15:23:11.706153 2575 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:23:11.706242 kubelet[2575]: I0213 15:23:11.706191 2575 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:23:11.706242 kubelet[2575]: I0213 15:23:11.706207 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:23:11.707469 kubelet[2575]: I0213 15:23:11.707443 2575 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:23:11.707897 kubelet[2575]: I0213 15:23:11.707872 2575 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:23:11.711888 kubelet[2575]: I0213 15:23:11.709308 2575 server.go:1269] "Started kubelet" Feb 13 15:23:11.711888 kubelet[2575]: I0213 15:23:11.711668 2575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 
15:23:11.712119 kubelet[2575]: I0213 15:23:11.712002 2575 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:23:11.712119 kubelet[2575]: I0213 15:23:11.712054 2575 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:23:11.718664 kubelet[2575]: I0213 15:23:11.718340 2575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:23:11.720115 kubelet[2575]: I0213 15:23:11.720048 2575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:23:11.720306 kubelet[2575]: I0213 15:23:11.720284 2575 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:23:11.721300 kubelet[2575]: E0213 15:23:11.720419 2575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:23:11.721300 kubelet[2575]: I0213 15:23:11.721174 2575 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:23:11.721667 kubelet[2575]: I0213 15:23:11.721616 2575 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:23:11.721879 kubelet[2575]: I0213 15:23:11.721814 2575 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:23:11.722524 kubelet[2575]: I0213 15:23:11.722221 2575 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:23:11.724565 kubelet[2575]: I0213 15:23:11.724528 2575 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:23:11.724791 kubelet[2575]: E0213 15:23:11.724763 2575 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:23:11.727044 kubelet[2575]: I0213 15:23:11.727017 2575 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:23:11.737072 kubelet[2575]: I0213 15:23:11.736993 2575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:23:11.738412 kubelet[2575]: I0213 15:23:11.738388 2575 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:23:11.738471 kubelet[2575]: I0213 15:23:11.738424 2575 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:23:11.738471 kubelet[2575]: I0213 15:23:11.738453 2575 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:23:11.738840 kubelet[2575]: E0213 15:23:11.738505 2575 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:23:11.838749 kubelet[2575]: E0213 15:23:11.838656 2575 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:23:11.949148 kubelet[2575]: I0213 15:23:11.948763 2575 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:23:11.949148 kubelet[2575]: I0213 15:23:11.948790 2575 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:23:11.949148 kubelet[2575]: I0213 15:23:11.948816 2575 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:23:11.949148 kubelet[2575]: I0213 15:23:11.949036 2575 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:23:11.950681 kubelet[2575]: I0213 15:23:11.950642 2575 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:23:11.950762 kubelet[2575]: I0213 15:23:11.950751 2575 policy_none.go:49] "None policy: Start" Feb 13 15:23:11.952697 kubelet[2575]: I0213 15:23:11.952655 2575 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:23:11.952755 kubelet[2575]: I0213 15:23:11.952702 2575 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:23:11.952934 kubelet[2575]: I0213 15:23:11.952911 2575 state_mem.go:75] "Updated machine memory state" Feb 13 15:23:11.957859 kubelet[2575]: I0213 15:23:11.957788 2575 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:23:11.958062 kubelet[2575]: I0213 15:23:11.958023 2575 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:23:11.958062 kubelet[2575]: I0213 15:23:11.958037 2575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:23:11.958348 kubelet[2575]: I0213 15:23:11.958267 2575 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:23:12.047615 kubelet[2575]: E0213 15:23:12.047560 2575 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:12.066248 kubelet[2575]: I0213 15:23:12.066217 2575 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:23:12.074531 kubelet[2575]: I0213 15:23:12.074497 2575 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 15:23:12.074678 kubelet[2575]: I0213 15:23:12.074602 2575 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:23:12.122370 sudo[2610]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:23:12.122958 sudo[2610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:23:12.123889 kubelet[2575]: I0213 15:23:12.123856 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1125230356a8747ee798bb4b6ccbaf0c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"1125230356a8747ee798bb4b6ccbaf0c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:12.123975 kubelet[2575]: I0213 15:23:12.123908 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:12.123975 kubelet[2575]: I0213 15:23:12.123931 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:12.123975 kubelet[2575]: I0213 15:23:12.123951 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1125230356a8747ee798bb4b6ccbaf0c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1125230356a8747ee798bb4b6ccbaf0c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:12.123975 kubelet[2575]: I0213 15:23:12.123971 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1125230356a8747ee798bb4b6ccbaf0c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1125230356a8747ee798bb4b6ccbaf0c\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:23:12.124108 kubelet[2575]: I0213 15:23:12.124001 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:12.124108 kubelet[2575]: I0213 15:23:12.124035 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:12.124108 kubelet[2575]: I0213 15:23:12.124057 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:23:12.124108 kubelet[2575]: I0213 15:23:12.124077 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:23:12.348470 kubelet[2575]: E0213 15:23:12.348229 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:12.348470 
kubelet[2575]: E0213 15:23:12.348270 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:12.348470 kubelet[2575]: E0213 15:23:12.348363 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:12.711755 kubelet[2575]: I0213 15:23:12.711706 2575 apiserver.go:52] "Watching apiserver" Feb 13 15:23:12.717694 sudo[2610]: pam_unix(sudo:session): session closed for user root Feb 13 15:23:12.722168 kubelet[2575]: I0213 15:23:12.722124 2575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:23:12.752266 kubelet[2575]: E0213 15:23:12.751926 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:12.753272 kubelet[2575]: E0213 15:23:12.752861 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:12.753584 kubelet[2575]: E0213 15:23:12.753560 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:12.783985 kubelet[2575]: I0213 15:23:12.783863 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.783828798 podStartE2EDuration="1.783828798s" podCreationTimestamp="2025-02-13 15:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:12.776722947 +0000 UTC m=+1.123863648" watchObservedRunningTime="2025-02-13 15:23:12.783828798 +0000 UTC m=+1.130969499" Feb 13 15:23:12.783985 kubelet[2575]: I0213 15:23:12.784002 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.783997892 podStartE2EDuration="783.997892ms" podCreationTimestamp="2025-02-13 15:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:12.783317957 +0000 UTC m=+1.130458658" watchObservedRunningTime="2025-02-13 15:23:12.783997892 +0000 UTC m=+1.131138593" Feb 13 15:23:12.791236 kubelet[2575]: I0213 15:23:12.791182 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.791163426 podStartE2EDuration="791.163426ms" podCreationTimestamp="2025-02-13 15:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:12.790794348 +0000 UTC m=+1.137935049" watchObservedRunningTime="2025-02-13 15:23:12.791163426 +0000 UTC m=+1.138304127" Feb 13 15:23:13.754454 kubelet[2575]: E0213 15:23:13.754396 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:14.170832 sudo[1677]: pam_unix(sudo:session): session closed for user root Feb 13 15:23:14.172257 
sshd[1676]: Connection closed by 10.0.0.1 port 47288 Feb 13 15:23:14.173129 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:14.177304 systemd[1]: sshd@6-10.0.0.18:22-10.0.0.1:47288.service: Deactivated successfully. Feb 13 15:23:14.179147 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:23:14.179350 systemd[1]: session-7.scope: Consumed 4.493s CPU time, 147.2M memory peak, 0B memory swap peak. Feb 13 15:23:14.179919 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:23:14.180891 systemd-logind[1481]: Removed session 7. Feb 13 15:23:15.307736 kubelet[2575]: E0213 15:23:15.307695 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:16.721353 kubelet[2575]: I0213 15:23:16.718375 2575 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:23:16.721954 containerd[1497]: time="2025-02-13T15:23:16.718692754Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:23:16.722254 kubelet[2575]: I0213 15:23:16.721691 2575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:23:17.628685 systemd[1]: Created slice kubepods-besteffort-pod5f74894a_71f3_4f6e_be7c_135990db1d7d.slice - libcontainer container kubepods-besteffort-pod5f74894a_71f3_4f6e_be7c_135990db1d7d.slice. Feb 13 15:23:17.645816 systemd[1]: Created slice kubepods-burstable-pod92e54049_c99a_400e_a038_b01188795403.slice - libcontainer container kubepods-burstable-pod92e54049_c99a_400e_a038_b01188795403.slice. Feb 13 15:23:17.674269 systemd[1]: Created slice kubepods-besteffort-podfbbadbe0_72ff_4ced_a244_75c6b4bb700e.slice - libcontainer container kubepods-besteffort-podfbbadbe0_72ff_4ced_a244_75c6b4bb700e.slice. 
Feb 13 15:23:17.781634 kubelet[2575]: I0213 15:23:17.781578 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-bpf-maps\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.781634 kubelet[2575]: I0213 15:23:17.781619 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-hostproc\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.781634 kubelet[2575]: I0213 15:23:17.781646 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92e54049-c99a-400e-a038-b01188795403-cilium-config-path\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782206 kubelet[2575]: I0213 15:23:17.781668 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-net\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782206 kubelet[2575]: I0213 15:23:17.781702 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f74894a-71f3-4f6e-be7c-135990db1d7d-xtables-lock\") pod \"kube-proxy-nzt7h\" (UID: \"5f74894a-71f3-4f6e-be7c-135990db1d7d\") " pod="kube-system/kube-proxy-nzt7h" Feb 13 15:23:17.782206 kubelet[2575]: I0213 15:23:17.781754 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f74894a-71f3-4f6e-be7c-135990db1d7d-kube-proxy\") pod \"kube-proxy-nzt7h\" (UID: \"5f74894a-71f3-4f6e-be7c-135990db1d7d\") " pod="kube-system/kube-proxy-nzt7h" Feb 13 15:23:17.782206 kubelet[2575]: I0213 15:23:17.781774 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-run\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782206 kubelet[2575]: I0213 15:23:17.781792 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-etc-cni-netd\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782206 kubelet[2575]: I0213 15:23:17.781805 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-xtables-lock\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782620 kubelet[2575]: I0213 15:23:17.781823 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/92e54049-c99a-400e-a038-b01188795403-clustermesh-secrets\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782620 kubelet[2575]: I0213 15:23:17.781837 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-kernel\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782620 kubelet[2575]: I0213 15:23:17.781862 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cni-path\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782620 kubelet[2575]: I0213 15:23:17.781880 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc8h4\" (UniqueName: \"kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-kube-api-access-hc8h4\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782620 kubelet[2575]: I0213 15:23:17.781894 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqfg\" (UniqueName: \"kubernetes.io/projected/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-kube-api-access-ngqfg\") pod \"cilium-operator-5d85765b45-pc5vl\" (UID: \"fbbadbe0-72ff-4ced-a244-75c6b4bb700e\") " pod="kube-system/cilium-operator-5d85765b45-pc5vl" Feb 13 15:23:17.782779 kubelet[2575]: I0213 15:23:17.781940 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f74894a-71f3-4f6e-be7c-135990db1d7d-lib-modules\") pod \"kube-proxy-nzt7h\" (UID: \"5f74894a-71f3-4f6e-be7c-135990db1d7d\") " pod="kube-system/kube-proxy-nzt7h" Feb 13 15:23:17.782779 kubelet[2575]: I0213 15:23:17.781964 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9ps\" (UniqueName: \"kubernetes.io/projected/5f74894a-71f3-4f6e-be7c-135990db1d7d-kube-api-access-4f9ps\") pod \"kube-proxy-nzt7h\" (UID: \"5f74894a-71f3-4f6e-be7c-135990db1d7d\") " pod="kube-system/kube-proxy-nzt7h" Feb 13 15:23:17.782779 kubelet[2575]: I0213 15:23:17.781978 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-cgroup\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782779 kubelet[2575]: I0213 15:23:17.781990 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-lib-modules\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.782779 kubelet[2575]: I0213 15:23:17.782005 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-cilium-config-path\") pod 
\"cilium-operator-5d85765b45-pc5vl\" (UID: \"fbbadbe0-72ff-4ced-a244-75c6b4bb700e\") " pod="kube-system/cilium-operator-5d85765b45-pc5vl" Feb 13 15:23:17.782937 kubelet[2575]: I0213 15:23:17.782019 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-hubble-tls\") pod \"cilium-g84fr\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " pod="kube-system/cilium-g84fr" Feb 13 15:23:17.940971 kubelet[2575]: E0213 15:23:17.940936 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:17.941719 containerd[1497]: time="2025-02-13T15:23:17.941670621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nzt7h,Uid:5f74894a-71f3-4f6e-be7c-135990db1d7d,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:17.949005 kubelet[2575]: E0213 15:23:17.948971 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:17.949300 containerd[1497]: time="2025-02-13T15:23:17.949266870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g84fr,Uid:92e54049-c99a-400e-a038-b01188795403,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:17.977854 kubelet[2575]: E0213 15:23:17.977825 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:17.978108 containerd[1497]: time="2025-02-13T15:23:17.978083412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pc5vl,Uid:fbbadbe0-72ff-4ced-a244-75c6b4bb700e,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:18.110928 containerd[1497]: time="2025-02-13T15:23:18.110695784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:18.110928 containerd[1497]: time="2025-02-13T15:23:18.110762451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:18.110928 containerd[1497]: time="2025-02-13T15:23:18.110775706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:18.110928 containerd[1497]: time="2025-02-13T15:23:18.110851400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:18.131426 containerd[1497]: time="2025-02-13T15:23:18.121853032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:18.131426 containerd[1497]: time="2025-02-13T15:23:18.121903508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:18.131426 containerd[1497]: time="2025-02-13T15:23:18.121913197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:18.131426 containerd[1497]: time="2025-02-13T15:23:18.121975716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:18.354812 systemd[1]: Started cri-containerd-b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9.scope - libcontainer container b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9. Feb 13 15:23:18.356991 containerd[1497]: time="2025-02-13T15:23:18.356657461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:18.356991 containerd[1497]: time="2025-02-13T15:23:18.356709932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:18.356991 containerd[1497]: time="2025-02-13T15:23:18.356724679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:18.356991 containerd[1497]: time="2025-02-13T15:23:18.356810714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:18.365820 systemd[1]: Started cri-containerd-66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810.scope - libcontainer container 66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810. Feb 13 15:23:18.383518 systemd[1]: Started cri-containerd-7829012ed97e3bd37236d4136e37354fa8231332dbdd4efd9761b2c2ef0e19a8.scope - libcontainer container 7829012ed97e3bd37236d4136e37354fa8231332dbdd4efd9761b2c2ef0e19a8. Feb 13 15:23:18.403636 containerd[1497]: time="2025-02-13T15:23:18.403535309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g84fr,Uid:92e54049-c99a-400e-a038-b01188795403,Namespace:kube-system,Attempt:0,} returns sandbox id \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\"" Feb 13 15:23:18.405825 kubelet[2575]: E0213 15:23:18.405583 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:18.407285 containerd[1497]: time="2025-02-13T15:23:18.407189561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pc5vl,Uid:fbbadbe0-72ff-4ced-a244-75c6b4bb700e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9\"" Feb 13 15:23:18.407559 containerd[1497]: time="2025-02-13T15:23:18.407492769Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:23:18.409242 kubelet[2575]: E0213 15:23:18.409127 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:18.417598 containerd[1497]: time="2025-02-13T15:23:18.417551163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nzt7h,Uid:5f74894a-71f3-4f6e-be7c-135990db1d7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7829012ed97e3bd37236d4136e37354fa8231332dbdd4efd9761b2c2ef0e19a8\"" Feb 13 15:23:18.418288 kubelet[2575]: E0213 15:23:18.418250 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:18.421422 containerd[1497]: time="2025-02-13T15:23:18.421387013Z" level=info msg="CreateContainer 
within sandbox \"7829012ed97e3bd37236d4136e37354fa8231332dbdd4efd9761b2c2ef0e19a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:23:18.441801 containerd[1497]: time="2025-02-13T15:23:18.441753096Z" level=info msg="CreateContainer within sandbox \"7829012ed97e3bd37236d4136e37354fa8231332dbdd4efd9761b2c2ef0e19a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"665d804bf5a88a9061b977495ba0b500f19f17d45b6765504d660d6973cc2ace\"" Feb 13 15:23:18.442305 containerd[1497]: time="2025-02-13T15:23:18.442281213Z" level=info msg="StartContainer for \"665d804bf5a88a9061b977495ba0b500f19f17d45b6765504d660d6973cc2ace\"" Feb 13 15:23:18.473482 systemd[1]: Started cri-containerd-665d804bf5a88a9061b977495ba0b500f19f17d45b6765504d660d6973cc2ace.scope - libcontainer container 665d804bf5a88a9061b977495ba0b500f19f17d45b6765504d660d6973cc2ace. Feb 13 15:23:18.505286 containerd[1497]: time="2025-02-13T15:23:18.505235691Z" level=info msg="StartContainer for \"665d804bf5a88a9061b977495ba0b500f19f17d45b6765504d660d6973cc2ace\" returns successfully" Feb 13 15:23:18.763244 kubelet[2575]: E0213 15:23:18.763211 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:18.773007 kubelet[2575]: I0213 15:23:18.772901 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nzt7h" podStartSLOduration=1.772881945 podStartE2EDuration="1.772881945s" podCreationTimestamp="2025-02-13 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:18.772801912 +0000 UTC m=+7.119942633" watchObservedRunningTime="2025-02-13 15:23:18.772881945 +0000 UTC m=+7.120022646" Feb 13 15:23:18.823961 kubelet[2575]: E0213 15:23:18.823926 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:19.766997 kubelet[2575]: E0213 15:23:19.766908 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:21.660976 kubelet[2575]: E0213 15:23:21.660935 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:21.770393 kubelet[2575]: E0213 15:23:21.769867 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:24.131626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206687116.mount: Deactivated successfully. Feb 13 15:23:25.328058 kubelet[2575]: E0213 15:23:25.325357 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:25.383585 update_engine[1485]: I20250213 15:23:25.376397 1485 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:23:25.461671 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2963) Feb 13 15:23:25.579491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2964) Feb 13 15:23:25.796586 kubelet[2575]: E0213 15:23:25.792607 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:28.533792 containerd[1497]: time="2025-02-13T15:23:28.533725260Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:28.534912 containerd[1497]: time="2025-02-13T15:23:28.534857151Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:23:28.536179 containerd[1497]: time="2025-02-13T15:23:28.536142401Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:28.538101 containerd[1497]: time="2025-02-13T15:23:28.538061670Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.130524207s" Feb 13 15:23:28.538101 containerd[1497]: time="2025-02-13T15:23:28.538098390Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:23:28.543259 containerd[1497]: time="2025-02-13T15:23:28.543221417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:23:28.560176 containerd[1497]: time="2025-02-13T15:23:28.560114143Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:23:28.574756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659664088.mount: Deactivated successfully. Feb 13 15:23:28.575856 containerd[1497]: time="2025-02-13T15:23:28.575553560Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\"" Feb 13 15:23:28.578553 containerd[1497]: time="2025-02-13T15:23:28.578518557Z" level=info msg="StartContainer for \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\"" Feb 13 15:23:28.621626 systemd[1]: Started cri-containerd-05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7.scope - libcontainer container 05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7. 
Feb 13 15:23:28.657069 containerd[1497]: time="2025-02-13T15:23:28.656993269Z" level=info msg="StartContainer for \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\" returns successfully" Feb 13 15:23:28.667284 systemd[1]: cri-containerd-05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7.scope: Deactivated successfully. Feb 13 15:23:28.807431 kubelet[2575]: E0213 15:23:28.807254 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:28.930106 containerd[1497]: time="2025-02-13T15:23:28.930032978Z" level=info msg="shim disconnected" id=05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7 namespace=k8s.io Feb 13 15:23:28.930106 containerd[1497]: time="2025-02-13T15:23:28.930093292Z" level=warning msg="cleaning up after shim disconnected" id=05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7 namespace=k8s.io Feb 13 15:23:28.930106 containerd[1497]: time="2025-02-13T15:23:28.930102270Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:29.571067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7-rootfs.mount: Deactivated successfully. Feb 13 15:23:29.809653 kubelet[2575]: E0213 15:23:29.809617 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:29.811529 containerd[1497]: time="2025-02-13T15:23:29.811489494Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:23:29.901866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403705933.mount: Deactivated successfully. Feb 13 15:23:29.964389 containerd[1497]: time="2025-02-13T15:23:29.964344905Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\"" Feb 13 15:23:29.964697 containerd[1497]: time="2025-02-13T15:23:29.964676392Z" level=info msg="StartContainer for \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\"" Feb 13 15:23:29.991451 systemd[1]: Started cri-containerd-b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c.scope - libcontainer container b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c. Feb 13 15:23:30.017252 containerd[1497]: time="2025-02-13T15:23:30.017214869Z" level=info msg="StartContainer for \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\" returns successfully" Feb 13 15:23:30.027600 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:23:30.028080 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:23:30.028167 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:23:30.033895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:23:30.034387 systemd[1]: cri-containerd-b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c.scope: Deactivated successfully. Feb 13 15:23:30.048763 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:23:30.068882 containerd[1497]: time="2025-02-13T15:23:30.068820201Z" level=info msg="shim disconnected" id=b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c namespace=k8s.io Feb 13 15:23:30.068882 containerd[1497]: time="2025-02-13T15:23:30.068877779Z" level=warning msg="cleaning up after shim disconnected" id=b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c namespace=k8s.io Feb 13 15:23:30.068882 containerd[1497]: time="2025-02-13T15:23:30.068888059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:30.572381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c-rootfs.mount: Deactivated successfully. Feb 13 15:23:30.574893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369819531.mount: Deactivated successfully. Feb 13 15:23:30.812460 kubelet[2575]: E0213 15:23:30.812429 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:30.815625 containerd[1497]: time="2025-02-13T15:23:30.815581821Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:23:30.840097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485518786.mount: Deactivated successfully. Feb 13 15:23:30.856881 containerd[1497]: time="2025-02-13T15:23:30.856840135Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\"" Feb 13 15:23:30.857394 containerd[1497]: time="2025-02-13T15:23:30.857366991Z" level=info msg="StartContainer for \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\"" Feb 13 15:23:30.890457 systemd[1]: Started cri-containerd-1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5.scope - libcontainer container 1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5. Feb 13 15:23:30.925022 systemd[1]: cri-containerd-1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5.scope: Deactivated successfully. 
Feb 13 15:23:30.925484 containerd[1497]: time="2025-02-13T15:23:30.925443088Z" level=info msg="StartContainer for \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\" returns successfully" Feb 13 15:23:31.007485 containerd[1497]: time="2025-02-13T15:23:31.007396399Z" level=info msg="shim disconnected" id=1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5 namespace=k8s.io Feb 13 15:23:31.007485 containerd[1497]: time="2025-02-13T15:23:31.007480047Z" level=warning msg="cleaning up after shim disconnected" id=1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5 namespace=k8s.io Feb 13 15:23:31.007485 containerd[1497]: time="2025-02-13T15:23:31.007492460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:31.541123 containerd[1497]: time="2025-02-13T15:23:31.541048338Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:31.547692 containerd[1497]: time="2025-02-13T15:23:31.547576875Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:23:31.555918 containerd[1497]: time="2025-02-13T15:23:31.555848284Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:23:31.557201 containerd[1497]: time="2025-02-13T15:23:31.557154000Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.013724289s" Feb 13 15:23:31.557201 containerd[1497]: time="2025-02-13T15:23:31.557197503Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:23:31.562967 containerd[1497]: time="2025-02-13T15:23:31.562563485Z" level=info msg="CreateContainer within sandbox \"b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:23:31.571202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5-rootfs.mount: Deactivated successfully. 
Feb 13 15:23:31.605068 containerd[1497]: time="2025-02-13T15:23:31.605003926Z" level=info msg="CreateContainer within sandbox \"b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\"" Feb 13 15:23:31.605610 containerd[1497]: time="2025-02-13T15:23:31.605478952Z" level=info msg="StartContainer for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\"" Feb 13 15:23:31.637571 systemd[1]: Started cri-containerd-2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73.scope - libcontainer container 2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73. Feb 13 15:23:31.674728 containerd[1497]: time="2025-02-13T15:23:31.674657403Z" level=info msg="StartContainer for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" returns successfully" Feb 13 15:23:31.817016 kubelet[2575]: E0213 15:23:31.816045 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:31.818280 kubelet[2575]: E0213 15:23:31.818234 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:31.820588 containerd[1497]: time="2025-02-13T15:23:31.820543902Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:23:31.831275 kubelet[2575]: I0213 15:23:31.831214 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pc5vl" podStartSLOduration=1.6835389790000002 podStartE2EDuration="14.831196357s" podCreationTimestamp="2025-02-13 15:23:17 +0000 UTC" firstStartedPulling="2025-02-13 15:23:18.410397644 +0000 UTC m=+6.757538345" lastFinishedPulling="2025-02-13 15:23:31.558055032 +0000 UTC m=+19.905195723" observedRunningTime="2025-02-13 15:23:31.830200887 +0000 UTC m=+20.177341588" watchObservedRunningTime="2025-02-13 15:23:31.831196357 +0000 UTC m=+20.178337058" Feb 13 15:23:31.843199 containerd[1497]: time="2025-02-13T15:23:31.843137065Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\"" Feb 13 15:23:31.844351 containerd[1497]: time="2025-02-13T15:23:31.844297255Z" level=info msg="StartContainer for \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\"" Feb 13 15:23:31.882451 systemd[1]: Started cri-containerd-eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c.scope - libcontainer container eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c. Feb 13 15:23:31.908209 systemd[1]: cri-containerd-eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c.scope: Deactivated successfully. 
Feb 13 15:23:31.911065 containerd[1497]: time="2025-02-13T15:23:31.911004610Z" level=info msg="StartContainer for \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\" returns successfully" Feb 13 15:23:32.347957 containerd[1497]: time="2025-02-13T15:23:32.347893339Z" level=info msg="shim disconnected" id=eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c namespace=k8s.io Feb 13 15:23:32.347957 containerd[1497]: time="2025-02-13T15:23:32.347949475Z" level=warning msg="cleaning up after shim disconnected" id=eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c namespace=k8s.io Feb 13 15:23:32.347957 containerd[1497]: time="2025-02-13T15:23:32.347959504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:32.371489 containerd[1497]: time="2025-02-13T15:23:32.371423822Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:23:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:23:32.833498 kubelet[2575]: E0213 15:23:32.833454 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:32.833920 kubelet[2575]: E0213 15:23:32.833463 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:32.836485 containerd[1497]: time="2025-02-13T15:23:32.836437115Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:23:33.093787 containerd[1497]: time="2025-02-13T15:23:33.093653740Z" level=info msg="CreateContainer within sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\"" Feb 13 15:23:33.095310 containerd[1497]: time="2025-02-13T15:23:33.094205441Z" level=info msg="StartContainer for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\"" Feb 13 15:23:33.154475 systemd[1]: Started cri-containerd-1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1.scope - libcontainer container 1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1. Feb 13 15:23:33.185705 containerd[1497]: time="2025-02-13T15:23:33.185651115Z" level=info msg="StartContainer for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" returns successfully" Feb 13 15:23:33.355541 kubelet[2575]: I0213 15:23:33.355407 2575 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:23:33.400386 systemd[1]: Created slice kubepods-burstable-pod778b7fde_e23d_4aa4_887b_31b3bab30511.slice - libcontainer container kubepods-burstable-pod778b7fde_e23d_4aa4_887b_31b3bab30511.slice. Feb 13 15:23:33.406421 systemd[1]: Created slice kubepods-burstable-pod84d3bd57_bfa6_4840_9d4b_2f031742cd0c.slice - libcontainer container kubepods-burstable-pod84d3bd57_bfa6_4840_9d4b_2f031742cd0c.slice. 
Feb 13 15:23:33.495798 kubelet[2575]: I0213 15:23:33.495738 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d3bd57-bfa6-4840-9d4b-2f031742cd0c-config-volume\") pod \"coredns-6f6b679f8f-jhw79\" (UID: \"84d3bd57-bfa6-4840-9d4b-2f031742cd0c\") " pod="kube-system/coredns-6f6b679f8f-jhw79" Feb 13 15:23:33.495798 kubelet[2575]: I0213 15:23:33.495786 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjpxb\" (UniqueName: \"kubernetes.io/projected/778b7fde-e23d-4aa4-887b-31b3bab30511-kube-api-access-wjpxb\") pod \"coredns-6f6b679f8f-qqvq4\" (UID: \"778b7fde-e23d-4aa4-887b-31b3bab30511\") " pod="kube-system/coredns-6f6b679f8f-qqvq4" Feb 13 15:23:33.495798 kubelet[2575]: I0213 15:23:33.495808 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmldf\" (UniqueName: \"kubernetes.io/projected/84d3bd57-bfa6-4840-9d4b-2f031742cd0c-kube-api-access-vmldf\") pod \"coredns-6f6b679f8f-jhw79\" (UID: \"84d3bd57-bfa6-4840-9d4b-2f031742cd0c\") " pod="kube-system/coredns-6f6b679f8f-jhw79" Feb 13 15:23:33.496030 kubelet[2575]: I0213 15:23:33.495826 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/778b7fde-e23d-4aa4-887b-31b3bab30511-config-volume\") pod \"coredns-6f6b679f8f-qqvq4\" (UID: \"778b7fde-e23d-4aa4-887b-31b3bab30511\") " pod="kube-system/coredns-6f6b679f8f-qqvq4" Feb 13 15:23:33.703361 kubelet[2575]: E0213 15:23:33.703307 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:33.708906 kubelet[2575]: E0213 15:23:33.708858 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:33.709509 containerd[1497]: time="2025-02-13T15:23:33.709454114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jhw79,Uid:84d3bd57-bfa6-4840-9d4b-2f031742cd0c,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:33.716377 containerd[1497]: time="2025-02-13T15:23:33.716309259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqvq4,Uid:778b7fde-e23d-4aa4-887b-31b3bab30511,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:33.837903 kubelet[2575]: E0213 15:23:33.837861 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:34.839828 kubelet[2575]: E0213 15:23:34.839786 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:35.440641 systemd-networkd[1409]: cilium_host: Link UP Feb 13 15:23:35.440804 systemd-networkd[1409]: cilium_net: Link UP Feb 13 15:23:35.440809 systemd-networkd[1409]: cilium_net: Gained carrier Feb 13 15:23:35.440996 systemd-networkd[1409]: cilium_host: Gained carrier Feb 13 15:23:35.443545 systemd-networkd[1409]: cilium_host: Gained IPv6LL Feb 13 15:23:35.542936 systemd-networkd[1409]: cilium_vxlan: Link UP Feb 13 15:23:35.542948 systemd-networkd[1409]: cilium_vxlan: Gained carrier Feb 
13 15:23:35.750454 systemd-networkd[1409]: cilium_net: Gained IPv6LL Feb 13 15:23:35.760348 kernel: NET: Registered PF_ALG protocol family Feb 13 15:23:35.841279 kubelet[2575]: E0213 15:23:35.841248 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:36.407018 systemd-networkd[1409]: lxc_health: Link UP Feb 13 15:23:36.419588 systemd-networkd[1409]: lxc_health: Gained carrier Feb 13 15:23:36.630495 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Feb 13 15:23:36.802915 systemd-networkd[1409]: lxcbe77c22ac8be: Link UP Feb 13 15:23:36.816363 kernel: eth0: renamed from tmp5b1b6 Feb 13 15:23:36.828533 systemd-networkd[1409]: lxc13555b3188cc: Link UP Feb 13 15:23:36.830619 systemd-networkd[1409]: lxcbe77c22ac8be: Gained carrier Feb 13 15:23:36.834441 kernel: eth0: renamed from tmpea2b7 Feb 13 15:23:36.842172 systemd-networkd[1409]: lxc13555b3188cc: Gained carrier Feb 13 15:23:37.717575 systemd-networkd[1409]: lxc_health: Gained IPv6LL Feb 13 15:23:37.952217 kubelet[2575]: E0213 15:23:37.951926 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:37.970845 kubelet[2575]: I0213 15:23:37.969344 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g84fr" podStartSLOduration=10.83252506 podStartE2EDuration="20.969303271s" podCreationTimestamp="2025-02-13 15:23:17 +0000 UTC" firstStartedPulling="2025-02-13 15:23:18.406244501 +0000 UTC m=+6.753385202" lastFinishedPulling="2025-02-13 15:23:28.543022712 +0000 UTC m=+16.890163413" observedRunningTime="2025-02-13 15:23:33.862719346 +0000 UTC m=+22.209860047" watchObservedRunningTime="2025-02-13 15:23:37.969303271 +0000 UTC m=+26.316443982" Feb 13 15:23:38.293557 systemd-networkd[1409]: lxcbe77c22ac8be: Gained IPv6LL Feb 13 15:23:38.805525 systemd-networkd[1409]: lxc13555b3188cc: Gained IPv6LL Feb 13 15:23:38.849959 kubelet[2575]: E0213 15:23:38.849921 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:39.851247 kubelet[2575]: E0213 15:23:39.851205 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:40.334101 containerd[1497]: time="2025-02-13T15:23:40.334014852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:40.334101 containerd[1497]: time="2025-02-13T15:23:40.334072841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:40.334101 containerd[1497]: time="2025-02-13T15:23:40.334086627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:40.334558 containerd[1497]: time="2025-02-13T15:23:40.334179992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:40.339432 containerd[1497]: time="2025-02-13T15:23:40.339212958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:40.339432 containerd[1497]: time="2025-02-13T15:23:40.339271578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:40.339817 containerd[1497]: time="2025-02-13T15:23:40.339286496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:40.339817 containerd[1497]: time="2025-02-13T15:23:40.339748346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:40.359473 systemd[1]: Started cri-containerd-5b1b688faf58bde42009820c31eb8fc737e9a704cf8f1884eb0b41119270b56f.scope - libcontainer container 5b1b688faf58bde42009820c31eb8fc737e9a704cf8f1884eb0b41119270b56f. Feb 13 15:23:40.364517 systemd[1]: Started cri-containerd-ea2b714dd9fdb663dc8fe2b835d4f08560a82adc061d15a9e657eaba1006d181.scope - libcontainer container ea2b714dd9fdb663dc8fe2b835d4f08560a82adc061d15a9e657eaba1006d181. Feb 13 15:23:40.372548 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:23:40.379636 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:23:40.399110 containerd[1497]: time="2025-02-13T15:23:40.399037664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqvq4,Uid:778b7fde-e23d-4aa4-887b-31b3bab30511,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b1b688faf58bde42009820c31eb8fc737e9a704cf8f1884eb0b41119270b56f\"" Feb 13 15:23:40.400782 kubelet[2575]: E0213 15:23:40.400746 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:40.404411 containerd[1497]: time="2025-02-13T15:23:40.403124658Z" level=info msg="CreateContainer within sandbox \"5b1b688faf58bde42009820c31eb8fc737e9a704cf8f1884eb0b41119270b56f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:23:40.410398 containerd[1497]: time="2025-02-13T15:23:40.410358326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jhw79,Uid:84d3bd57-bfa6-4840-9d4b-2f031742cd0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea2b714dd9fdb663dc8fe2b835d4f08560a82adc061d15a9e657eaba1006d181\"" Feb 13 15:23:40.411072 kubelet[2575]: E0213 15:23:40.411044 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:40.413867 containerd[1497]: time="2025-02-13T15:23:40.413831735Z" level=info msg="CreateContainer within sandbox \"ea2b714dd9fdb663dc8fe2b835d4f08560a82adc061d15a9e657eaba1006d181\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:23:40.445111 containerd[1497]: time="2025-02-13T15:23:40.445013749Z" level=info msg="CreateContainer within sandbox \"5b1b688faf58bde42009820c31eb8fc737e9a704cf8f1884eb0b41119270b56f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3056f586ecf9176319246091b49c481b1b7b00248d021fecb4c0c01b7dc29c07\"" Feb 13 15:23:40.452011 containerd[1497]: time="2025-02-13T15:23:40.451828277Z" level=info msg="StartContainer for 
\"3056f586ecf9176319246091b49c481b1b7b00248d021fecb4c0c01b7dc29c07\"" Feb 13 15:23:40.462479 containerd[1497]: time="2025-02-13T15:23:40.461439441Z" level=info msg="CreateContainer within sandbox \"ea2b714dd9fdb663dc8fe2b835d4f08560a82adc061d15a9e657eaba1006d181\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca7744283b0e577c4505a2e32ada4a8440c1b6bd18b0999004c6b2e4a6377813\"" Feb 13 15:23:40.463253 containerd[1497]: time="2025-02-13T15:23:40.463223430Z" level=info msg="StartContainer for \"ca7744283b0e577c4505a2e32ada4a8440c1b6bd18b0999004c6b2e4a6377813\"" Feb 13 15:23:40.490083 systemd[1]: Started cri-containerd-3056f586ecf9176319246091b49c481b1b7b00248d021fecb4c0c01b7dc29c07.scope - libcontainer container 3056f586ecf9176319246091b49c481b1b7b00248d021fecb4c0c01b7dc29c07. Feb 13 15:23:40.501579 systemd[1]: Started cri-containerd-ca7744283b0e577c4505a2e32ada4a8440c1b6bd18b0999004c6b2e4a6377813.scope - libcontainer container ca7744283b0e577c4505a2e32ada4a8440c1b6bd18b0999004c6b2e4a6377813. Feb 13 15:23:40.627247 containerd[1497]: time="2025-02-13T15:23:40.627109251Z" level=info msg="StartContainer for \"ca7744283b0e577c4505a2e32ada4a8440c1b6bd18b0999004c6b2e4a6377813\" returns successfully" Feb 13 15:23:40.627247 containerd[1497]: time="2025-02-13T15:23:40.627109261Z" level=info msg="StartContainer for \"3056f586ecf9176319246091b49c481b1b7b00248d021fecb4c0c01b7dc29c07\" returns successfully" Feb 13 15:23:40.857084 kubelet[2575]: E0213 15:23:40.857048 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:40.860033 kubelet[2575]: E0213 15:23:40.859264 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:40.904872 kubelet[2575]: I0213 15:23:40.904701 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qqvq4" podStartSLOduration=23.904668545 podStartE2EDuration="23.904668545s" podCreationTimestamp="2025-02-13 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:40.875802282 +0000 UTC m=+29.222942973" watchObservedRunningTime="2025-02-13 15:23:40.904668545 +0000 UTC m=+29.251809246" Feb 13 15:23:41.179045 systemd[1]: Started sshd@7-10.0.0.18:22-10.0.0.1:48154.service - OpenSSH per-connection server daemon (10.0.0.1:48154). Feb 13 15:23:41.230546 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 48154 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:23:41.232704 sshd-session[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:41.237692 systemd-logind[1481]: New session 8 of user core. Feb 13 15:23:41.247480 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:23:41.419767 sshd[3971]: Connection closed by 10.0.0.1 port 48154 Feb 13 15:23:41.420108 sshd-session[3969]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:41.424784 systemd[1]: sshd@7-10.0.0.18:22-10.0.0.1:48154.service: Deactivated successfully. Feb 13 15:23:41.426909 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:23:41.427911 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. 
Feb 13 15:23:41.428921 systemd-logind[1481]: Removed session 8. Feb 13 15:23:41.861101 kubelet[2575]: E0213 15:23:41.860835 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:41.861641 kubelet[2575]: E0213 15:23:41.861266 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:41.873839 kubelet[2575]: I0213 15:23:41.873764 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jhw79" podStartSLOduration=24.873740287 podStartE2EDuration="24.873740287s" podCreationTimestamp="2025-02-13 15:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:40.905651436 +0000 UTC m=+29.252792148" watchObservedRunningTime="2025-02-13 15:23:41.873740287 +0000 UTC m=+30.220880988" Feb 13 15:23:42.865596 kubelet[2575]: E0213 15:23:42.865540 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:42.866085 kubelet[2575]: E0213 15:23:42.865781 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:23:46.431067 systemd[1]: Started sshd@8-10.0.0.18:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378). Feb 13 15:23:46.474082 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:23:46.475631 sshd-session[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:46.480360 systemd-logind[1481]: New session 9 of user core. Feb 13 15:23:46.495653 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:23:46.613135 sshd[3994]: Connection closed by 10.0.0.1 port 46378 Feb 13 15:23:46.613557 sshd-session[3992]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:46.617601 systemd[1]: sshd@8-10.0.0.18:22-10.0.0.1:46378.service: Deactivated successfully. Feb 13 15:23:46.619884 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:23:46.620682 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:23:46.621602 systemd-logind[1481]: Removed session 9. Feb 13 15:23:51.624399 systemd[1]: Started sshd@9-10.0.0.18:22-10.0.0.1:46392.service - OpenSSH per-connection server daemon (10.0.0.1:46392). Feb 13 15:23:51.668804 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 46392 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:23:51.670818 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:51.675259 systemd-logind[1481]: New session 10 of user core. Feb 13 15:23:51.686567 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:23:51.798309 sshd[4012]: Connection closed by 10.0.0.1 port 46392 Feb 13 15:23:51.798707 sshd-session[4010]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:51.803271 systemd[1]: sshd@9-10.0.0.18:22-10.0.0.1:46392.service: Deactivated successfully. 
Feb 13 15:23:51.805260 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:23:51.805998 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:23:51.806980 systemd-logind[1481]: Removed session 10. Feb 13 15:23:56.814463 systemd[1]: Started sshd@10-10.0.0.18:22-10.0.0.1:42882.service - OpenSSH per-connection server daemon (10.0.0.1:42882). Feb 13 15:23:56.859457 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 42882 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:23:56.862984 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:56.873671 systemd-logind[1481]: New session 11 of user core. Feb 13 15:23:56.884462 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:23:56.997792 sshd[4027]: Connection closed by 10.0.0.1 port 42882 Feb 13 15:23:56.998222 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:57.009261 systemd[1]: sshd@10-10.0.0.18:22-10.0.0.1:42882.service: Deactivated successfully. Feb 13 15:23:57.011181 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:23:57.012890 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:23:57.022584 systemd[1]: Started sshd@11-10.0.0.18:22-10.0.0.1:42890.service - OpenSSH per-connection server daemon (10.0.0.1:42890). Feb 13 15:23:57.023506 systemd-logind[1481]: Removed session 11. Feb 13 15:23:57.063966 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 42890 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:23:57.065850 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:57.070866 systemd-logind[1481]: New session 12 of user core. Feb 13 15:23:57.080571 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:23:57.248951 sshd[4043]: Connection closed by 10.0.0.1 port 42890 Feb 13 15:23:57.252447 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:57.261601 systemd[1]: sshd@11-10.0.0.18:22-10.0.0.1:42890.service: Deactivated successfully. Feb 13 15:23:57.264626 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:23:57.267780 systemd-logind[1481]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:23:57.284889 systemd[1]: Started sshd@12-10.0.0.18:22-10.0.0.1:42902.service - OpenSSH per-connection server daemon (10.0.0.1:42902). Feb 13 15:23:57.286005 systemd-logind[1481]: Removed session 12. Feb 13 15:23:57.327361 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 42902 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:23:57.328798 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:57.332737 systemd-logind[1481]: New session 13 of user core. Feb 13 15:23:57.341462 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:23:57.453939 sshd[4056]: Connection closed by 10.0.0.1 port 42902 Feb 13 15:23:57.454421 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:57.459216 systemd[1]: sshd@12-10.0.0.18:22-10.0.0.1:42902.service: Deactivated successfully. Feb 13 15:23:57.461427 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:23:57.462200 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:23:57.463322 systemd-logind[1481]: Removed session 13. 
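
Each SSH connection above follows the same lifecycle: systemd starts a per-connection sshd service, logind logs "New session N of user core", and teardown ends with "Removed session N". The following is a small illustrative parser, assuming only the journal line format shown here, that pairs the open and close events per session and reports how long each lasted; it is not part of Flatcar or systemd.

# Illustrative parser for the sshd/logind lifecycle seen above: pair
# "New session N" with "Removed session N" and compute session duration.
import re
from datetime import datetime

LINE = re.compile(r"(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) .*?(New session|Removed session) (\d+)")

def session_durations(journal_text: str, year: int = 2025) -> dict[str, float]:
    opened, durations = {}, {}
    for ts, event, sid in LINE.findall(journal_text):
        when = datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")
        if event == "New session":
            opened[sid] = when
        elif sid in opened:
            durations[sid] = (when - opened.pop(sid)).total_seconds()
    return durations

sample = (
    "Feb 13 15:23:51.675259 systemd-logind[1481]: New session 10 of user core.\n"
    "Feb 13 15:23:51.806980 systemd-logind[1481]: Removed session 10.\n"
)
print(session_durations(sample))  # {'10': 0.131721}
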
Feb 13 15:24:02.467062 systemd[1]: Started sshd@13-10.0.0.18:22-10.0.0.1:42916.service - OpenSSH per-connection server daemon (10.0.0.1:42916). Feb 13 15:24:02.515419 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 42916 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:02.517011 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:02.521024 systemd-logind[1481]: New session 14 of user core. Feb 13 15:24:02.528471 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:24:02.640470 sshd[4070]: Connection closed by 10.0.0.1 port 42916 Feb 13 15:24:02.640866 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:02.645715 systemd[1]: sshd@13-10.0.0.18:22-10.0.0.1:42916.service: Deactivated successfully. Feb 13 15:24:02.648376 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:24:02.649170 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:24:02.650209 systemd-logind[1481]: Removed session 14. Feb 13 15:24:07.652801 systemd[1]: Started sshd@14-10.0.0.18:22-10.0.0.1:43904.service - OpenSSH per-connection server daemon (10.0.0.1:43904). Feb 13 15:24:07.696981 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 43904 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:07.698581 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:07.702931 systemd-logind[1481]: New session 15 of user core. Feb 13 15:24:07.713482 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:24:07.819558 sshd[4084]: Connection closed by 10.0.0.1 port 43904 Feb 13 15:24:07.819935 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:07.823944 systemd[1]: sshd@14-10.0.0.18:22-10.0.0.1:43904.service: Deactivated successfully. Feb 13 15:24:07.826004 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:24:07.826653 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:24:07.827468 systemd-logind[1481]: Removed session 15. Feb 13 15:24:12.832286 systemd[1]: Started sshd@15-10.0.0.18:22-10.0.0.1:43916.service - OpenSSH per-connection server daemon (10.0.0.1:43916). Feb 13 15:24:12.875566 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 43916 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:12.877412 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:12.881799 systemd-logind[1481]: New session 16 of user core. Feb 13 15:24:12.898486 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:24:13.006361 sshd[4101]: Connection closed by 10.0.0.1 port 43916 Feb 13 15:24:13.006768 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:13.016157 systemd[1]: sshd@15-10.0.0.18:22-10.0.0.1:43916.service: Deactivated successfully. Feb 13 15:24:13.017988 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:24:13.019390 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:24:13.028726 systemd[1]: Started sshd@16-10.0.0.18:22-10.0.0.1:43926.service - OpenSSH per-connection server daemon (10.0.0.1:43926). Feb 13 15:24:13.029778 systemd-logind[1481]: Removed session 16. 
Feb 13 15:24:13.070004 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 43926 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:13.071464 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:13.075168 systemd-logind[1481]: New session 17 of user core. Feb 13 15:24:13.084452 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:24:13.314100 sshd[4116]: Connection closed by 10.0.0.1 port 43926 Feb 13 15:24:13.314687 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:13.326371 systemd[1]: sshd@16-10.0.0.18:22-10.0.0.1:43926.service: Deactivated successfully. Feb 13 15:24:13.328198 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:24:13.329834 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:24:13.331499 systemd[1]: Started sshd@17-10.0.0.18:22-10.0.0.1:43940.service - OpenSSH per-connection server daemon (10.0.0.1:43940). Feb 13 15:24:13.332430 systemd-logind[1481]: Removed session 17. Feb 13 15:24:13.391217 sshd[4126]: Accepted publickey for core from 10.0.0.1 port 43940 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:13.392913 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:13.397249 systemd-logind[1481]: New session 18 of user core. Feb 13 15:24:13.411511 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:24:14.737787 sshd[4128]: Connection closed by 10.0.0.1 port 43940 Feb 13 15:24:14.739610 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:14.749614 systemd[1]: sshd@17-10.0.0.18:22-10.0.0.1:43940.service: Deactivated successfully. Feb 13 15:24:14.752224 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:24:14.753721 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:24:14.759250 systemd[1]: Started sshd@18-10.0.0.18:22-10.0.0.1:46956.service - OpenSSH per-connection server daemon (10.0.0.1:46956). Feb 13 15:24:14.761734 systemd-logind[1481]: Removed session 18. Feb 13 15:24:14.804473 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 46956 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:14.806020 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:14.809700 systemd-logind[1481]: New session 19 of user core. Feb 13 15:24:14.820546 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:24:15.064476 sshd[4148]: Connection closed by 10.0.0.1 port 46956 Feb 13 15:24:15.064821 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:15.074310 systemd[1]: sshd@18-10.0.0.18:22-10.0.0.1:46956.service: Deactivated successfully. Feb 13 15:24:15.076046 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:24:15.078509 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:24:15.083695 systemd[1]: Started sshd@19-10.0.0.18:22-10.0.0.1:46972.service - OpenSSH per-connection server daemon (10.0.0.1:46972). Feb 13 15:24:15.085897 systemd-logind[1481]: Removed session 19. 
Feb 13 15:24:15.123627 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 46972 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:15.125062 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:15.129688 systemd-logind[1481]: New session 20 of user core. Feb 13 15:24:15.136461 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:24:15.245777 sshd[4161]: Connection closed by 10.0.0.1 port 46972 Feb 13 15:24:15.246182 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:15.250282 systemd[1]: sshd@19-10.0.0.18:22-10.0.0.1:46972.service: Deactivated successfully. Feb 13 15:24:15.252267 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:24:15.252929 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:24:15.254012 systemd-logind[1481]: Removed session 20. Feb 13 15:24:20.262626 systemd[1]: Started sshd@20-10.0.0.18:22-10.0.0.1:46982.service - OpenSSH per-connection server daemon (10.0.0.1:46982). Feb 13 15:24:20.305509 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 46982 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:20.307163 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:20.311017 systemd-logind[1481]: New session 21 of user core. Feb 13 15:24:20.322462 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:24:20.429074 sshd[4180]: Connection closed by 10.0.0.1 port 46982 Feb 13 15:24:20.429415 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:20.432939 systemd[1]: sshd@20-10.0.0.18:22-10.0.0.1:46982.service: Deactivated successfully. Feb 13 15:24:20.435013 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:24:20.435646 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:24:20.436494 systemd-logind[1481]: Removed session 21. Feb 13 15:24:23.740217 kubelet[2575]: E0213 15:24:23.740159 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:25.441428 systemd[1]: Started sshd@21-10.0.0.18:22-10.0.0.1:50946.service - OpenSSH per-connection server daemon (10.0.0.1:50946). Feb 13 15:24:25.486040 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 50946 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:25.487436 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:25.491409 systemd-logind[1481]: New session 22 of user core. Feb 13 15:24:25.501449 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:24:25.605434 sshd[4194]: Connection closed by 10.0.0.1 port 50946 Feb 13 15:24:25.606555 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:25.610198 systemd[1]: sshd@21-10.0.0.18:22-10.0.0.1:50946.service: Deactivated successfully. Feb 13 15:24:25.612202 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:24:25.612908 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:24:25.613807 systemd-logind[1481]: Removed session 22. Feb 13 15:24:30.619570 systemd[1]: Started sshd@22-10.0.0.18:22-10.0.0.1:50960.service - OpenSSH per-connection server daemon (10.0.0.1:50960). 
Feb 13 15:24:30.668180 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 50960 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:30.670070 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:30.675285 systemd-logind[1481]: New session 23 of user core. Feb 13 15:24:30.688478 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:24:30.792263 sshd[4208]: Connection closed by 10.0.0.1 port 50960 Feb 13 15:24:30.792651 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:30.796108 systemd[1]: sshd@22-10.0.0.18:22-10.0.0.1:50960.service: Deactivated successfully. Feb 13 15:24:30.797980 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:24:30.798573 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:24:30.799472 systemd-logind[1481]: Removed session 23. Feb 13 15:24:35.806625 systemd[1]: Started sshd@23-10.0.0.18:22-10.0.0.1:51460.service - OpenSSH per-connection server daemon (10.0.0.1:51460). Feb 13 15:24:35.855571 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 51460 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:35.857341 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:35.861831 systemd-logind[1481]: New session 24 of user core. Feb 13 15:24:35.871552 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:24:35.990195 sshd[4222]: Connection closed by 10.0.0.1 port 51460 Feb 13 15:24:35.990633 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:36.002761 systemd[1]: sshd@23-10.0.0.18:22-10.0.0.1:51460.service: Deactivated successfully. Feb 13 15:24:36.005041 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:24:36.006909 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:24:36.012648 systemd[1]: Started sshd@24-10.0.0.18:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470). Feb 13 15:24:36.013739 systemd-logind[1481]: Removed session 24. Feb 13 15:24:36.054629 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:36.056492 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:36.060902 systemd-logind[1481]: New session 25 of user core. Feb 13 15:24:36.069590 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:24:37.487607 containerd[1497]: time="2025-02-13T15:24:37.487458346Z" level=info msg="StopContainer for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" with timeout 30 (s)" Feb 13 15:24:37.503487 containerd[1497]: time="2025-02-13T15:24:37.503447744Z" level=info msg="Stop container \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" with signal terminated" Feb 13 15:24:37.514252 containerd[1497]: time="2025-02-13T15:24:37.514177701Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:24:37.517547 systemd[1]: cri-containerd-2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73.scope: Deactivated successfully. 
Feb 13 15:24:37.524666 containerd[1497]: time="2025-02-13T15:24:37.524619779Z" level=info msg="StopContainer for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" with timeout 2 (s)" Feb 13 15:24:37.525192 containerd[1497]: time="2025-02-13T15:24:37.525150692Z" level=info msg="Stop container \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" with signal terminated" Feb 13 15:24:37.535609 systemd-networkd[1409]: lxc_health: Link DOWN Feb 13 15:24:37.535620 systemd-networkd[1409]: lxc_health: Lost carrier Feb 13 15:24:37.543582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73-rootfs.mount: Deactivated successfully. Feb 13 15:24:37.551318 containerd[1497]: time="2025-02-13T15:24:37.551236428Z" level=info msg="shim disconnected" id=2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73 namespace=k8s.io Feb 13 15:24:37.551318 containerd[1497]: time="2025-02-13T15:24:37.551310619Z" level=warning msg="cleaning up after shim disconnected" id=2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73 namespace=k8s.io Feb 13 15:24:37.551318 containerd[1497]: time="2025-02-13T15:24:37.551319205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:37.563055 systemd[1]: cri-containerd-1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1.scope: Deactivated successfully. Feb 13 15:24:37.563445 systemd[1]: cri-containerd-1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1.scope: Consumed 6.987s CPU time. Feb 13 15:24:37.575912 containerd[1497]: time="2025-02-13T15:24:37.575861779Z" level=info msg="StopContainer for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" returns successfully" Feb 13 15:24:37.580128 containerd[1497]: time="2025-02-13T15:24:37.580064585Z" level=info msg="StopPodSandbox for \"b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9\"" Feb 13 15:24:37.585876 containerd[1497]: time="2025-02-13T15:24:37.580139989Z" level=info msg="Container to stop \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:37.590958 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9-shm.mount: Deactivated successfully. Feb 13 15:24:37.593277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1-rootfs.mount: Deactivated successfully. Feb 13 15:24:37.596796 containerd[1497]: time="2025-02-13T15:24:37.596712530Z" level=info msg="shim disconnected" id=1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1 namespace=k8s.io Feb 13 15:24:37.596796 containerd[1497]: time="2025-02-13T15:24:37.596785900Z" level=warning msg="cleaning up after shim disconnected" id=1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1 namespace=k8s.io Feb 13 15:24:37.596796 containerd[1497]: time="2025-02-13T15:24:37.596799957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:37.598229 systemd[1]: cri-containerd-b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9.scope: Deactivated successfully. 
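
The "StopContainer ... with timeout 30 (s)" and "Stop container ... with signal terminated" lines above describe the usual graceful-stop pattern: the workload first gets SIGTERM, and SIGKILL is sent only if the grace period runs out. The sketch below shows that generic pattern on an ordinary subprocess; it is not containerd's implementation, and the sleep process merely stands in for a container.

# Sketch of the stop-with-timeout pattern the StopContainer lines refer to:
# SIGTERM first, wait up to `timeout` seconds, then fall back to SIGKILL.
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float = 30.0) -> int:
    proc.send_signal(signal.SIGTERM)          # "with signal terminated"
    try:
        return proc.wait(timeout=timeout)     # exited within the grace period
    except subprocess.TimeoutExpired:
        proc.kill()                           # grace period elapsed, force kill
        return proc.wait()

if __name__ == "__main__":
    p = subprocess.Popen(["sleep", "300"])    # stand-in for a container process
    print("exit status:", stop_with_timeout(p, timeout=5))
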
Feb 13 15:24:37.619946 containerd[1497]: time="2025-02-13T15:24:37.619889177Z" level=info msg="StopContainer for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" returns successfully" Feb 13 15:24:37.620709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9-rootfs.mount: Deactivated successfully. Feb 13 15:24:37.621010 containerd[1497]: time="2025-02-13T15:24:37.620759177Z" level=info msg="StopPodSandbox for \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\"" Feb 13 15:24:37.621010 containerd[1497]: time="2025-02-13T15:24:37.620802620Z" level=info msg="Container to stop \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:37.621010 containerd[1497]: time="2025-02-13T15:24:37.620844911Z" level=info msg="Container to stop \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:37.621010 containerd[1497]: time="2025-02-13T15:24:37.620855310Z" level=info msg="Container to stop \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:37.621010 containerd[1497]: time="2025-02-13T15:24:37.620866602Z" level=info msg="Container to stop \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:37.621010 containerd[1497]: time="2025-02-13T15:24:37.620877673Z" level=info msg="Container to stop \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:24:37.626548 containerd[1497]: time="2025-02-13T15:24:37.626253067Z" level=info msg="shim disconnected" id=b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9 namespace=k8s.io Feb 13 15:24:37.626548 containerd[1497]: time="2025-02-13T15:24:37.626318852Z" level=warning msg="cleaning up after shim disconnected" id=b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9 namespace=k8s.io Feb 13 15:24:37.626548 containerd[1497]: time="2025-02-13T15:24:37.626346174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:37.629377 systemd[1]: cri-containerd-66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810.scope: Deactivated successfully. 
Feb 13 15:24:37.646559 containerd[1497]: time="2025-02-13T15:24:37.646512219Z" level=info msg="TearDown network for sandbox \"b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9\" successfully" Feb 13 15:24:37.646559 containerd[1497]: time="2025-02-13T15:24:37.646548869Z" level=info msg="StopPodSandbox for \"b3358f968b8fb7519b9af635fca70ff4ae44ffbddc38c1455712b3eec9142bf9\" returns successfully" Feb 13 15:24:37.672411 containerd[1497]: time="2025-02-13T15:24:37.672146675Z" level=info msg="shim disconnected" id=66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810 namespace=k8s.io Feb 13 15:24:37.672411 containerd[1497]: time="2025-02-13T15:24:37.672222630Z" level=warning msg="cleaning up after shim disconnected" id=66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810 namespace=k8s.io Feb 13 15:24:37.672411 containerd[1497]: time="2025-02-13T15:24:37.672231086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:37.694499 containerd[1497]: time="2025-02-13T15:24:37.694367159Z" level=info msg="TearDown network for sandbox \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" successfully" Feb 13 15:24:37.694499 containerd[1497]: time="2025-02-13T15:24:37.694403468Z" level=info msg="StopPodSandbox for \"66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810\" returns successfully" Feb 13 15:24:37.740075 kubelet[2575]: E0213 15:24:37.739942 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:37.851338 kubelet[2575]: I0213 15:24:37.851258 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-etc-cni-netd\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851338 kubelet[2575]: I0213 15:24:37.851304 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.851530 kubelet[2575]: I0213 15:24:37.851356 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cni-path\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851530 kubelet[2575]: I0213 15:24:37.851376 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-xtables-lock\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851530 kubelet[2575]: I0213 15:24:37.851387 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cni-path" (OuterVolumeSpecName: "cni-path") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.851530 kubelet[2575]: I0213 15:24:37.851391 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-cgroup\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851530 kubelet[2575]: I0213 15:24:37.851435 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.851530 kubelet[2575]: I0213 15:24:37.851443 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-cilium-config-path\") pod \"fbbadbe0-72ff-4ced-a244-75c6b4bb700e\" (UID: \"fbbadbe0-72ff-4ced-a244-75c6b4bb700e\") " Feb 13 15:24:37.851734 kubelet[2575]: I0213 15:24:37.851473 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-run\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851734 kubelet[2575]: I0213 15:24:37.851494 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-kernel\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851734 kubelet[2575]: I0213 15:24:37.851512 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-hubble-tls\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851734 kubelet[2575]: I0213 15:24:37.851527 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-hostproc\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851734 kubelet[2575]: I0213 15:24:37.851519 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.851734 kubelet[2575]: I0213 15:24:37.851536 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.851870 kubelet[2575]: I0213 15:24:37.851544 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92e54049-c99a-400e-a038-b01188795403-clustermesh-secrets\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851870 kubelet[2575]: I0213 15:24:37.851642 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc8h4\" (UniqueName: \"kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-kube-api-access-hc8h4\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851870 kubelet[2575]: I0213 15:24:37.851667 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-lib-modules\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851870 kubelet[2575]: I0213 15:24:37.851693 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-net\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851870 kubelet[2575]: I0213 15:24:37.851724 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-bpf-maps\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.851870 kubelet[2575]: I0213 15:24:37.851749 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngqfg\" (UniqueName: \"kubernetes.io/projected/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-kube-api-access-ngqfg\") pod \"fbbadbe0-72ff-4ced-a244-75c6b4bb700e\" (UID: \"fbbadbe0-72ff-4ced-a244-75c6b4bb700e\") " Feb 13 15:24:37.852042 kubelet[2575]: I0213 15:24:37.851775 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92e54049-c99a-400e-a038-b01188795403-cilium-config-path\") pod \"92e54049-c99a-400e-a038-b01188795403\" (UID: \"92e54049-c99a-400e-a038-b01188795403\") " Feb 13 15:24:37.852042 kubelet[2575]: I0213 15:24:37.851832 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.852042 kubelet[2575]: I0213 15:24:37.851847 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.852042 kubelet[2575]: I0213 15:24:37.851859 2575 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.852042 kubelet[2575]: I0213 15:24:37.851870 2575 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.852042 kubelet[2575]: I0213 15:24:37.851886 2575 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.854394 kubelet[2575]: I0213 15:24:37.854366 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.854586 kubelet[2575]: I0213 15:24:37.854562 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.855047 kubelet[2575]: I0213 15:24:37.854859 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.855047 kubelet[2575]: I0213 15:24:37.854992 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-hostproc" (OuterVolumeSpecName: "hostproc") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.856139 kubelet[2575]: I0213 15:24:37.856102 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbbadbe0-72ff-4ced-a244-75c6b4bb700e" (UID: "fbbadbe0-72ff-4ced-a244-75c6b4bb700e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:24:37.856430 kubelet[2575]: I0213 15:24:37.856412 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e54049-c99a-400e-a038-b01188795403-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:24:37.856525 kubelet[2575]: I0213 15:24:37.856510 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:24:37.857539 kubelet[2575]: I0213 15:24:37.857516 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:24:37.857677 kubelet[2575]: I0213 15:24:37.857589 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-kube-api-access-ngqfg" (OuterVolumeSpecName: "kube-api-access-ngqfg") pod "fbbadbe0-72ff-4ced-a244-75c6b4bb700e" (UID: "fbbadbe0-72ff-4ced-a244-75c6b4bb700e"). InnerVolumeSpecName "kube-api-access-ngqfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:24:37.859116 kubelet[2575]: I0213 15:24:37.859077 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-kube-api-access-hc8h4" (OuterVolumeSpecName: "kube-api-access-hc8h4") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "kube-api-access-hc8h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:24:37.860066 kubelet[2575]: I0213 15:24:37.860030 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e54049-c99a-400e-a038-b01188795403-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92e54049-c99a-400e-a038-b01188795403" (UID: "92e54049-c99a-400e-a038-b01188795403"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:24:37.952404 kubelet[2575]: I0213 15:24:37.952353 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952404 kubelet[2575]: I0213 15:24:37.952388 2575 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952404 kubelet[2575]: I0213 15:24:37.952398 2575 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952404 kubelet[2575]: I0213 15:24:37.952406 2575 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952404 kubelet[2575]: I0213 15:24:37.952418 2575 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92e54049-c99a-400e-a038-b01188795403-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952697 kubelet[2575]: I0213 15:24:37.952426 2575 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hc8h4\" (UniqueName: \"kubernetes.io/projected/92e54049-c99a-400e-a038-b01188795403-kube-api-access-hc8h4\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952697 kubelet[2575]: I0213 15:24:37.952434 2575 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952697 kubelet[2575]: I0213 15:24:37.952441 2575 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952697 kubelet[2575]: I0213 15:24:37.952449 2575 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92e54049-c99a-400e-a038-b01188795403-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952697 kubelet[2575]: I0213 15:24:37.952460 2575 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ngqfg\" (UniqueName: \"kubernetes.io/projected/fbbadbe0-72ff-4ced-a244-75c6b4bb700e-kube-api-access-ngqfg\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.952697 kubelet[2575]: I0213 15:24:37.952467 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92e54049-c99a-400e-a038-b01188795403-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:24:37.974574 kubelet[2575]: I0213 15:24:37.974541 2575 scope.go:117] "RemoveContainer" containerID="2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73" Feb 13 15:24:37.975903 containerd[1497]: time="2025-02-13T15:24:37.975863812Z" level=info msg="RemoveContainer for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\"" Feb 13 15:24:37.983698 systemd[1]: Removed slice kubepods-besteffort-podfbbadbe0_72ff_4ced_a244_75c6b4bb700e.slice - 
libcontainer container kubepods-besteffort-podfbbadbe0_72ff_4ced_a244_75c6b4bb700e.slice. Feb 13 15:24:37.986460 containerd[1497]: time="2025-02-13T15:24:37.986409317Z" level=info msg="RemoveContainer for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" returns successfully" Feb 13 15:24:37.986678 kubelet[2575]: I0213 15:24:37.986652 2575 scope.go:117] "RemoveContainer" containerID="2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73" Feb 13 15:24:37.986959 containerd[1497]: time="2025-02-13T15:24:37.986877580Z" level=error msg="ContainerStatus for \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\": not found" Feb 13 15:24:37.987905 systemd[1]: Removed slice kubepods-burstable-pod92e54049_c99a_400e_a038_b01188795403.slice - libcontainer container kubepods-burstable-pod92e54049_c99a_400e_a038_b01188795403.slice. Feb 13 15:24:37.988035 systemd[1]: kubepods-burstable-pod92e54049_c99a_400e_a038_b01188795403.slice: Consumed 7.095s CPU time. Feb 13 15:24:37.994847 kubelet[2575]: E0213 15:24:37.994752 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\": not found" containerID="2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73" Feb 13 15:24:37.994918 kubelet[2575]: I0213 15:24:37.994796 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73"} err="failed to get container status \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\": rpc error: code = NotFound desc = an error occurred when try to find container \"2529d270e47eb094c3ba8cba137d9d7523e33cb8d916a5cb44f01c58753cde73\": not found" Feb 13 15:24:37.994918 kubelet[2575]: I0213 15:24:37.994877 2575 scope.go:117] "RemoveContainer" containerID="1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1" Feb 13 15:24:37.996187 containerd[1497]: time="2025-02-13T15:24:37.996138284Z" level=info msg="RemoveContainer for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\"" Feb 13 15:24:38.008595 containerd[1497]: time="2025-02-13T15:24:38.008548860Z" level=info msg="RemoveContainer for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" returns successfully" Feb 13 15:24:38.008864 kubelet[2575]: I0213 15:24:38.008829 2575 scope.go:117] "RemoveContainer" containerID="eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c" Feb 13 15:24:38.010611 containerd[1497]: time="2025-02-13T15:24:38.010581125Z" level=info msg="RemoveContainer for \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\"" Feb 13 15:24:38.014180 containerd[1497]: time="2025-02-13T15:24:38.014149659Z" level=info msg="RemoveContainer for \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\" returns successfully" Feb 13 15:24:38.014302 kubelet[2575]: I0213 15:24:38.014280 2575 scope.go:117] "RemoveContainer" containerID="1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5" Feb 13 15:24:38.015283 containerd[1497]: time="2025-02-13T15:24:38.015251229Z" level=info msg="RemoveContainer for \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\"" Feb 13 15:24:38.018575 
containerd[1497]: time="2025-02-13T15:24:38.018540981Z" level=info msg="RemoveContainer for \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\" returns successfully" Feb 13 15:24:38.018802 kubelet[2575]: I0213 15:24:38.018708 2575 scope.go:117] "RemoveContainer" containerID="b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c" Feb 13 15:24:38.019618 containerd[1497]: time="2025-02-13T15:24:38.019596814Z" level=info msg="RemoveContainer for \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\"" Feb 13 15:24:38.022856 containerd[1497]: time="2025-02-13T15:24:38.022827594Z" level=info msg="RemoveContainer for \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\" returns successfully" Feb 13 15:24:38.022980 kubelet[2575]: I0213 15:24:38.022961 2575 scope.go:117] "RemoveContainer" containerID="05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7" Feb 13 15:24:38.024316 containerd[1497]: time="2025-02-13T15:24:38.023995190Z" level=info msg="RemoveContainer for \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\"" Feb 13 15:24:38.027304 containerd[1497]: time="2025-02-13T15:24:38.027274513Z" level=info msg="RemoveContainer for \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\" returns successfully" Feb 13 15:24:38.027501 kubelet[2575]: I0213 15:24:38.027421 2575 scope.go:117] "RemoveContainer" containerID="1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1" Feb 13 15:24:38.027653 containerd[1497]: time="2025-02-13T15:24:38.027622957Z" level=error msg="ContainerStatus for \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\": not found" Feb 13 15:24:38.027765 kubelet[2575]: E0213 15:24:38.027744 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\": not found" containerID="1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1" Feb 13 15:24:38.027819 kubelet[2575]: I0213 15:24:38.027774 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1"} err="failed to get container status \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d6999e16c6634ef0f6f82eece6478f40b84db8cd2c020e0138fe310b19565e1\": not found" Feb 13 15:24:38.027819 kubelet[2575]: I0213 15:24:38.027796 2575 scope.go:117] "RemoveContainer" containerID="eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c" Feb 13 15:24:38.028015 containerd[1497]: time="2025-02-13T15:24:38.027979797Z" level=error msg="ContainerStatus for \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\": not found" Feb 13 15:24:38.028134 kubelet[2575]: E0213 15:24:38.028113 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\": not found" 
containerID="eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c" Feb 13 15:24:38.028172 kubelet[2575]: I0213 15:24:38.028136 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c"} err="failed to get container status \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb4d9baa694cfe47c215cf1ce16a1acba81266a562c1c7726d6fefdf03ac241c\": not found" Feb 13 15:24:38.028172 kubelet[2575]: I0213 15:24:38.028152 2575 scope.go:117] "RemoveContainer" containerID="1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5" Feb 13 15:24:38.028310 containerd[1497]: time="2025-02-13T15:24:38.028281903Z" level=error msg="ContainerStatus for \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\": not found" Feb 13 15:24:38.028481 kubelet[2575]: E0213 15:24:38.028446 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\": not found" containerID="1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5" Feb 13 15:24:38.028524 kubelet[2575]: I0213 15:24:38.028484 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5"} err="failed to get container status \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dee18a177bb850edf7ea3f595403d96368ba361c52ca5b571b89adad1b51ea5\": not found" Feb 13 15:24:38.028524 kubelet[2575]: I0213 15:24:38.028501 2575 scope.go:117] "RemoveContainer" containerID="b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c" Feb 13 15:24:38.028658 containerd[1497]: time="2025-02-13T15:24:38.028630708Z" level=error msg="ContainerStatus for \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\": not found" Feb 13 15:24:38.028745 kubelet[2575]: E0213 15:24:38.028726 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\": not found" containerID="b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c" Feb 13 15:24:38.028786 kubelet[2575]: I0213 15:24:38.028750 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c"} err="failed to get container status \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b23cd02820e95043f32391d482f8817c68b1617b99fa3f6776b509c405b5f97c\": not found" Feb 13 15:24:38.028786 kubelet[2575]: I0213 15:24:38.028769 2575 scope.go:117] "RemoveContainer" 
containerID="05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7" Feb 13 15:24:38.028902 containerd[1497]: time="2025-02-13T15:24:38.028878861Z" level=error msg="ContainerStatus for \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\": not found" Feb 13 15:24:38.029002 kubelet[2575]: E0213 15:24:38.028969 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\": not found" containerID="05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7" Feb 13 15:24:38.029043 kubelet[2575]: I0213 15:24:38.029004 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7"} err="failed to get container status \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"05b6f6db4a319ef420d5641461fd4c211b002a24c18fa7bc26184e091ff11eb7\": not found" Feb 13 15:24:38.497197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810-rootfs.mount: Deactivated successfully. Feb 13 15:24:38.497353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66cfaee1c25ea67a6c123b65580ec45478a21042304a953a88fedb422fca5810-shm.mount: Deactivated successfully. Feb 13 15:24:38.497431 systemd[1]: var-lib-kubelet-pods-92e54049\x2dc99a\x2d400e\x2da038\x2db01188795403-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhc8h4.mount: Deactivated successfully. Feb 13 15:24:38.497509 systemd[1]: var-lib-kubelet-pods-fbbadbe0\x2d72ff\x2d4ced\x2da244\x2d75c6b4bb700e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngqfg.mount: Deactivated successfully. Feb 13 15:24:38.497592 systemd[1]: var-lib-kubelet-pods-92e54049\x2dc99a\x2d400e\x2da038\x2db01188795403-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:24:38.497669 systemd[1]: var-lib-kubelet-pods-92e54049\x2dc99a\x2d400e\x2da038\x2db01188795403-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:24:38.739011 kubelet[2575]: E0213 15:24:38.738971 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:39.451213 sshd[4236]: Connection closed by 10.0.0.1 port 51470 Feb 13 15:24:39.451805 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:39.464775 systemd[1]: sshd@24-10.0.0.18:22-10.0.0.1:51470.service: Deactivated successfully. Feb 13 15:24:39.466869 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:24:39.469053 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:24:39.480634 systemd[1]: Started sshd@25-10.0.0.18:22-10.0.0.1:51478.service - OpenSSH per-connection server daemon (10.0.0.1:51478). Feb 13 15:24:39.481883 systemd-logind[1481]: Removed session 25. 
Feb 13 15:24:39.524004 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 51478 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:39.525636 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:39.530872 systemd-logind[1481]: New session 26 of user core. Feb 13 15:24:39.544463 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:24:39.742374 kubelet[2575]: I0213 15:24:39.742216 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e54049-c99a-400e-a038-b01188795403" path="/var/lib/kubelet/pods/92e54049-c99a-400e-a038-b01188795403/volumes" Feb 13 15:24:39.743100 kubelet[2575]: I0213 15:24:39.743076 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbbadbe0-72ff-4ced-a244-75c6b4bb700e" path="/var/lib/kubelet/pods/fbbadbe0-72ff-4ced-a244-75c6b4bb700e/volumes" Feb 13 15:24:39.911359 sshd[4399]: Connection closed by 10.0.0.1 port 51478 Feb 13 15:24:39.911891 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:39.920747 systemd[1]: sshd@25-10.0.0.18:22-10.0.0.1:51478.service: Deactivated successfully. Feb 13 15:24:39.924100 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:24:39.927684 systemd-logind[1481]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:24:39.931367 kubelet[2575]: E0213 15:24:39.931306 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92e54049-c99a-400e-a038-b01188795403" containerName="mount-cgroup" Feb 13 15:24:39.931367 kubelet[2575]: E0213 15:24:39.931351 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92e54049-c99a-400e-a038-b01188795403" containerName="apply-sysctl-overwrites" Feb 13 15:24:39.931367 kubelet[2575]: E0213 15:24:39.931359 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92e54049-c99a-400e-a038-b01188795403" containerName="cilium-agent" Feb 13 15:24:39.931367 kubelet[2575]: E0213 15:24:39.931365 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92e54049-c99a-400e-a038-b01188795403" containerName="clean-cilium-state" Feb 13 15:24:39.931367 kubelet[2575]: E0213 15:24:39.931372 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92e54049-c99a-400e-a038-b01188795403" containerName="mount-bpf-fs" Feb 13 15:24:39.931367 kubelet[2575]: E0213 15:24:39.931378 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbbadbe0-72ff-4ced-a244-75c6b4bb700e" containerName="cilium-operator" Feb 13 15:24:39.931611 kubelet[2575]: I0213 15:24:39.931404 2575 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e54049-c99a-400e-a038-b01188795403" containerName="cilium-agent" Feb 13 15:24:39.931611 kubelet[2575]: I0213 15:24:39.931410 2575 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbadbe0-72ff-4ced-a244-75c6b4bb700e" containerName="cilium-operator" Feb 13 15:24:39.933726 systemd[1]: Started sshd@26-10.0.0.18:22-10.0.0.1:51492.service - OpenSSH per-connection server daemon (10.0.0.1:51492). Feb 13 15:24:39.938646 systemd-logind[1481]: Removed session 26. Feb 13 15:24:39.949579 systemd[1]: Created slice kubepods-burstable-pod50690284_bf42_4550_a5dd_4b57c3a96ec9.slice - libcontainer container kubepods-burstable-pod50690284_bf42_4550_a5dd_4b57c3a96ec9.slice. 
Feb 13 15:24:39.962428 kubelet[2575]: I0213 15:24:39.962397 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-bpf-maps\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962546 kubelet[2575]: I0213 15:24:39.962430 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-xtables-lock\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962546 kubelet[2575]: I0213 15:24:39.962468 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-host-proc-sys-kernel\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962546 kubelet[2575]: I0213 15:24:39.962483 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-host-proc-sys-net\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962546 kubelet[2575]: I0213 15:24:39.962496 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50690284-bf42-4550-a5dd-4b57c3a96ec9-clustermesh-secrets\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962546 kubelet[2575]: I0213 15:24:39.962508 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50690284-bf42-4550-a5dd-4b57c3a96ec9-cilium-config-path\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962659 kubelet[2575]: I0213 15:24:39.962524 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-cilium-run\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962659 kubelet[2575]: I0213 15:24:39.962536 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-cilium-cgroup\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962659 kubelet[2575]: I0213 15:24:39.962548 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-hostproc\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962659 kubelet[2575]: I0213 15:24:39.962569 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-lib-modules\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962659 kubelet[2575]: I0213 15:24:39.962584 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-etc-cni-netd\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962659 kubelet[2575]: I0213 15:24:39.962596 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2tv\" (UniqueName: \"kubernetes.io/projected/50690284-bf42-4550-a5dd-4b57c3a96ec9-kube-api-access-sv2tv\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962778 kubelet[2575]: I0213 15:24:39.962610 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50690284-bf42-4550-a5dd-4b57c3a96ec9-cni-path\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962778 kubelet[2575]: I0213 15:24:39.962624 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50690284-bf42-4550-a5dd-4b57c3a96ec9-cilium-ipsec-secrets\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.962778 kubelet[2575]: I0213 15:24:39.962636 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50690284-bf42-4550-a5dd-4b57c3a96ec9-hubble-tls\") pod \"cilium-49znr\" (UID: \"50690284-bf42-4550-a5dd-4b57c3a96ec9\") " pod="kube-system/cilium-49znr" Feb 13 15:24:39.977593 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 51492 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:39.979074 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:39.982789 systemd-logind[1481]: New session 27 of user core. Feb 13 15:24:39.992454 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:24:40.043010 sshd[4412]: Connection closed by 10.0.0.1 port 51492 Feb 13 15:24:40.043337 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:40.055061 systemd[1]: sshd@26-10.0.0.18:22-10.0.0.1:51492.service: Deactivated successfully. Feb 13 15:24:40.056712 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:24:40.059104 systemd-logind[1481]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:24:40.066261 systemd[1]: Started sshd@27-10.0.0.18:22-10.0.0.1:51504.service - OpenSSH per-connection server daemon (10.0.0.1:51504). Feb 13 15:24:40.087874 systemd-logind[1481]: Removed session 27. Feb 13 15:24:40.113410 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 51504 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:40.115003 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:40.118942 systemd-logind[1481]: New session 28 of user core. 
Feb 13 15:24:40.130459 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:24:40.255497 kubelet[2575]: E0213 15:24:40.255316 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:40.256434 containerd[1497]: time="2025-02-13T15:24:40.256373041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49znr,Uid:50690284-bf42-4550-a5dd-4b57c3a96ec9,Namespace:kube-system,Attempt:0,}" Feb 13 15:24:40.279124 containerd[1497]: time="2025-02-13T15:24:40.279036327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:24:40.279124 containerd[1497]: time="2025-02-13T15:24:40.279094999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:24:40.279124 containerd[1497]: time="2025-02-13T15:24:40.279107422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:24:40.279426 containerd[1497]: time="2025-02-13T15:24:40.279186362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:24:40.301451 systemd[1]: Started cri-containerd-0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8.scope - libcontainer container 0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8. Feb 13 15:24:40.322535 containerd[1497]: time="2025-02-13T15:24:40.322454070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49znr,Uid:50690284-bf42-4550-a5dd-4b57c3a96ec9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\"" Feb 13 15:24:40.323754 kubelet[2575]: E0213 15:24:40.323731 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:40.326473 containerd[1497]: time="2025-02-13T15:24:40.326400740Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:24:40.338991 containerd[1497]: time="2025-02-13T15:24:40.338901675Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7\"" Feb 13 15:24:40.340089 containerd[1497]: time="2025-02-13T15:24:40.339489053Z" level=info msg="StartContainer for \"c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7\"" Feb 13 15:24:40.369450 systemd[1]: Started cri-containerd-c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7.scope - libcontainer container c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7. Feb 13 15:24:40.394475 containerd[1497]: time="2025-02-13T15:24:40.394429880Z" level=info msg="StartContainer for \"c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7\" returns successfully" Feb 13 15:24:40.404978 systemd[1]: cri-containerd-c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7.scope: Deactivated successfully. 
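The RunPodSandbox, CreateContainer, and StartContainer lines above are the kubelet walking through the usual CRI sequence for a new pod: one sandbox, then the first init container (mount-cgroup) created and started inside it. A rough sketch of that call order against the CRI runtime service; the metadata mirrors the log, the image reference is a placeholder, and this is not the kubelet's or containerd's internal code:

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runFirstInitContainer sketches the CRI calls behind the log lines above.
func runFirstInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxConfig := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-49znr",
			Namespace: "kube-system",
			Uid:       "50690284-bf42-4550-a5dd-4b57c3a96ec9",
		},
	}

	// 1. RunPodSandbox -> "returns sandbox id 0a6198f9..." in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxConfig})
	if err != nil {
		return err
	}

	// 2. CreateContainer for the mount-cgroup init container in that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // placeholder image ref
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		return err
	}

	// 3. StartContainer -> "StartContainer ... returns successfully".
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```

The short-lived cri-containerd scopes that follow each start, and the "shim disconnected" messages, are the normal lifecycle of these init containers: each one runs to completion and its runc shim exits.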
Feb 13 15:24:40.437518 containerd[1497]: time="2025-02-13T15:24:40.437450426Z" level=info msg="shim disconnected" id=c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7 namespace=k8s.io Feb 13 15:24:40.437518 containerd[1497]: time="2025-02-13T15:24:40.437510300Z" level=warning msg="cleaning up after shim disconnected" id=c2782e3b045a71024b05ae555756f97b0d4fca9e6d3ea2a8648508f9efe363d7 namespace=k8s.io Feb 13 15:24:40.437518 containerd[1497]: time="2025-02-13T15:24:40.437520279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:40.988208 kubelet[2575]: E0213 15:24:40.988164 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:40.989906 containerd[1497]: time="2025-02-13T15:24:40.989851047Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:24:41.011025 containerd[1497]: time="2025-02-13T15:24:41.010966956Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb\"" Feb 13 15:24:41.012497 containerd[1497]: time="2025-02-13T15:24:41.011536160Z" level=info msg="StartContainer for \"e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb\"" Feb 13 15:24:41.044537 systemd[1]: Started cri-containerd-e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb.scope - libcontainer container e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb. Feb 13 15:24:41.077169 containerd[1497]: time="2025-02-13T15:24:41.077099985Z" level=info msg="StartContainer for \"e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb\" returns successfully" Feb 13 15:24:41.083885 systemd[1]: cri-containerd-e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb.scope: Deactivated successfully. Feb 13 15:24:41.107514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb-rootfs.mount: Deactivated successfully. 
Feb 13 15:24:41.110870 containerd[1497]: time="2025-02-13T15:24:41.110795723Z" level=info msg="shim disconnected" id=e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb namespace=k8s.io Feb 13 15:24:41.110984 containerd[1497]: time="2025-02-13T15:24:41.110866939Z" level=warning msg="cleaning up after shim disconnected" id=e958fe78d02378a3284317063619f77df6e8b78a65dbbec41f3cc7f599839feb namespace=k8s.io Feb 13 15:24:41.110984 containerd[1497]: time="2025-02-13T15:24:41.110880625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:41.975586 kubelet[2575]: E0213 15:24:41.975535 2575 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:24:41.992316 kubelet[2575]: E0213 15:24:41.992277 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:41.994760 containerd[1497]: time="2025-02-13T15:24:41.994714345Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:24:42.013703 containerd[1497]: time="2025-02-13T15:24:42.013648974Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8\"" Feb 13 15:24:42.014354 containerd[1497]: time="2025-02-13T15:24:42.014292018Z" level=info msg="StartContainer for \"dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8\"" Feb 13 15:24:42.043602 systemd[1]: Started cri-containerd-dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8.scope - libcontainer container dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8. Feb 13 15:24:42.078695 containerd[1497]: time="2025-02-13T15:24:42.078630624Z" level=info msg="StartContainer for \"dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8\" returns successfully" Feb 13 15:24:42.082602 systemd[1]: cri-containerd-dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8.scope: Deactivated successfully. Feb 13 15:24:42.104315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8-rootfs.mount: Deactivated successfully. 
Feb 13 15:24:42.109215 containerd[1497]: time="2025-02-13T15:24:42.109149540Z" level=info msg="shim disconnected" id=dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8 namespace=k8s.io Feb 13 15:24:42.109215 containerd[1497]: time="2025-02-13T15:24:42.109209935Z" level=warning msg="cleaning up after shim disconnected" id=dd5826333c0e1254e40ba25831e6bb2c5d850a0456e555aec8569ea7a7469cc8 namespace=k8s.io Feb 13 15:24:42.109215 containerd[1497]: time="2025-02-13T15:24:42.109217770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:42.996765 kubelet[2575]: E0213 15:24:42.996730 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:42.998667 containerd[1497]: time="2025-02-13T15:24:42.998579943Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:24:43.018232 containerd[1497]: time="2025-02-13T15:24:43.018164793Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4\"" Feb 13 15:24:43.019149 containerd[1497]: time="2025-02-13T15:24:43.019090254Z" level=info msg="StartContainer for \"a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4\"" Feb 13 15:24:43.049525 systemd[1]: Started cri-containerd-a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4.scope - libcontainer container a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4. Feb 13 15:24:43.074851 systemd[1]: cri-containerd-a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4.scope: Deactivated successfully. Feb 13 15:24:43.077114 containerd[1497]: time="2025-02-13T15:24:43.077063082Z" level=info msg="StartContainer for \"a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4\" returns successfully" Feb 13 15:24:43.097069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4-rootfs.mount: Deactivated successfully. 
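The setters.go entry above is the kubelet flipping the node's Ready condition to False because the CNI plugin is not initialized yet (Cilium is still starting). Expressed as the typed object it corresponds to, with the values copied from that log line:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Timestamps copied from the condition in the log.
	t, _ := time.Parse(time.RFC3339, "2025-02-13T15:24:43Z")
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  metav1.NewTime(t),
		LastTransitionTime: metav1.NewTime(t),
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized",
	}
	fmt.Printf("%s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
}
```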
Feb 13 15:24:43.101758 containerd[1497]: time="2025-02-13T15:24:43.101691007Z" level=info msg="shim disconnected" id=a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4 namespace=k8s.io Feb 13 15:24:43.101758 containerd[1497]: time="2025-02-13T15:24:43.101754769Z" level=warning msg="cleaning up after shim disconnected" id=a9e99bd327b61e0a868a14dc74e6de8eba6d048682c8c84ee1daeade0f4459d4 namespace=k8s.io Feb 13 15:24:43.101758 containerd[1497]: time="2025-02-13T15:24:43.101763445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:43.527355 kubelet[2575]: I0213 15:24:43.527247 2575 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:24:43Z","lastTransitionTime":"2025-02-13T15:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:24:44.001522 kubelet[2575]: E0213 15:24:44.001466 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:44.004130 containerd[1497]: time="2025-02-13T15:24:44.004066327Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:24:44.070074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12844004.mount: Deactivated successfully. Feb 13 15:24:44.072503 containerd[1497]: time="2025-02-13T15:24:44.072436594Z" level=info msg="CreateContainer within sandbox \"0a6198f9f1ad6d9e9599f9d96958191957481cf52daadf2a6a28481ae7c46cf8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2448f6fb81c2a428ada1ddd999a1c9c9279c5ba4cb3507a83d7941ed80e50ed1\"" Feb 13 15:24:44.081745 containerd[1497]: time="2025-02-13T15:24:44.081690769Z" level=info msg="StartContainer for \"2448f6fb81c2a428ada1ddd999a1c9c9279c5ba4cb3507a83d7941ed80e50ed1\"" Feb 13 15:24:44.111479 systemd[1]: Started cri-containerd-2448f6fb81c2a428ada1ddd999a1c9c9279c5ba4cb3507a83d7941ed80e50ed1.scope - libcontainer container 2448f6fb81c2a428ada1ddd999a1c9c9279c5ba4cb3507a83d7941ed80e50ed1. 
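A small aside on the mount unit names scattered through these entries: systemd derives them from filesystem paths, turning "/" into "-" and escaping a literal "-" (and other special bytes) as \xNN, which is why the containerd tmpmount shows up as ...containerd\x2dmount12844004.mount. Below is a tiny decoder for just the \xNN escapes so such names read back as paths; real escaping and unescaping is what systemd-escape does, not this sketch:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeHex decodes systemd-style \xNN escapes in a unit name.
func unescapeHex(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeHex(`var-lib-containerd-tmpmounts-containerd\x2dmount12844004.mount`))
	// prints: var-lib-containerd-tmpmounts-containerd-mount12844004.mount
}
```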
Feb 13 15:24:44.144663 containerd[1497]: time="2025-02-13T15:24:44.144617361Z" level=info msg="StartContainer for \"2448f6fb81c2a428ada1ddd999a1c9c9279c5ba4cb3507a83d7941ed80e50ed1\" returns successfully" Feb 13 15:24:44.593361 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 15:24:45.004732 kubelet[2575]: E0213 15:24:45.004692 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:45.018180 kubelet[2575]: I0213 15:24:45.017830 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-49znr" podStartSLOduration=6.017808081 podStartE2EDuration="6.017808081s" podCreationTimestamp="2025-02-13 15:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:24:45.017559618 +0000 UTC m=+93.364700319" watchObservedRunningTime="2025-02-13 15:24:45.017808081 +0000 UTC m=+93.364948782" Feb 13 15:24:46.256762 kubelet[2575]: E0213 15:24:46.256637 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:46.739507 kubelet[2575]: E0213 15:24:46.739463 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:47.674896 systemd-networkd[1409]: lxc_health: Link UP Feb 13 15:24:47.689463 systemd-networkd[1409]: lxc_health: Gained carrier Feb 13 15:24:48.258356 kubelet[2575]: E0213 15:24:48.257607 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:48.486651 systemd[1]: run-containerd-runc-k8s.io-2448f6fb81c2a428ada1ddd999a1c9c9279c5ba4cb3507a83d7941ed80e50ed1-runc.BTwgtM.mount: Deactivated successfully. Feb 13 15:24:48.740017 kubelet[2575]: E0213 15:24:48.739961 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:48.757472 systemd-networkd[1409]: lxc_health: Gained IPv6LL Feb 13 15:24:49.012681 kubelet[2575]: E0213 15:24:49.012565 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:50.014048 kubelet[2575]: E0213 15:24:50.013964 2575 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:54.845144 sshd[4424]: Connection closed by 10.0.0.1 port 51504 Feb 13 15:24:54.845617 sshd-session[4418]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:54.849518 systemd[1]: sshd@27-10.0.0.18:22-10.0.0.1:51504.service: Deactivated successfully. Feb 13 15:24:54.851373 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:24:54.852128 systemd-logind[1481]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:24:54.852888 systemd-logind[1481]: Removed session 28.
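The pod_startup_latency_tracker entry above reports podStartSLOduration=6.017808081 for cilium-49znr. That figure lines up with watchObservedRunningTime minus podCreationTimestamp from the same entry, with no image-pull time subtracted (both pull timestamps are the zero value). A quick check of the arithmetic using the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry above (kubelet's default Go formatting).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-02-13 15:24:39 +0000 UTC")
	running, _ := time.Parse(layout, "2025-02-13 15:24:45.017808081 +0000 UTC")
	fmt.Println(running.Sub(created)) // 6.017808081s
}
```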