May 15 00:07:23.958885 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:18:55 -00 2025 May 15 00:07:23.958909 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba May 15 00:07:23.958921 kernel: BIOS-provided physical RAM map: May 15 00:07:23.958928 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 00:07:23.958934 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 00:07:23.958940 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 00:07:23.958947 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 15 00:07:23.958965 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 00:07:23.958972 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable May 15 00:07:23.958978 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 15 00:07:23.958987 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable May 15 00:07:23.958994 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 15 00:07:23.959000 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 15 00:07:23.959006 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 15 00:07:23.959017 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 15 00:07:23.959024 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 00:07:23.959033 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 15 
00:07:23.959040 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 15 00:07:23.959047 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 15 00:07:23.959054 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 15 00:07:23.959061 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 15 00:07:23.959069 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 00:07:23.959079 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 15 00:07:23.959088 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 00:07:23.959097 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 15 00:07:23.959106 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 00:07:23.959115 kernel: NX (Execute Disable) protection: active May 15 00:07:23.959127 kernel: APIC: Static calls initialized May 15 00:07:23.959134 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 15 00:07:23.959141 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable May 15 00:07:23.959147 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 15 00:07:23.959154 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable May 15 00:07:23.959160 kernel: extended physical RAM map: May 15 00:07:23.959167 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 00:07:23.959174 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 00:07:23.959181 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 00:07:23.959187 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 15 00:07:23.959194 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 00:07:23.959203 kernel: reserve setup_data: [mem 
0x000000000080c000-0x0000000000810fff] usable May 15 00:07:23.959210 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS May 15 00:07:23.959221 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable May 15 00:07:23.959228 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable May 15 00:07:23.959235 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable May 15 00:07:23.959242 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable May 15 00:07:23.959249 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable May 15 00:07:23.959259 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved May 15 00:07:23.959266 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable May 15 00:07:23.959273 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved May 15 00:07:23.959280 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data May 15 00:07:23.959288 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 00:07:23.959295 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable May 15 00:07:23.959302 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved May 15 00:07:23.959309 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS May 15 00:07:23.959316 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable May 15 00:07:23.959326 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved May 15 00:07:23.959333 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 00:07:23.959340 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved May 15 00:07:23.959347 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] 
reserved May 15 00:07:23.959354 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved May 15 00:07:23.959361 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 00:07:23.959368 kernel: efi: EFI v2.7 by EDK II May 15 00:07:23.959375 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 May 15 00:07:23.959382 kernel: random: crng init done May 15 00:07:23.959389 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map May 15 00:07:23.959396 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved May 15 00:07:23.959409 kernel: secureboot: Secure boot disabled May 15 00:07:23.959416 kernel: SMBIOS 2.8 present. May 15 00:07:23.959423 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 May 15 00:07:23.959430 kernel: Hypervisor detected: KVM May 15 00:07:23.959438 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 00:07:23.959445 kernel: kvm-clock: using sched offset of 5123378300 cycles May 15 00:07:23.959453 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 00:07:23.959460 kernel: tsc: Detected 2794.746 MHz processor May 15 00:07:23.959467 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 00:07:23.959475 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 00:07:23.959485 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 May 15 00:07:23.959492 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 15 00:07:23.959500 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 00:07:23.959507 kernel: Using GB pages for direct mapping May 15 00:07:23.959514 kernel: ACPI: Early table checksum verification disabled May 15 00:07:23.959522 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 15 00:07:23.959529 kernel: ACPI: XSDT 0x000000009CB7D0E8 
000054 (v01 BOCHS BXPC 00000001 01000013) May 15 00:07:23.959536 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:23.959544 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:23.959554 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 15 00:07:23.959561 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:23.959568 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:23.959576 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:23.959583 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:07:23.959590 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 15 00:07:23.959598 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 15 00:07:23.959605 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 15 00:07:23.959612 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 15 00:07:23.959622 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 15 00:07:23.959629 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 15 00:07:23.959636 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 15 00:07:23.959643 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 15 00:07:23.959651 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 15 00:07:23.959658 kernel: No NUMA configuration found May 15 00:07:23.959665 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] May 15 00:07:23.959672 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] May 15 00:07:23.959679 kernel: Zone ranges: May 15 00:07:23.959687 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 
00:07:23.959697 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] May 15 00:07:23.959704 kernel: Normal empty May 15 00:07:23.959711 kernel: Movable zone start for each node May 15 00:07:23.959718 kernel: Early memory node ranges May 15 00:07:23.959728 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 15 00:07:23.959737 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 15 00:07:23.959748 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 15 00:07:23.959757 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] May 15 00:07:23.959766 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] May 15 00:07:23.959776 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] May 15 00:07:23.959783 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] May 15 00:07:23.959790 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] May 15 00:07:23.959798 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] May 15 00:07:23.959805 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:07:23.959812 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 15 00:07:23.959842 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 15 00:07:23.959855 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:07:23.959865 kernel: On node 0, zone DMA: 239 pages in unavailable ranges May 15 00:07:23.959875 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges May 15 00:07:23.959886 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges May 15 00:07:23.959900 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges May 15 00:07:23.959915 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges May 15 00:07:23.959923 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 00:07:23.959931 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 00:07:23.959939 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 
00:07:23.959946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 00:07:23.959965 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 00:07:23.959973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 00:07:23.959981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 00:07:23.959988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 00:07:23.959996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 00:07:23.960003 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 00:07:23.960011 kernel: TSC deadline timer available May 15 00:07:23.960019 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 15 00:07:23.960026 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 00:07:23.960037 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 00:07:23.960044 kernel: kvm-guest: setup PV sched yield May 15 00:07:23.960052 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices May 15 00:07:23.960059 kernel: Booting paravirtualized kernel on KVM May 15 00:07:23.960067 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 00:07:23.960075 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 15 00:07:23.960083 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 15 00:07:23.960090 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 15 00:07:23.960098 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 00:07:23.960108 kernel: kvm-guest: PV spinlocks enabled May 15 00:07:23.960115 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 00:07:23.960124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro 
consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba May 15 00:07:23.960132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:07:23.960140 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:07:23.960148 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:07:23.960155 kernel: Fallback order for Node 0: 0 May 15 00:07:23.960163 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 May 15 00:07:23.960173 kernel: Policy zone: DMA32 May 15 00:07:23.960180 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:07:23.960188 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 43000K init, 2192K bss, 175776K reserved, 0K cma-reserved) May 15 00:07:23.960196 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:07:23.960203 kernel: ftrace: allocating 37946 entries in 149 pages May 15 00:07:23.960211 kernel: ftrace: allocated 149 pages with 4 groups May 15 00:07:23.960218 kernel: Dynamic Preempt: voluntary May 15 00:07:23.960226 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:07:23.960235 kernel: rcu: RCU event tracing is enabled. May 15 00:07:23.960245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:07:23.960253 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:07:23.960261 kernel: Rude variant of Tasks RCU enabled. May 15 00:07:23.960271 kernel: Tracing variant of Tasks RCU enabled. May 15 00:07:23.960282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 00:07:23.960292 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:07:23.960303 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 00:07:23.960313 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:07:23.960321 kernel: Console: colour dummy device 80x25 May 15 00:07:23.960331 kernel: printk: console [ttyS0] enabled May 15 00:07:23.960339 kernel: ACPI: Core revision 20230628 May 15 00:07:23.960347 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 00:07:23.960354 kernel: APIC: Switch to symmetric I/O mode setup May 15 00:07:23.960362 kernel: x2apic enabled May 15 00:07:23.960369 kernel: APIC: Switched APIC routing to: physical x2apic May 15 00:07:23.960380 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 15 00:07:23.960388 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 15 00:07:23.960396 kernel: kvm-guest: setup PV IPIs May 15 00:07:23.960406 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 00:07:23.960414 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 00:07:23.960421 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) May 15 00:07:23.960429 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 00:07:23.960436 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 00:07:23.960444 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 00:07:23.960452 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 00:07:23.960459 kernel: Spectre V2 : Mitigation: Retpolines May 15 00:07:23.960467 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 00:07:23.960477 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 00:07:23.960485 kernel: RETBleed: Mitigation: untrained return thunk May 15 00:07:23.960493 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 00:07:23.960500 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 15 00:07:23.960508 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 15 00:07:23.960516 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 15 00:07:23.960524 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 15 00:07:23.960531 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 00:07:23.960542 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 00:07:23.960549 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 00:07:23.960557 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 00:07:23.960565 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 15 00:07:23.960572 kernel: Freeing SMP alternatives memory: 32K May 15 00:07:23.960580 kernel: pid_max: default: 32768 minimum: 301 May 15 00:07:23.960587 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:07:23.960595 kernel: landlock: Up and running. May 15 00:07:23.960602 kernel: SELinux: Initializing. May 15 00:07:23.960610 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:07:23.960620 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:07:23.960628 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 00:07:23.960636 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:07:23.960643 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:07:23.960651 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:07:23.960659 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 00:07:23.960666 kernel: ... version: 0 May 15 00:07:23.960674 kernel: ... bit width: 48 May 15 00:07:23.960684 kernel: ... generic registers: 6 May 15 00:07:23.960692 kernel: ... value mask: 0000ffffffffffff May 15 00:07:23.960699 kernel: ... max period: 00007fffffffffff May 15 00:07:23.960707 kernel: ... fixed-purpose events: 0 May 15 00:07:23.960714 kernel: ... event mask: 000000000000003f May 15 00:07:23.960722 kernel: signal: max sigframe size: 1776 May 15 00:07:23.960733 kernel: rcu: Hierarchical SRCU implementation. May 15 00:07:23.960744 kernel: rcu: Max phase no-delay instances is 400. May 15 00:07:23.960753 kernel: smp: Bringing up secondary CPUs ... May 15 00:07:23.960764 kernel: smpboot: x86: Booting SMP configuration: May 15 00:07:23.960772 kernel: .... 
node #0, CPUs: #1 #2 #3 May 15 00:07:23.960780 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:07:23.960790 kernel: smpboot: Max logical packages: 1 May 15 00:07:23.960801 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) May 15 00:07:23.960811 kernel: devtmpfs: initialized May 15 00:07:23.960857 kernel: x86/mm: Memory block size: 128MB May 15 00:07:23.960866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 15 00:07:23.960874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 15 00:07:23.960884 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) May 15 00:07:23.960898 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 15 00:07:23.960906 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) May 15 00:07:23.960914 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 15 00:07:23.960922 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:07:23.960929 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:07:23.960937 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:07:23.960945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:07:23.960961 kernel: audit: initializing netlink subsys (disabled) May 15 00:07:23.960972 kernel: audit: type=2000 audit(1747267642.502:1): state=initialized audit_enabled=0 res=1 May 15 00:07:23.960980 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:07:23.960988 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 00:07:23.960995 kernel: cpuidle: using governor menu May 15 00:07:23.961003 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:07:23.961010 kernel: dca service started, version 1.12.1 May 15 
00:07:23.961018 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) May 15 00:07:23.961026 kernel: PCI: Using configuration type 1 for base access May 15 00:07:23.961033 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 15 00:07:23.961044 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:07:23.961051 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:07:23.961059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:07:23.961067 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:07:23.961074 kernel: ACPI: Added _OSI(Module Device) May 15 00:07:23.961082 kernel: ACPI: Added _OSI(Processor Device) May 15 00:07:23.961089 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:07:23.961097 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:07:23.961104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:07:23.961115 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 15 00:07:23.961122 kernel: ACPI: Interpreter enabled May 15 00:07:23.961130 kernel: ACPI: PM: (supports S0 S3 S5) May 15 00:07:23.961137 kernel: ACPI: Using IOAPIC for interrupt routing May 15 00:07:23.961145 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 00:07:23.961152 kernel: PCI: Using E820 reservations for host bridge windows May 15 00:07:23.961160 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 00:07:23.961168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:07:23.961398 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:07:23.961543 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 00:07:23.961855 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 
00:07:23.961880 kernel: PCI host bridge to bus 0000:00 May 15 00:07:23.962059 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 00:07:23.962181 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 00:07:23.962299 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 00:07:23.962422 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] May 15 00:07:23.962552 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] May 15 00:07:23.962677 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] May 15 00:07:23.962807 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:07:23.963035 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 15 00:07:23.963191 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 15 00:07:23.963339 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 15 00:07:23.963476 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 15 00:07:23.963635 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 15 00:07:23.963839 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 15 00:07:23.964034 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 00:07:23.964197 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:07:23.964359 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 15 00:07:23.964503 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 15 00:07:23.964641 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] May 15 00:07:23.964804 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 15 00:07:23.964963 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 15 00:07:23.965117 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 15 00:07:23.965301 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] May 15 00:07:23.965500 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 00:07:23.965668 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 15 00:07:23.965942 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 15 00:07:23.966098 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] May 15 00:07:23.966227 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 15 00:07:23.966370 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 15 00:07:23.966510 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 00:07:23.966665 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 15 00:07:23.966837 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 15 00:07:23.967014 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 15 00:07:23.967174 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 15 00:07:23.967332 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 15 00:07:23.967344 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 00:07:23.967352 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 00:07:23.967360 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 00:07:23.967368 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 00:07:23.967382 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 00:07:23.967390 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 00:07:23.967397 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 00:07:23.967405 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 00:07:23.967413 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 00:07:23.967420 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 00:07:23.967428 kernel: ACPI: PCI: 
Interrupt link GSIC configured for IRQ 18
May 15 00:07:23.967436 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 00:07:23.967443 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 00:07:23.967454 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 00:07:23.967461 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 00:07:23.967472 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 00:07:23.967482 kernel: iommu: Default domain type: Translated
May 15 00:07:23.967493 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 00:07:23.967503 kernel: efivars: Registered efivars operations
May 15 00:07:23.967513 kernel: PCI: Using ACPI for IRQ routing
May 15 00:07:23.967522 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 00:07:23.967530 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 15 00:07:23.967543 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 15 00:07:23.967550 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 15 00:07:23.967558 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 15 00:07:23.967566 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 15 00:07:23.967573 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 15 00:07:23.967581 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 15 00:07:23.967588 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 15 00:07:23.967724 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 00:07:23.967925 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 00:07:23.968092 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 00:07:23.968108 kernel: vgaarb: loaded
May 15 00:07:23.968118 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 00:07:23.968128 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 00:07:23.968138 kernel: clocksource: Switched to clocksource kvm-clock
May 15 00:07:23.968148 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:07:23.968160 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:07:23.968171 kernel: pnp: PnP ACPI init
May 15 00:07:23.968416 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 15 00:07:23.968432 kernel: pnp: PnP ACPI: found 6 devices
May 15 00:07:23.968441 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 00:07:23.968449 kernel: NET: Registered PF_INET protocol family
May 15 00:07:23.968457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:07:23.968490 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:07:23.968501 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:07:23.968509 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:07:23.968523 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 00:07:23.968534 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:07:23.968546 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:07:23.968556 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:07:23.968564 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:07:23.968575 kernel: NET: Registered PF_XDP protocol family
May 15 00:07:23.968762 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 15 00:07:23.968928 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 15 00:07:23.969079 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 00:07:23.969222 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 00:07:23.969349 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 00:07:23.969481 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 15 00:07:23.969640 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 15 00:07:23.969812 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 15 00:07:23.969845 kernel: PCI: CLS 0 bytes, default 64
May 15 00:07:23.969857 kernel: Initialise system trusted keyrings
May 15 00:07:23.969874 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:07:23.969884 kernel: Key type asymmetric registered
May 15 00:07:23.969895 kernel: Asymmetric key parser 'x509' registered
May 15 00:07:23.969906 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 15 00:07:23.969917 kernel: io scheduler mq-deadline registered
May 15 00:07:23.969925 kernel: io scheduler kyber registered
May 15 00:07:23.969933 kernel: io scheduler bfq registered
May 15 00:07:23.969941 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 00:07:23.969957 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 00:07:23.969966 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 00:07:23.969979 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 00:07:23.969990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:07:23.969998 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 00:07:23.970006 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 00:07:23.970014 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 00:07:23.970025 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 00:07:23.970184 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 00:07:23.970197 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 00:07:23.970321 kernel: rtc_cmos 00:04: registered as rtc0
May 15 00:07:23.970476 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T00:07:23 UTC (1747267643)
May 15 00:07:23.970599 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 15 00:07:23.970610 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 15 00:07:23.970618 kernel: efifb: probing for efifb
May 15 00:07:23.970632 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 15 00:07:23.970640 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 15 00:07:23.970659 kernel: efifb: scrolling: redraw
May 15 00:07:23.970678 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 15 00:07:23.970696 kernel: Console: switching to colour frame buffer device 160x50
May 15 00:07:23.970704 kernel: fb0: EFI VGA frame buffer device
May 15 00:07:23.970712 kernel: pstore: Using crash dump compression: deflate
May 15 00:07:23.970720 kernel: pstore: Registered efi_pstore as persistent store backend
May 15 00:07:23.970728 kernel: NET: Registered PF_INET6 protocol family
May 15 00:07:23.970740 kernel: Segment Routing with IPv6
May 15 00:07:23.970748 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:07:23.970756 kernel: NET: Registered PF_PACKET protocol family
May 15 00:07:23.970765 kernel: Key type dns_resolver registered
May 15 00:07:23.970779 kernel: IPI shorthand broadcast: enabled
May 15 00:07:23.970789 kernel: sched_clock: Marking stable (1260004036, 192166231)->(1523470974, -71300707)
May 15 00:07:23.970798 kernel: registered taskstats version 1
May 15 00:07:23.970806 kernel: Loading compiled-in X.509 certificates
May 15 00:07:23.970814 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 24318a9a7bb74dcc18d1d3d4ac63358025b8c253'
May 15 00:07:23.970906 kernel: Key type .fscrypt registered
May 15 00:07:23.970915 kernel: Key type fscrypt-provisioning registered
May 15 00:07:23.970923 kernel: ima: No TPM chip found, activating TPM-bypass!
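The rtc_cmos entry above reports the same instant in two forms: a calendar timestamp (2025-05-15T00:07:23 UTC) and a Unix epoch value (1747267643). A standalone sketch (an annotation, not part of the boot flow) confirming the two agree:

```python
from datetime import datetime, timezone

# Convert the epoch value printed by rtc_cmos back to a UTC calendar time.
epoch = 1747267643
utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(utc.isoformat())  # 2025-05-15T00:07:23+00:00, matching the log line
```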
May 15 00:07:23.970931 kernel: ima: Allocated hash algorithm: sha1
May 15 00:07:23.970939 kernel: ima: No architecture policies found
May 15 00:07:23.970947 kernel: clk: Disabling unused clocks
May 15 00:07:23.970963 kernel: Freeing unused kernel image (initmem) memory: 43000K
May 15 00:07:23.970972 kernel: Write protecting the kernel read-only data: 36864k
May 15 00:07:23.970980 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
May 15 00:07:23.970991 kernel: Run /init as init process
May 15 00:07:23.970999 kernel: with arguments:
May 15 00:07:23.971008 kernel: /init
May 15 00:07:23.971016 kernel: with environment:
May 15 00:07:23.971023 kernel: HOME=/
May 15 00:07:23.971031 kernel: TERM=linux
May 15 00:07:23.971039 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:07:23.971050 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 00:07:23.971063 systemd[1]: Detected virtualization kvm.
May 15 00:07:23.971072 systemd[1]: Detected architecture x86-64.
May 15 00:07:23.971081 systemd[1]: Running in initrd.
May 15 00:07:23.971089 systemd[1]: No hostname configured, using default hostname.
May 15 00:07:23.971097 systemd[1]: Hostname set to .
May 15 00:07:23.971106 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:07:23.971114 systemd[1]: Queued start job for default target initrd.target.
May 15 00:07:23.971122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:07:23.971134 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:07:23.971143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 00:07:23.971152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:07:23.971161 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 00:07:23.971169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 00:07:23.971180 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 00:07:23.971189 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 00:07:23.971200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:07:23.971208 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:07:23.971217 systemd[1]: Reached target paths.target - Path Units.
May 15 00:07:23.971225 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:07:23.971234 systemd[1]: Reached target swap.target - Swaps.
May 15 00:07:23.971243 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:07:23.971260 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:07:23.971275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:07:23.971289 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 00:07:23.971298 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 00:07:23.971306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:07:23.971315 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:07:23.971323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:07:23.971331 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:07:23.971340 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 00:07:23.971348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:07:23.971356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 00:07:23.971368 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:07:23.971376 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:07:23.971385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:07:23.971393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:23.971402 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 00:07:23.971410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:07:23.971418 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:07:23.971431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:07:23.971468 systemd-journald[194]: Collecting audit messages is disabled.
May 15 00:07:23.971490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:23.971499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:07:23.971508 systemd-journald[194]: Journal started
May 15 00:07:23.971526 systemd-journald[194]: Runtime Journal (/run/log/journal/b47f6c6347874ff1a4533f45818c33f8) is 6.0M, max 48.3M, 42.2M free.
May 15 00:07:23.965813 systemd-modules-load[195]: Inserted module 'overlay'
May 15 00:07:23.976425 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:07:23.974813 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:07:23.976527 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:07:23.978010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:07:23.995653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:07:23.999450 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:07:24.005534 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:07:24.002716 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 00:07:24.008348 systemd-modules-load[195]: Inserted module 'br_netfilter'
May 15 00:07:24.009581 kernel: Bridge firewalling registered
May 15 00:07:24.009613 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:07:24.011509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:07:24.019571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:07:24.022108 dracut-cmdline[222]: dracut-dracut-053
May 15 00:07:24.025633 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=676605e5288ab6a23835eefe0cbb74879b800df0a2a85ac0781041b13f2d6bba
May 15 00:07:24.031879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:07:24.040074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:07:24.078093 systemd-resolved[248]: Positive Trust Anchors:
May 15 00:07:24.078117 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:07:24.078157 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:07:24.081021 systemd-resolved[248]: Defaulting to hostname 'linux'.
May 15 00:07:24.082361 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:07:24.089327 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:07:24.140896 kernel: SCSI subsystem initialized
May 15 00:07:24.154868 kernel: Loading iSCSI transport class v2.0-870.
May 15 00:07:24.169878 kernel: iscsi: registered transport (tcp)
May 15 00:07:24.197887 kernel: iscsi: registered transport (qla4xxx)
May 15 00:07:24.198006 kernel: QLogic iSCSI HBA Driver
May 15 00:07:24.269671 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 00:07:24.281027 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 00:07:24.313889 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
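The positive trust anchor logged by systemd-resolved above is the DNSSEC DS record for the root zone: owner `.`, key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest in hex. A standalone sketch (an annotation, not part of the boot flow) splitting the record into its fields:

```python
# The DS record string as it appears in the systemd-resolved log entry above.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds.split()

print(key_tag, algorithm, digest_type)  # 20326 8 2
print(len(digest) // 2)                 # 32 bytes, consistent with digest type 2 (SHA-256)
```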
May 15 00:07:24.313997 kernel: device-mapper: uevent: version 1.0.3
May 15 00:07:24.315278 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 00:07:24.361917 kernel: raid6: avx2x4 gen() 22063 MB/s
May 15 00:07:24.378877 kernel: raid6: avx2x2 gen() 18101 MB/s
May 15 00:07:24.396054 kernel: raid6: avx2x1 gen() 23062 MB/s
May 15 00:07:24.396142 kernel: raid6: using algorithm avx2x1 gen() 23062 MB/s
May 15 00:07:24.414069 kernel: raid6: .... xor() 15104 MB/s, rmw enabled
May 15 00:07:24.414179 kernel: raid6: using avx2x2 recovery algorithm
May 15 00:07:24.438902 kernel: xor: automatically using best checksumming function avx
May 15 00:07:24.633915 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 00:07:24.650773 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:07:24.664132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:07:24.679963 systemd-udevd[413]: Using default interface naming scheme 'v255'.
May 15 00:07:24.684992 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:07:24.696105 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 00:07:24.716426 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
May 15 00:07:24.759541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:07:24.776178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:07:24.871666 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:07:24.884159 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 00:07:24.902542 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 00:07:24.906378 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:07:24.908994 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:07:24.911805 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:07:24.921891 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 15 00:07:24.922162 kernel: cryptd: max_cpu_qlen set to 1000
May 15 00:07:24.927076 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 00:07:24.933863 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 00:07:24.943849 kernel: AVX2 version of gcm_enc/dec engaged.
May 15 00:07:24.943913 kernel: AES CTR mode by8 optimization enabled
May 15 00:07:24.951968 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:07:24.957211 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 00:07:24.957268 kernel: GPT:9289727 != 19775487
May 15 00:07:24.957283 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 00:07:24.957297 kernel: GPT:9289727 != 19775487
May 15 00:07:24.958452 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 00:07:24.959265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:07:24.961979 kernel: libata version 3.00 loaded.
May 15 00:07:24.969358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:07:24.970874 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:07:24.972100 kernel: ahci 0000:00:1f.2: version 3.0
May 15 00:07:24.973516 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 00:07:24.975467 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
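The repeated `GPT:9289727 != 19775487` messages above compare where the primary GPT header says the backup header lives against the device's actual last LBA. The backup GPT header must occupy the final sector, so the mismatch indicates the virtual disk (19775488 blocks per the virtio_blk line) is larger than the disk the image was originally written for. The arithmetic, as a standalone sketch (an annotation, not part of the boot flow):

```python
# Values taken from the virtio_blk and GPT log entries above.
total_blocks = 19775488                     # [vda] 19775488 512-byte logical blocks
expected_alt_header_lba = total_blocks - 1  # backup GPT header sits in the last LBA
reported_alt_header_lba = 9289727           # where the primary header currently points

print(expected_alt_header_lba)                            # 19775487, the right-hand side of the log message
print(reported_alt_header_lba < expected_alt_header_lba)  # True: image built for a smaller disk
```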
May 15 00:07:24.981112 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 15 00:07:24.981381 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 00:07:24.981740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:07:24.983032 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:24.987451 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:24.995867 kernel: BTRFS: device fsid 588f8840-d63c-4068-b03d-1642b4e6460f devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (476)
May 15 00:07:24.997838 kernel: scsi host0: ahci
May 15 00:07:24.997894 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468)
May 15 00:07:25.000909 kernel: scsi host1: ahci
May 15 00:07:25.001302 kernel: scsi host2: ahci
May 15 00:07:25.001908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:25.014129 kernel: scsi host3: ahci
May 15 00:07:25.014378 kernel: scsi host4: ahci
May 15 00:07:25.014573 kernel: scsi host5: ahci
May 15 00:07:25.014778 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 15 00:07:25.014797 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 15 00:07:25.014813 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 15 00:07:25.017592 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 15 00:07:25.017612 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 15 00:07:25.017626 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 15 00:07:25.028342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 00:07:25.046921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 00:07:25.061380 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 00:07:25.063099 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 00:07:25.075666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 00:07:25.092346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 00:07:25.095364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:07:25.095502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:25.099710 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:25.104411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:25.111488 disk-uuid[556]: Primary Header is updated.
May 15 00:07:25.111488 disk-uuid[556]: Secondary Entries is updated.
May 15 00:07:25.111488 disk-uuid[556]: Secondary Header is updated.
May 15 00:07:25.116899 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:07:25.123866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:07:25.132717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:25.142422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:07:25.173675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:07:25.326893 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 00:07:25.327154 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 15 00:07:25.329869 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 00:07:25.330267 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 00:07:25.330282 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 00:07:25.332213 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 15 00:07:25.332241 kernel: ata3.00: applying bridge limits
May 15 00:07:25.332857 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 00:07:25.333879 kernel: ata3.00: configured for UDMA/100
May 15 00:07:25.334878 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 15 00:07:25.390891 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 15 00:07:25.391374 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 15 00:07:25.405894 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 15 00:07:26.145882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:07:26.146456 disk-uuid[558]: The operation has completed successfully.
May 15 00:07:26.183732 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 00:07:26.183890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 00:07:26.216267 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 00:07:26.221680 sh[597]: Success
May 15 00:07:26.241417 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 15 00:07:26.305706 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 00:07:26.320425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 00:07:26.323809 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 00:07:26.375307 kernel: BTRFS info (device dm-0): first mount of filesystem 588f8840-d63c-4068-b03d-1642b4e6460f
May 15 00:07:26.375382 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 15 00:07:26.375416 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 00:07:26.376578 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 00:07:26.377611 kernel: BTRFS info (device dm-0): using free space tree
May 15 00:07:26.394893 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 00:07:26.398137 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 00:07:26.410088 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 00:07:26.444218 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 00:07:26.452173 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 00:07:26.452244 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:07:26.452259 kernel: BTRFS info (device vda6): using free space tree
May 15 00:07:26.457868 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:07:26.471156 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 00:07:26.488551 kernel: BTRFS info (device vda6): last unmount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 00:07:26.608413 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:07:26.621203 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:07:26.652111 systemd-networkd[775]: lo: Link UP
May 15 00:07:26.652129 systemd-networkd[775]: lo: Gained carrier
May 15 00:07:26.654402 systemd-networkd[775]: Enumeration completed
May 15 00:07:26.655686 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:07:26.655687 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:07:26.655700 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:07:26.657118 systemd-networkd[775]: eth0: Link UP
May 15 00:07:26.657124 systemd-networkd[775]: eth0: Gained carrier
May 15 00:07:26.657135 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:07:26.663505 systemd[1]: Reached target network.target - Network.
May 15 00:07:26.687014 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:07:26.753820 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 00:07:26.765266 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
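systemd-networkd reports a DHCPv4 lease of 10.0.0.104/16 with gateway 10.0.0.1 above. A standalone sketch (an annotation, not part of the boot flow) confirming the gateway falls inside the leased network:

```python
import ipaddress

# Values from the systemd-networkd DHCPv4 log entry above.
iface = ipaddress.ip_interface("10.0.0.104/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: the gateway is on-link for this lease
```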
May 15 00:07:26.828796 ignition[780]: Ignition 2.20.0
May 15 00:07:26.828815 ignition[780]: Stage: fetch-offline
May 15 00:07:26.828875 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 15 00:07:26.828886 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:07:26.829172 ignition[780]: parsed url from cmdline: ""
May 15 00:07:26.829176 ignition[780]: no config URL provided
May 15 00:07:26.829182 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:07:26.829193 ignition[780]: no config at "/usr/lib/ignition/user.ign"
May 15 00:07:26.829230 ignition[780]: op(1): [started] loading QEMU firmware config module
May 15 00:07:26.829236 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 00:07:26.849363 ignition[780]: op(1): [finished] loading QEMU firmware config module
May 15 00:07:26.889478 ignition[780]: parsing config with SHA512: 85c2ab090a79189fba794d57e1e76441b8b792ee19d71f88dcd976922005e8fd2ceb19f700ff8a7d64a232c62676c2e6b51439f97be682ab713e3c59a516dec9
May 15 00:07:26.894912 unknown[780]: fetched base config from "system"
May 15 00:07:26.895336 ignition[780]: fetch-offline: fetch-offline passed
May 15 00:07:26.894925 unknown[780]: fetched user config from "qemu"
May 15 00:07:26.895413 ignition[780]: Ignition finished successfully
May 15 00:07:26.897695 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:07:26.909884 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 00:07:26.917117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
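Ignition logs the SHA512 of the merged config before applying it; the 128-hex-character digest above has exactly that shape. A standalone sketch of the same hashing step over a hypothetical stand-in config (the actual config bytes behind the logged digest are not present in this log):

```python
import hashlib

# Hypothetical stand-in config; only the digest format is being illustrated here.
config = b'{"ignition": {"version": "3.4.0"}}'
digest = hashlib.sha512(config).hexdigest()

print(len(digest))  # 128 hex characters, the same width as the digest in the log
```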
May 15 00:07:26.933720 ignition[789]: Ignition 2.20.0
May 15 00:07:26.933737 ignition[789]: Stage: kargs
May 15 00:07:26.933998 ignition[789]: no configs at "/usr/lib/ignition/base.d"
May 15 00:07:26.934015 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:07:26.935113 ignition[789]: kargs: kargs passed
May 15 00:07:26.938647 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 00:07:26.935175 ignition[789]: Ignition finished successfully
May 15 00:07:26.951215 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 00:07:26.968140 ignition[798]: Ignition 2.20.0
May 15 00:07:26.968155 ignition[798]: Stage: disks
May 15 00:07:26.968434 ignition[798]: no configs at "/usr/lib/ignition/base.d"
May 15 00:07:26.968454 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:07:26.969650 ignition[798]: disks: disks passed
May 15 00:07:26.973117 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 00:07:26.969724 ignition[798]: Ignition finished successfully
May 15 00:07:26.975266 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 00:07:26.977005 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 00:07:26.979962 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:07:26.982286 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:07:26.984991 systemd[1]: Reached target basic.target - Basic System.
May 15 00:07:26.999215 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 00:07:27.022193 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 00:07:27.036085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 00:07:27.045252 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 00:07:27.188875 kernel: EXT4-fs (vda9): mounted filesystem f97506c4-898a-43e3-9925-b47c40fa47d6 r/w with ordered data mode. Quota mode: none.
May 15 00:07:27.189340 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 00:07:27.192354 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 00:07:27.209179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:07:27.226194 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 00:07:27.226980 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 00:07:27.227040 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 00:07:27.227072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:07:27.238195 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 00:07:27.243671 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 00:07:27.255856 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817)
May 15 00:07:27.259962 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 00:07:27.260203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:07:27.260225 kernel: BTRFS info (device vda6): using free space tree
May 15 00:07:27.266892 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:07:27.270066 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:07:27.305254 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
May 15 00:07:27.319729 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
May 15 00:07:27.326137 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
May 15 00:07:27.331208 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 00:07:27.466109 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 00:07:27.474168 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 00:07:27.477602 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 00:07:27.484060 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 00:07:27.485559 kernel: BTRFS info (device vda6): last unmount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 00:07:27.511915 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 00:07:27.551368 ignition[933]: INFO : Ignition 2.20.0
May 15 00:07:27.551368 ignition[933]: INFO : Stage: mount
May 15 00:07:27.553356 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:07:27.553356 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:07:27.556429 ignition[933]: INFO : mount: mount passed
May 15 00:07:27.557402 ignition[933]: INFO : Ignition finished successfully
May 15 00:07:27.560903 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 00:07:27.576048 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 00:07:27.586884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:07:27.599864 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943)
May 15 00:07:27.602070 kernel: BTRFS info (device vda6): first mount of filesystem 850231c6-8b0d-4143-afe9-f74782b94c61
May 15 00:07:27.602095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:07:27.602108 kernel: BTRFS info (device vda6): using free space tree
May 15 00:07:27.605851 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:07:27.607562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:07:27.640583 ignition[960]: INFO : Ignition 2.20.0
May 15 00:07:27.640583 ignition[960]: INFO : Stage: files
May 15 00:07:27.642806 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:07:27.642806 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:07:27.642806 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
May 15 00:07:27.642806 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 00:07:27.642806 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 00:07:27.651961 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 00:07:27.654190 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 00:07:27.655780 unknown[960]: wrote ssh authorized keys file for user: core
May 15 00:07:27.657119 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 00:07:27.658689 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 00:07:27.658689 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 00:07:27.735520 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 00:07:27.927021 systemd-networkd[775]: eth0: Gained IPv6LL
May 15 00:07:27.972273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 00:07:27.972273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:07:27.976323 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 00:07:28.304237 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 00:07:29.269460 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:07:29.269460 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 00:07:29.274321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 15 00:07:29.713461 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 00:07:30.085630 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 15 00:07:30.085630 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 00:07:30.091729 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:07:30.094731 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:07:30.094731 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 00:07:30.094731 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 00:07:30.099975 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:07:30.099975 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:07:30.099975 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 00:07:30.099975 ignition[960]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 15 00:07:30.142784 ignition[960]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:07:31.376126 ignition[960]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:07:31.378401 ignition[960]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 00:07:31.378401 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 15 00:07:31.378401 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 15 00:07:31.378401 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:07:31.378401 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:07:31.378401 ignition[960]: INFO : files: files passed
May 15 00:07:31.378401 ignition[960]: INFO : Ignition finished successfully
May 15 00:07:31.450326 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 00:07:31.489133 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 00:07:31.491491 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 00:07:31.493550 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 00:07:31.493674 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 00:07:31.512519 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 00:07:31.545560 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:07:31.547568 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:07:31.549296 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:07:31.551350 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:07:31.552300 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 00:07:31.565061 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 00:07:31.617931 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 00:07:31.618101 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 00:07:31.835993 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 00:07:31.838304 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 00:07:31.838904 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 00:07:31.840216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 00:07:31.860668 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:07:32.157052 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 00:07:32.167967 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 00:07:32.188811 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:07:32.191049 systemd[1]: Stopped target timers.target - Timer Units.
May 15 00:07:32.193132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 00:07:32.193272 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:07:32.195516 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 00:07:32.197241 systemd[1]: Stopped target basic.target - Basic System.
May 15 00:07:32.199278 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 00:07:32.201325 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:07:32.203354 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 00:07:32.205529 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 00:07:32.219920 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:07:32.222704 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 00:07:32.224718 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 00:07:32.226936 systemd[1]: Stopped target swap.target - Swaps.
May 15 00:07:32.228900 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 00:07:32.229058 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:07:32.231288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 00:07:32.232918 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:07:32.252409 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 00:07:32.252560 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:07:32.254669 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 00:07:32.254818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 00:07:32.257140 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 00:07:32.257259 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:07:32.259317 systemd[1]: Stopped target paths.target - Path Units.
May 15 00:07:32.261060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 00:07:32.264904 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:07:32.266741 systemd[1]: Stopped target slices.target - Slice Units.
May 15 00:07:32.268803 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 00:07:32.270662 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 00:07:32.270770 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:07:32.272734 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 00:07:32.272853 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:07:32.275354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 00:07:32.275491 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:07:32.277478 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 00:07:32.277592 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 00:07:32.310081 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 00:07:32.311324 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 00:07:32.311460 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:07:32.314695 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 00:07:32.315883 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 00:07:32.316091 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:07:32.318885 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 00:07:32.327943 ignition[1016]: INFO : Ignition 2.20.0
May 15 00:07:32.327943 ignition[1016]: INFO : Stage: umount
May 15 00:07:32.327943 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:07:32.327943 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:07:32.327943 ignition[1016]: INFO : umount: umount passed
May 15 00:07:32.327943 ignition[1016]: INFO : Ignition finished successfully
May 15 00:07:32.318998 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:07:32.325695 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 00:07:32.325863 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 00:07:32.338793 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 00:07:32.340174 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 00:07:32.346724 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 00:07:32.348278 systemd[1]: Stopped target network.target - Network.
May 15 00:07:32.350431 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 00:07:32.350513 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 00:07:32.354032 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 00:07:32.354104 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 00:07:32.357447 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 00:07:32.358558 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 00:07:32.360898 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 00:07:32.362080 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 00:07:32.364766 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 00:07:32.367429 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 00:07:32.370277 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 00:07:32.371457 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 00:07:32.374178 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 00:07:32.375293 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 00:07:32.375900 systemd-networkd[775]: eth0: DHCPv6 lease lost
May 15 00:07:32.378867 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 00:07:32.379020 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 00:07:32.382697 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 00:07:32.383842 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 00:07:32.388309 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 00:07:32.388390 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:07:32.478157 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 00:07:32.481269 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 00:07:32.481408 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:07:32.482188 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:07:32.482244 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 00:07:32.485439 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 00:07:32.485494 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 00:07:32.485754 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 00:07:32.485811 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:07:32.486399 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:07:32.497794 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 00:07:32.498038 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 00:07:32.536120 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 00:07:32.536394 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:07:32.538934 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 00:07:32.539011 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 00:07:32.540905 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 00:07:32.540966 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:07:32.542348 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 00:07:32.542421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:07:32.543109 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 00:07:32.543180 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 00:07:32.543745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:07:32.543865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:07:32.554996 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 00:07:32.558971 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 00:07:32.559053 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:07:32.561593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:07:32.561662 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:32.568116 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 00:07:32.568273 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 00:07:32.570105 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 00:07:32.582976 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 00:07:32.593223 systemd[1]: Switching root.
May 15 00:07:32.631277 systemd-journald[194]: Journal stopped
May 15 00:07:34.607306 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
May 15 00:07:34.607408 kernel: SELinux: policy capability network_peer_controls=1
May 15 00:07:34.607423 kernel: SELinux: policy capability open_perms=1
May 15 00:07:34.607438 kernel: SELinux: policy capability extended_socket_class=1
May 15 00:07:34.607450 kernel: SELinux: policy capability always_check_network=0
May 15 00:07:34.607461 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 00:07:34.607473 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 00:07:34.607485 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 00:07:34.607503 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 00:07:34.607515 kernel: audit: type=1403 audit(1747267653.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 00:07:34.607528 systemd[1]: Successfully loaded SELinux policy in 64.355ms.
May 15 00:07:34.607561 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.283ms.
May 15 00:07:34.607579 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 00:07:34.607592 systemd[1]: Detected virtualization kvm.
May 15 00:07:34.607605 systemd[1]: Detected architecture x86-64.
May 15 00:07:34.607617 systemd[1]: Detected first boot.
May 15 00:07:34.607630 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:07:34.607643 zram_generator::config[1063]: No configuration found.
May 15 00:07:34.607656 systemd[1]: Populated /etc with preset unit settings.
May 15 00:07:34.607675 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 00:07:34.607696 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 00:07:34.607717 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 00:07:34.607739 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 00:07:34.607753 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 00:07:34.607765 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 00:07:34.607778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 00:07:34.607790 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 00:07:34.607803 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 00:07:34.607836 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 00:07:34.607856 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 00:07:34.607869 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:07:34.607882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:07:34.607895 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 00:07:34.607908 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 00:07:34.607921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 00:07:34.607934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:07:34.607947 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 15 00:07:34.607962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:07:34.607974 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 00:07:34.607987 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 00:07:34.608008 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 00:07:34.608021 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 00:07:34.608033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:07:34.608046 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:07:34.608058 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:07:34.608074 systemd[1]: Reached target swap.target - Swaps.
May 15 00:07:34.608086 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 00:07:34.608098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 00:07:34.608111 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:07:34.608124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:07:34.608136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:07:34.608148 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 00:07:34.608161 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 00:07:34.608182 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 00:07:34.608210 systemd[1]: Mounting media.mount - External Media Directory...
May 15 00:07:34.608226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:07:34.608241 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 00:07:34.608257 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 00:07:34.608273 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 00:07:34.610873 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 00:07:34.610909 systemd[1]: Reached target machines.target - Containers.
May 15 00:07:34.610925 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 00:07:34.610942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:07:34.610965 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:07:34.610978 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 00:07:34.610990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:07:34.611003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:07:34.611016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:07:34.611028 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 00:07:34.611041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:07:34.611055 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 00:07:34.611071 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 00:07:34.611083 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 00:07:34.611095 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 00:07:34.611107 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 00:07:34.611119 kernel: loop: module loaded
May 15 00:07:34.611132 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:07:34.611144 kernel: fuse: init (API version 7.39)
May 15 00:07:34.611157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:07:34.611170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 00:07:34.611186 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 00:07:34.611198 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:07:34.611212 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 00:07:34.611225 systemd[1]: Stopped verity-setup.service.
May 15 00:07:34.611238 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:07:34.611251 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 00:07:34.611267 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 00:07:34.611330 systemd-journald[1126]: Collecting audit messages is disabled.
May 15 00:07:34.611372 systemd[1]: Mounted media.mount - External Media Directory.
May 15 00:07:34.611390 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 00:07:34.611406 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 00:07:34.611423 systemd-journald[1126]: Journal started
May 15 00:07:34.611456 systemd-journald[1126]: Runtime Journal (/run/log/journal/b47f6c6347874ff1a4533f45818c33f8) is 6.0M, max 48.3M, 42.2M free.
May 15 00:07:34.336131 systemd[1]: Queued start job for default target multi-user.target.
May 15 00:07:34.356279 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 00:07:34.357029 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 00:07:34.357543 systemd[1]: systemd-journald.service: Consumed 1.494s CPU time.
May 15 00:07:34.615029 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:07:34.616545 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 00:07:34.618215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:07:34.620497 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 00:07:34.620871 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 00:07:34.622863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:07:34.623165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:07:34.625209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:07:34.625539 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:07:34.627553 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 00:07:34.627896 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 00:07:34.629693 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:07:34.630182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:07:34.632209 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:07:34.634182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 00:07:34.634867 kernel: ACPI: bus type drm_connector registered
May 15 00:07:34.637767 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:07:34.638026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:07:34.639837 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 00:07:34.659032 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 00:07:34.675970 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 00:07:34.679525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 00:07:34.681002 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 00:07:34.681037 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:07:34.683461 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 00:07:34.686584 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 00:07:34.690043 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 00:07:34.691557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:07:34.706069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 00:07:34.710077 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 00:07:34.711866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:07:34.713960 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 00:07:34.715540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:07:34.717134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:07:34.726044 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 00:07:34.731380 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:07:34.734344 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 00:07:34.735806 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 00:07:34.737767 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 00:07:34.752876 kernel: loop0: detected capacity change from 0 to 140992
May 15 00:07:34.756228 systemd-journald[1126]: Time spent on flushing to /var/log/journal/b47f6c6347874ff1a4533f45818c33f8 is 15.566ms for 1049 entries.
May 15 00:07:34.756228 systemd-journald[1126]: System Journal (/var/log/journal/b47f6c6347874ff1a4533f45818c33f8) is 8.0M, max 195.6M, 187.6M free.
May 15 00:07:35.072900 systemd-journald[1126]: Received client request to flush runtime journal.
May 15 00:07:35.073002 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 00:07:35.073048 kernel: loop1: detected capacity change from 0 to 138184
May 15 00:07:35.073095 kernel: loop2: detected capacity change from 0 to 205544
May 15 00:07:35.073134 kernel: loop3: detected capacity change from 0 to 140992
May 15 00:07:34.758213 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 00:07:34.769094 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 00:07:34.824917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:07:34.896380 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 00:07:34.902813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 00:07:34.918450 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 00:07:35.031850 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 00:07:35.047019 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 00:07:35.075137 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 00:07:35.082847 kernel: loop4: detected capacity change from 0 to 138184
May 15 00:07:35.103935 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 00:07:35.105893 kernel: loop5: detected capacity change from 0 to 205544
May 15 00:07:35.117236 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:07:35.127947 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 00:07:35.128802 (sd-merge)[1194]: Merged extensions into '/usr'.
May 15 00:07:35.134593 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 00:07:35.134779 systemd[1]: Reloading...
May 15 00:07:35.148203 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 15 00:07:35.148566 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
May 15 00:07:35.200863 zram_generator::config[1226]: No configuration found.
May 15 00:07:35.318208 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 00:07:35.373724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:07:35.426083 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 00:07:35.426457 systemd[1]: Reloading finished in 290 ms.
May 15 00:07:35.469765 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 00:07:35.471446 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 00:07:35.473724 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 00:07:35.475587 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:07:35.498062 systemd[1]: Starting ensure-sysext.service...
May 15 00:07:35.500662 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:07:35.509047 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
May 15 00:07:35.509069 systemd[1]: Reloading...
May 15 00:07:35.611869 zram_generator::config[1300]: No configuration found.
May 15 00:07:35.676906 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 00:07:35.677341 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 00:07:35.678689 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 00:07:35.679124 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 15 00:07:35.679231 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 15 00:07:35.684280 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:07:35.684296 systemd-tmpfiles[1270]: Skipping /boot
May 15 00:07:35.699139 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:07:35.699161 systemd-tmpfiles[1270]: Skipping /boot
May 15 00:07:35.760858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:07:35.825193 systemd[1]: Reloading finished in 315 ms.
May 15 00:07:35.846870 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 00:07:35.862463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:07:35.870729 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 00:07:35.874665 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 00:07:35.877434 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 00:07:35.882762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:07:35.887683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:07:35.891017 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 00:07:35.904021 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 00:07:35.916529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:07:35.916786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:07:35.931343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:07:35.934018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:07:35.937039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:07:35.938373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:07:35.938548 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:07:35.940265 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 00:07:35.943554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:07:35.945569 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
May 15 00:07:35.951033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:07:35.954861 augenrules[1366]: No rules
May 15 00:07:35.959078 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 00:07:35.965996 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:07:35.966587 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 00:07:35.969312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:07:35.969605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:07:35.971782 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:07:35.972148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:07:35.976322 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 00:07:35.985590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:07:35.988067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:07:36.000258 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 00:07:36.002165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:07:36.007200 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:07:36.014176 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:07:36.018974 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:07:36.023221 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:07:36.026070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:07:36.033174 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:07:36.038137 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 00:07:36.039282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:07:36.042009 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 00:07:36.043958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:07:36.044208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:07:36.046491 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:07:36.046753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:07:36.053203 augenrules[1387]: /sbin/augenrules: No change
May 15 00:07:36.051318 systemd-resolved[1339]: Positive Trust Anchors:
May 15 00:07:36.051329 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:07:36.051373 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:07:36.058121 systemd[1]: Finished ensure-sysext.service.
May 15 00:07:36.062884 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:07:36.063087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:07:36.065025 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:07:36.065243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:07:36.066148 augenrules[1425]: No rules
May 15 00:07:36.066161 systemd-resolved[1339]: Defaulting to hostname 'linux'.
May 15 00:07:36.066979 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:07:36.067218 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 00:07:36.071360 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:07:36.080664 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 00:07:36.091453 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 15 00:07:36.092425 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:07:36.094424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:07:36.094499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:07:36.098019 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 00:07:36.099779 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 00:07:36.125135 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1394)
May 15 00:07:36.164652 systemd-networkd[1410]: lo: Link UP
May 15 00:07:36.164664 systemd-networkd[1410]: lo: Gained carrier
May 15 00:07:36.166998 systemd-networkd[1410]: Enumeration completed
May 15 00:07:36.167190 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:07:36.168484 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:07:36.168558 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:07:36.168562 systemd[1]: Reached target network.target - Network.
May 15 00:07:36.188600 systemd-networkd[1410]: eth0: Link UP
May 15 00:07:36.188721 systemd-networkd[1410]: eth0: Gained carrier
May 15 00:07:36.188853 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 15 00:07:36.188907 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:07:36.197543 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 00:07:36.201233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 00:07:36.203892 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:07:36.207905 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 00:07:36.213377 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 00:07:36.214921 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 00:07:36.214970 systemd-timesyncd[1441]: Initial clock synchronization to Thu 2025-05-15 00:07:36.430844 UTC.
May 15 00:07:36.215284 systemd[1]: Reached target time-set.target - System Time Set.
May 15 00:07:36.218845 kernel: ACPI: button: Power Button [PWRF]
May 15 00:07:36.222773 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 15 00:07:36.225526 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 00:07:36.225736 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 15 00:07:36.226102 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 00:07:36.231929 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 00:07:36.245898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 00:07:36.263109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:36.290299 kernel: mousedev: PS/2 mouse device common for all mice
May 15 00:07:36.290017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:07:36.290278 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:36.309148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:07:36.376429 kernel: kvm_amd: TSC scaling supported
May 15 00:07:36.376547 kernel: kvm_amd: Nested Virtualization enabled
May 15 00:07:36.376569 kernel: kvm_amd: Nested Paging enabled
May 15 00:07:36.376976 kernel: kvm_amd: LBR virtualization supported
May 15 00:07:36.378270 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 15 00:07:36.378303 kernel: kvm_amd: Virtual GIF supported
May 15 00:07:36.406325 kernel: EDAC MC: Ver: 3.0.0
May 15 00:07:36.420777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:07:36.439109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 00:07:36.455103 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 00:07:36.474973 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:07:36.516471 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 00:07:36.519244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:07:36.520744 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:07:36.522214 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 00:07:36.523903 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 00:07:36.525843 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 00:07:36.527379 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 00:07:36.528936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 00:07:36.530470 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:07:36.530516 systemd[1]: Reached target paths.target - Path Units.
May 15 00:07:36.531638 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:07:36.534030 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 00:07:36.538040 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 00:07:36.549072 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 00:07:36.552172 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 00:07:36.554018 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 00:07:36.555244 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:07:36.556327 systemd[1]: Reached target basic.target - Basic System.
May 15 00:07:36.557406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 00:07:36.557446 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 00:07:36.558792 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 00:07:36.561295 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 00:07:36.565849 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:07:36.566327 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 00:07:36.571348 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 00:07:36.572550 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 00:07:36.576097 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 00:07:36.577191 jq[1472]: false
May 15 00:07:36.594176 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 00:07:36.598204 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 00:07:36.609143 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 00:07:36.615265 extend-filesystems[1473]: Found loop3
May 15 00:07:36.624938 extend-filesystems[1473]: Found loop4
May 15 00:07:36.624938 extend-filesystems[1473]: Found loop5
May 15 00:07:36.624938 extend-filesystems[1473]: Found sr0
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda1
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda2
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda3
May 15 00:07:36.624938 extend-filesystems[1473]: Found usr
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda4
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda6
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda7
May 15 00:07:36.624938 extend-filesystems[1473]: Found vda9
May 15 00:07:36.624938 extend-filesystems[1473]: Checking size of /dev/vda9
May 15 00:07:36.634474 dbus-daemon[1471]: [system] SELinux support is enabled
May 15 00:07:36.628322 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 00:07:36.629484 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 00:07:36.630275 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 00:07:36.647466 systemd[1]: Starting update-engine.service - Update Engine...
May 15 00:07:36.650584 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 00:07:36.653169 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 00:07:36.661490 extend-filesystems[1473]: Resized partition /dev/vda9
May 15 00:07:36.657168 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 00:07:36.664626 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
May 15 00:07:36.666146 jq[1491]: true
May 15 00:07:36.674530 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 00:07:36.674808 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 00:07:36.675196 systemd[1]: motdgen.service: Deactivated successfully.
May 15 00:07:36.675419 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 00:07:36.679346 update_engine[1488]: I20250515 00:07:36.679271 1488 main.cc:92] Flatcar Update Engine starting
May 15 00:07:36.680718 update_engine[1488]: I20250515 00:07:36.680552 1488 update_check_scheduler.cc:74] Next update check in 9m30s
May 15 00:07:36.693950 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1385)
May 15 00:07:36.694336 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 00:07:36.699483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 00:07:36.699783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 00:07:36.713644 jq[1498]: true
May 15 00:07:36.713674 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 00:07:36.764204 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 00:07:36.779763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 00:07:36.779850 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 00:07:36.781281 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 00:07:36.781305 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 00:07:36.790638 systemd[1]: Started update-engine.service - Update Engine.
May 15 00:07:36.803267 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 00:07:36.822109 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 00:07:36.825888 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 00:07:36.832251 systemd[1]: issuegen.service: Deactivated successfully.
May 15 00:07:36.832589 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 00:07:36.845336 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 00:07:36.904932 tar[1497]: linux-amd64/helm
May 15 00:07:36.907610 systemd-logind[1486]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 00:07:36.907647 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 00:07:36.908569 systemd-logind[1486]: New seat seat0.
May 15 00:07:36.910424 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 00:07:36.912421 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 00:07:36.925186 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 00:07:36.928170 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 15 00:07:36.941400 systemd[1]: Reached target getty.target - Login Prompts.
May 15 00:07:36.991864 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 00:07:36.997418 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 00:07:37.025559 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 00:07:37.139837 sshd[1547]: Connection closed by authenticating user core 10.0.0.1 port 59666 [preauth]
May 15 00:07:37.035371 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:59666.service - OpenSSH per-connection server daemon (10.0.0.1:59666).
May 15 00:07:37.137704 systemd[1]: sshd@0-10.0.0.104:22-10.0.0.1:59666.service: Deactivated successfully.
May 15 00:07:37.145044 extend-filesystems[1494]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 00:07:37.145044 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 00:07:37.145044 extend-filesystems[1494]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 00:07:37.150777 extend-filesystems[1473]: Resized filesystem in /dev/vda9
May 15 00:07:37.153126 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 00:07:37.153505 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 00:07:37.157608 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:07:37.159222 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 00:07:37.164304 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 15 00:07:37.225948 containerd[1499]: time="2025-05-15T00:07:37.225776725Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 15 00:07:37.256813 containerd[1499]: time="2025-05-15T00:07:37.256675988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.259727114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.259789592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.259820614Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260117453Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260142847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260245031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260263490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260565772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260586104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260603853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:07:37.261767 containerd[1499]: time="2025-05-15T00:07:37.260618392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.262160 containerd[1499]: time="2025-05-15T00:07:37.260776252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.262160 containerd[1499]: time="2025-05-15T00:07:37.261134601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 00:07:37.262160 containerd[1499]: time="2025-05-15T00:07:37.261304284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:07:37.262160 containerd[1499]: time="2025-05-15T00:07:37.261323145Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 00:07:37.262160 containerd[1499]: time="2025-05-15T00:07:37.261457822Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 00:07:37.262160 containerd[1499]: time="2025-05-15T00:07:37.261551374Z" level=info msg="metadata content store policy set" policy=shared
May 15 00:07:37.271078 systemd-networkd[1410]: eth0: Gained IPv6LL
May 15 00:07:37.275123 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 00:07:37.277303 systemd[1]: Reached target network-online.target - Network is Online.
May 15 00:07:37.278051 containerd[1499]: time="2025-05-15T00:07:37.277902413Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 00:07:37.278051 containerd[1499]: time="2025-05-15T00:07:37.277980664Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 00:07:37.278051 containerd[1499]: time="2025-05-15T00:07:37.278002477Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 15 00:07:37.278051 containerd[1499]: time="2025-05-15T00:07:37.278033324Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 15 00:07:37.278051 containerd[1499]: time="2025-05-15T00:07:37.278052381Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 00:07:37.278339 containerd[1499]: time="2025-05-15T00:07:37.278303154Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.278678697Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.278836474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279023083Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279047438Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279065475Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279083111Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279100254Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279120143Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279146165Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279165046Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279182558Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..."
type=io.containerd.service.v1 May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279199474Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279228233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280007 containerd[1499]: time="2025-05-15T00:07:37.279247556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279264359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279280874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279298932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279319161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279335819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279353928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279373540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279393645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279409810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279430687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279449322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279469026Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279494676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279512745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280331 containerd[1499]: time="2025-05-15T00:07:37.279527048Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.279607438Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.279637298Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.280380873Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.280458361Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.280475740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.280503028Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.280522845Z" level=info msg="NRI interface is disabled by configuration." May 15 00:07:37.280642 containerd[1499]: time="2025-05-15T00:07:37.280539421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:07:37.281879 containerd[1499]: time="2025-05-15T00:07:37.280906496Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:07:37.281879 containerd[1499]: time="2025-05-15T00:07:37.280966400Z" level=info msg="Connect containerd service" May 15 00:07:37.281879 containerd[1499]: time="2025-05-15T00:07:37.281019216Z" level=info msg="using legacy CRI server" May 15 00:07:37.281879 containerd[1499]: time="2025-05-15T00:07:37.281033744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:07:37.281879 containerd[1499]: 
time="2025-05-15T00:07:37.281227998Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:07:37.282793 containerd[1499]: time="2025-05-15T00:07:37.282280090Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:07:37.282793 containerd[1499]: time="2025-05-15T00:07:37.282588432Z" level=info msg="Start subscribing containerd event" May 15 00:07:37.282793 containerd[1499]: time="2025-05-15T00:07:37.282662660Z" level=info msg="Start recovering state" May 15 00:07:37.282793 containerd[1499]: time="2025-05-15T00:07:37.282792563Z" level=info msg="Start event monitor" May 15 00:07:37.282994 containerd[1499]: time="2025-05-15T00:07:37.282820520Z" level=info msg="Start snapshots syncer" May 15 00:07:37.282994 containerd[1499]: time="2025-05-15T00:07:37.282854598Z" level=info msg="Start cni network conf syncer for default" May 15 00:07:37.282994 containerd[1499]: time="2025-05-15T00:07:37.282867213Z" level=info msg="Start streaming server" May 15 00:07:37.283712 containerd[1499]: time="2025-05-15T00:07:37.283675517Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:07:37.283775 containerd[1499]: time="2025-05-15T00:07:37.283750506Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:07:37.283874 containerd[1499]: time="2025-05-15T00:07:37.283844552Z" level=info msg="containerd successfully booted in 0.059895s" May 15 00:07:37.286193 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 00:07:37.291256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:07:37.310770 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 15 00:07:37.312373 systemd[1]: Started containerd.service - containerd container runtime.
May 15 00:07:37.341718 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 00:07:37.343797 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 15 00:07:37.344104 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 15 00:07:37.347433 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 00:07:37.612429 tar[1497]: linux-amd64/LICENSE
May 15 00:07:37.612429 tar[1497]: linux-amd64/README.md
May 15 00:07:37.633485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 00:07:38.202047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:07:38.204468 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 00:07:38.205983 systemd[1]: Startup finished in 1.416s (kernel) + 9.821s (initrd) + 4.686s (userspace) = 15.925s.
May 15 00:07:38.209562 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:07:38.722234 kubelet[1588]: E0515 00:07:38.722125 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:07:38.727999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:07:38.728293 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:07:38.728793 systemd[1]: kubelet.service: Consumed 1.092s CPU time.
May 15 00:07:47.307185 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:49240.service - OpenSSH per-connection server daemon (10.0.0.1:49240).
May 15 00:07:47.357714 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:47.360619 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:47.372675 systemd-logind[1486]: New session 1 of user core.
May 15 00:07:47.374353 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 00:07:47.386372 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 00:07:47.402574 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 00:07:47.412653 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 00:07:48.173457 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:07:48.420760 systemd[1606]: Queued start job for default target default.target.
May 15 00:07:48.432810 systemd[1606]: Created slice app.slice - User Application Slice.
May 15 00:07:48.432863 systemd[1606]: Reached target paths.target - Paths.
May 15 00:07:48.432878 systemd[1606]: Reached target timers.target - Timers.
May 15 00:07:48.435193 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 00:07:48.448714 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 00:07:48.448955 systemd[1606]: Reached target sockets.target - Sockets.
May 15 00:07:48.448982 systemd[1606]: Reached target basic.target - Basic System.
May 15 00:07:48.449045 systemd[1606]: Reached target default.target - Main User Target.
May 15 00:07:48.449102 systemd[1606]: Startup finished in 267ms.
May 15 00:07:48.449284 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 00:07:48.451057 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 00:07:48.514803 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:49248.service - OpenSSH per-connection server daemon (10.0.0.1:49248).
May 15 00:07:48.562091 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 49248 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:48.564061 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:48.569265 systemd-logind[1486]: New session 2 of user core.
May 15 00:07:48.580189 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 00:07:48.639905 sshd[1619]: Connection closed by 10.0.0.1 port 49248
May 15 00:07:48.640542 sshd-session[1617]: pam_unix(sshd:session): session closed for user core
May 15 00:07:48.656221 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:49248.service: Deactivated successfully.
May 15 00:07:48.659045 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:07:48.660736 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit.
May 15 00:07:48.672349 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:49264.service - OpenSSH per-connection server daemon (10.0.0.1:49264).
May 15 00:07:48.674000 systemd-logind[1486]: Removed session 2.
May 15 00:07:48.720073 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 49264 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:48.722182 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:48.729049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:07:48.729090 systemd-logind[1486]: New session 3 of user core.
May 15 00:07:48.740228 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 00:07:48.742353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:07:48.795997 sshd[1627]: Connection closed by 10.0.0.1 port 49264
May 15 00:07:48.796658 sshd-session[1624]: pam_unix(sshd:session): session closed for user core
May 15 00:07:48.809144 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:49264.service: Deactivated successfully.
May 15 00:07:48.812038 systemd[1]: session-3.scope: Deactivated successfully.
May 15 00:07:48.814931 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
May 15 00:07:48.826362 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:49274.service - OpenSSH per-connection server daemon (10.0.0.1:49274).
May 15 00:07:48.827056 systemd-logind[1486]: Removed session 3.
May 15 00:07:48.874052 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 49274 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:48.875915 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:48.881505 systemd-logind[1486]: New session 4 of user core.
May 15 00:07:48.892337 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 00:07:48.946656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:07:48.951065 sshd[1636]: Connection closed by 10.0.0.1 port 49274
May 15 00:07:48.951759 sshd-session[1634]: pam_unix(sshd:session): session closed for user core
May 15 00:07:48.953367 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:07:48.955848 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:49274.service: Deactivated successfully.
May 15 00:07:48.957752 systemd[1]: session-4.scope: Deactivated successfully.
May 15 00:07:48.960387 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
May 15 00:07:48.973125 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288).
May 15 00:07:48.975356 systemd-logind[1486]: Removed session 4.
May 15 00:07:49.003370 kubelet[1643]: E0515 00:07:49.003287 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:07:49.010187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:07:49.010415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:07:49.019175 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:49.021180 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:49.026075 systemd-logind[1486]: New session 5 of user core.
May 15 00:07:49.035971 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 00:07:49.095512 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 00:07:49.095884 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:07:49.115757 sudo[1658]: pam_unix(sudo:session): session closed for user root
May 15 00:07:49.117694 sshd[1657]: Connection closed by 10.0.0.1 port 49288
May 15 00:07:49.118166 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
May 15 00:07:49.134115 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:49288.service: Deactivated successfully.
May 15 00:07:49.136238 systemd[1]: session-5.scope: Deactivated successfully.
May 15 00:07:49.137795 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
May 15 00:07:49.139363 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:49294.service - OpenSSH per-connection server daemon (10.0.0.1:49294).
May 15 00:07:49.140695 systemd-logind[1486]: Removed session 5.
May 15 00:07:49.184534 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 49294 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:49.186180 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:49.190455 systemd-logind[1486]: New session 6 of user core.
May 15 00:07:49.200134 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 00:07:49.257857 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 00:07:49.258335 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:07:49.264848 sudo[1667]: pam_unix(sudo:session): session closed for user root
May 15 00:07:49.272480 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 00:07:49.272869 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:07:49.296307 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 00:07:49.330878 augenrules[1689]: No rules
May 15 00:07:49.333511 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:07:49.333888 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 00:07:49.335329 sudo[1666]: pam_unix(sudo:session): session closed for user root
May 15 00:07:49.337066 sshd[1665]: Connection closed by 10.0.0.1 port 49294
May 15 00:07:49.337434 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
May 15 00:07:49.345774 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:49294.service: Deactivated successfully.
May 15 00:07:49.347793 systemd[1]: session-6.scope: Deactivated successfully.
May 15 00:07:49.349629 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
May 15 00:07:49.361298 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:49302.service - OpenSSH per-connection server daemon (10.0.0.1:49302).
May 15 00:07:49.362591 systemd-logind[1486]: Removed session 6.
May 15 00:07:49.404568 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 49302 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:07:49.406519 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:07:49.411528 systemd-logind[1486]: New session 7 of user core.
May 15 00:07:49.421058 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 00:07:49.476267 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:07:49.476652 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:07:49.775269 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 00:07:49.775924 (dockerd)[1721]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 00:07:50.063543 dockerd[1721]: time="2025-05-15T00:07:50.063361621Z" level=info msg="Starting up"
May 15 00:07:50.711113 dockerd[1721]: time="2025-05-15T00:07:50.711025002Z" level=info msg="Loading containers: start."
May 15 00:07:51.115869 kernel: Initializing XFRM netlink socket
May 15 00:07:51.218961 systemd-networkd[1410]: docker0: Link UP
May 15 00:07:51.264119 dockerd[1721]: time="2025-05-15T00:07:51.264041361Z" level=info msg="Loading containers: done."
May 15 00:07:51.282514 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck450613809-merged.mount: Deactivated successfully.
May 15 00:07:51.285714 dockerd[1721]: time="2025-05-15T00:07:51.285626205Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:07:51.285871 dockerd[1721]: time="2025-05-15T00:07:51.285780403Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 15 00:07:51.286033 dockerd[1721]: time="2025-05-15T00:07:51.285996695Z" level=info msg="Daemon has completed initialization"
May 15 00:07:51.337444 dockerd[1721]: time="2025-05-15T00:07:51.337328471Z" level=info msg="API listen on /run/docker.sock"
May 15 00:07:51.337643 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 00:07:52.031589 containerd[1499]: time="2025-05-15T00:07:52.031532410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 15 00:07:52.684937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714815587.mount: Deactivated successfully.
May 15 00:07:56.246854 containerd[1499]: time="2025-05-15T00:07:56.246768685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:56.638533 containerd[1499]: time="2025-05-15T00:07:56.638339960Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987"
May 15 00:07:56.675474 containerd[1499]: time="2025-05-15T00:07:56.675373569Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:56.689563 containerd[1499]: time="2025-05-15T00:07:56.689449120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:07:56.691059 containerd[1499]: time="2025-05-15T00:07:56.690936767Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 4.659356029s"
May 15 00:07:56.691059 containerd[1499]: time="2025-05-15T00:07:56.690992761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\""
May 15 00:07:56.692692 containerd[1499]: time="2025-05-15T00:07:56.692634415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 15 00:07:59.192161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 00:07:59.201063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:07:59.361085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:07:59.366458 (kubelet)[1979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:08:00.013224 kubelet[1979]: E0515 00:08:00.013157 1979 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:08:00.017929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:08:00.018200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:08:01.782756 containerd[1499]: time="2025-05-15T00:08:01.782642070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:01.784908 containerd[1499]: time="2025-05-15T00:08:01.784646300Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776"
May 15 00:08:01.791519 containerd[1499]: time="2025-05-15T00:08:01.791450321Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:01.802575 containerd[1499]: time="2025-05-15T00:08:01.802490799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:01.803873 containerd[1499]: time="2025-05-15T00:08:01.803790649Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 5.11111543s"
May 15 00:08:01.803873 containerd[1499]: time="2025-05-15T00:08:01.803860626Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\""
May 15 00:08:01.804462 containerd[1499]: time="2025-05-15T00:08:01.804426285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 15 00:08:04.578279 containerd[1499]: time="2025-05-15T00:08:04.571689906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:04.603480 containerd[1499]: time="2025-05-15T00:08:04.603386328Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386"
May 15 00:08:04.668861 containerd[1499]: time="2025-05-15T00:08:04.667558907Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:04.715084 containerd[1499]: time="2025-05-15T00:08:04.714789089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:04.715930 containerd[1499]: time="2025-05-15T00:08:04.715886110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.911418615s"
May 15 00:08:04.716489 containerd[1499]: time="2025-05-15T00:08:04.716130227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\""
May 15 00:08:04.717445 containerd[1499]: time="2025-05-15T00:08:04.716950457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 15 00:08:07.051975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785313941.mount: Deactivated successfully.
May 15 00:08:10.192320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 15 00:08:10.207120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:08:10.385181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:08:10.391090 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:08:10.563092 kubelet[2007]: E0515 00:08:10.562901 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:08:10.568102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:08:10.568363 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:08:14.620041 containerd[1499]: time="2025-05-15T00:08:14.619930999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:14.699120 containerd[1499]: time="2025-05-15T00:08:14.698977138Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625"
May 15 00:08:14.790725 containerd[1499]: time="2025-05-15T00:08:14.790638496Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:14.884026 containerd[1499]: time="2025-05-15T00:08:14.883759189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:08:14.884911 containerd[1499]: time="2025-05-15T00:08:14.884722154Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 10.16773254s"
May 15 00:08:14.884983 containerd[1499]: time="2025-05-15T00:08:14.884912636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\""
May 15 00:08:14.886295 containerd[1499]: time="2025-05-15T00:08:14.886228535Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 00:08:20.097938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824078384.mount: Deactivated successfully.
May 15 00:08:20.692290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 00:08:20.708216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:20.898517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:20.906088 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:08:21.005202 kubelet[2031]: E0515 00:08:21.004940 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:08:21.010650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:08:21.010981 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:08:22.282503 update_engine[1488]: I20250515 00:08:22.282326 1488 update_attempter.cc:509] Updating boot flags... 
May 15 00:08:22.347870 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2049) May 15 00:08:22.445097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2048) May 15 00:08:22.463073 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (2048) May 15 00:08:26.953870 containerd[1499]: time="2025-05-15T00:08:26.953775889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:26.956857 containerd[1499]: time="2025-05-15T00:08:26.956714525Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 15 00:08:26.958429 containerd[1499]: time="2025-05-15T00:08:26.958350686Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:26.961600 containerd[1499]: time="2025-05-15T00:08:26.961502382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:26.962835 containerd[1499]: time="2025-05-15T00:08:26.962745714Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 12.076458631s" May 15 00:08:26.962835 containerd[1499]: time="2025-05-15T00:08:26.962799675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 00:08:26.963381 containerd[1499]: time="2025-05-15T00:08:26.963358858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:08:27.629361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342750881.mount: Deactivated successfully. May 15 00:08:27.645607 containerd[1499]: time="2025-05-15T00:08:27.645499769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:27.647509 containerd[1499]: time="2025-05-15T00:08:27.647337010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 15 00:08:27.650763 containerd[1499]: time="2025-05-15T00:08:27.650259207Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:27.653849 containerd[1499]: time="2025-05-15T00:08:27.653761085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:27.654972 containerd[1499]: time="2025-05-15T00:08:27.654859539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 691.39757ms" May 15 00:08:27.654972 containerd[1499]: time="2025-05-15T00:08:27.654919182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 00:08:27.655606 containerd[1499]: time="2025-05-15T00:08:27.655483501Z" 
level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 00:08:29.638223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466963039.mount: Deactivated successfully. May 15 00:08:31.193595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 15 00:08:31.205437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:31.404641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:31.410790 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:08:31.780569 kubelet[2159]: E0515 00:08:31.780493 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:08:31.786483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:08:31.786756 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 00:08:32.292424 containerd[1499]: time="2025-05-15T00:08:32.292343179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:32.293980 containerd[1499]: time="2025-05-15T00:08:32.293927094Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 15 00:08:32.296564 containerd[1499]: time="2025-05-15T00:08:32.296477039Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:32.300812 containerd[1499]: time="2025-05-15T00:08:32.300748208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:08:32.302177 containerd[1499]: time="2025-05-15T00:08:32.302128794Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.646610121s" May 15 00:08:32.302177 containerd[1499]: time="2025-05-15T00:08:32.302173053Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 00:08:34.987655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:35.001882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:35.036704 systemd[1]: Reloading requested from client PID 2197 ('systemctl') (unit session-7.scope)... May 15 00:08:35.036724 systemd[1]: Reloading... 
May 15 00:08:35.166887 zram_generator::config[2239]: No configuration found. May 15 00:08:35.539389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:08:35.638938 systemd[1]: Reloading finished in 601 ms. May 15 00:08:35.715432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:35.722202 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:08:35.722632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:35.730467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:35.924654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:35.931348 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:08:35.978726 kubelet[2286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:08:35.978726 kubelet[2286]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:08:35.978726 kubelet[2286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:08:35.979307 kubelet[2286]: I0515 00:08:35.978767 2286 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:08:36.215511 kubelet[2286]: I0515 00:08:36.215446 2286 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:08:36.215511 kubelet[2286]: I0515 00:08:36.215492 2286 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:08:36.215806 kubelet[2286]: I0515 00:08:36.215785 2286 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:08:36.264730 kubelet[2286]: I0515 00:08:36.264647 2286 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:08:36.284112 kubelet[2286]: E0515 00:08:36.284047 2286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 15 00:08:36.386555 kubelet[2286]: E0515 00:08:36.386480 2286 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:08:36.386555 kubelet[2286]: I0515 00:08:36.386531 2286 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:08:36.396493 kubelet[2286]: I0515 00:08:36.396432 2286 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:08:36.396643 kubelet[2286]: I0515 00:08:36.396555 2286 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:08:36.396779 kubelet[2286]: I0515 00:08:36.396701 2286 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:08:36.396989 kubelet[2286]: I0515 00:08:36.396763 2286 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 15 00:08:36.396989 kubelet[2286]: I0515 00:08:36.396964 2286 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:08:36.396989 kubelet[2286]: I0515 00:08:36.396977 2286 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:08:36.397262 kubelet[2286]: I0515 00:08:36.397108 2286 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:36.405574 kubelet[2286]: I0515 00:08:36.405512 2286 kubelet.go:408] "Attempting to sync node with API server" May 15 00:08:36.405645 kubelet[2286]: I0515 00:08:36.405584 2286 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:08:36.405645 kubelet[2286]: I0515 00:08:36.405637 2286 kubelet.go:314] "Adding apiserver pod source" May 15 00:08:36.405734 kubelet[2286]: I0515 00:08:36.405657 2286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:08:36.416171 kubelet[2286]: W0515 00:08:36.416080 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 15 00:08:36.416171 kubelet[2286]: E0515 00:08:36.416165 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 15 00:08:36.417325 kubelet[2286]: W0515 00:08:36.417247 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 15 00:08:36.417381 
kubelet[2286]: E0515 00:08:36.417332 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 15 00:08:36.433010 kubelet[2286]: I0515 00:08:36.432958 2286 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:08:36.439641 kubelet[2286]: I0515 00:08:36.439586 2286 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:08:36.439814 kubelet[2286]: W0515 00:08:36.439684 2286 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:08:36.440633 kubelet[2286]: I0515 00:08:36.440514 2286 server.go:1269] "Started kubelet" May 15 00:08:36.441235 kubelet[2286]: I0515 00:08:36.441093 2286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:08:36.441336 kubelet[2286]: I0515 00:08:36.441272 2286 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:08:36.441635 kubelet[2286]: I0515 00:08:36.441593 2286 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:08:36.442281 kubelet[2286]: I0515 00:08:36.442115 2286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:08:36.442620 kubelet[2286]: I0515 00:08:36.442583 2286 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:08:36.442620 kubelet[2286]: I0515 00:08:36.442602 2286 server.go:460] "Adding debug handlers to kubelet server" May 15 00:08:36.444865 kubelet[2286]: I0515 
00:08:36.444840 2286 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:08:36.444957 kubelet[2286]: I0515 00:08:36.444938 2286 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:08:36.444994 kubelet[2286]: I0515 00:08:36.444985 2286 reconciler.go:26] "Reconciler: start to sync state" May 15 00:08:36.445418 kubelet[2286]: W0515 00:08:36.445354 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 15 00:08:36.445418 kubelet[2286]: E0515 00:08:36.445409 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 15 00:08:36.445699 kubelet[2286]: I0515 00:08:36.445671 2286 factory.go:221] Registration of the systemd container factory successfully May 15 00:08:36.445791 kubelet[2286]: I0515 00:08:36.445767 2286 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:08:36.446686 kubelet[2286]: E0515 00:08:36.446650 2286 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:08:36.447150 kubelet[2286]: E0515 00:08:36.447128 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:36.447230 kubelet[2286]: E0515 00:08:36.447205 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms" May 15 00:08:36.447383 kubelet[2286]: I0515 00:08:36.447354 2286 factory.go:221] Registration of the containerd container factory successfully May 15 00:08:36.463658 kubelet[2286]: I0515 00:08:36.463568 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:08:36.465371 kubelet[2286]: I0515 00:08:36.465327 2286 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:08:36.465371 kubelet[2286]: I0515 00:08:36.465358 2286 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:08:36.465496 kubelet[2286]: I0515 00:08:36.465385 2286 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:08:36.465496 kubelet[2286]: E0515 00:08:36.465445 2286 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:08:36.471578 kubelet[2286]: W0515 00:08:36.471411 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused May 15 00:08:36.471578 kubelet[2286]: E0515 00:08:36.471482 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" May 15 00:08:36.479199 kubelet[2286]: I0515 00:08:36.479041 2286 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:08:36.479199 kubelet[2286]: I0515 00:08:36.479071 2286 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:08:36.479199 kubelet[2286]: I0515 00:08:36.479092 2286 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:36.533505 kubelet[2286]: E0515 00:08:36.531530 2286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8abb0df15c31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:08:36.440480817 +0000 UTC m=+0.504111113,LastTimestamp:2025-05-15 00:08:36.440480817 +0000 UTC m=+0.504111113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:08:36.547903 kubelet[2286]: E0515 00:08:36.547861 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:36.565704 kubelet[2286]: E0515 00:08:36.565661 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:08:36.648099 kubelet[2286]: E0515 00:08:36.647981 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:36.648289 kubelet[2286]: 
E0515 00:08:36.648238 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms" May 15 00:08:36.748369 kubelet[2286]: E0515 00:08:36.748177 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:36.765812 kubelet[2286]: E0515 00:08:36.765724 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:08:36.848382 kubelet[2286]: E0515 00:08:36.848289 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:36.949480 kubelet[2286]: E0515 00:08:36.949400 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:37.049579 kubelet[2286]: E0515 00:08:37.049402 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" May 15 00:08:37.049579 kubelet[2286]: E0515 00:08:37.049463 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:37.150083 kubelet[2286]: E0515 00:08:37.149985 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:37.166402 kubelet[2286]: E0515 00:08:37.166309 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:08:37.251046 kubelet[2286]: E0515 00:08:37.250961 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" 
May 15 00:08:37.351856 kubelet[2286]: E0515 00:08:37.351641 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.360393 kubelet[2286]: W0515 00:08:37.360329 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:37.360393 kubelet[2286]: E0515 00:08:37.360377 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:37.435927 kubelet[2286]: W0515 00:08:37.435742 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:37.435927 kubelet[2286]: E0515 00:08:37.435810 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:37.453041 kubelet[2286]: E0515 00:08:37.452950 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.553640 kubelet[2286]: E0515 00:08:37.553570 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.555378 kubelet[2286]: W0515 00:08:37.555319 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:37.555450 kubelet[2286]: E0515 00:08:37.555387 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:37.640627 kubelet[2286]: W0515 00:08:37.640427 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:37.640627 kubelet[2286]: E0515 00:08:37.640499 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:37.654918 kubelet[2286]: E0515 00:08:37.654169 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.754971 kubelet[2286]: E0515 00:08:37.754896 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.851282 kubelet[2286]: E0515 00:08:37.851178 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s"
May 15 00:08:37.856131 kubelet[2286]: E0515 00:08:37.855811 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.957036 kubelet[2286]: E0515 00:08:37.956976 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:37.967195 kubelet[2286]: E0515 00:08:37.967130 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 00:08:38.057848 kubelet[2286]: E0515 00:08:38.057767 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.158427 kubelet[2286]: E0515 00:08:38.158348 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.258766 kubelet[2286]: E0515 00:08:38.258555 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.359764 kubelet[2286]: E0515 00:08:38.359694 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.460594 kubelet[2286]: E0515 00:08:38.460497 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.561703 kubelet[2286]: E0515 00:08:38.561499 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.662883 kubelet[2286]: E0515 00:08:38.662753 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.763282 kubelet[2286]: E0515 00:08:38.763206 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.771230 kubelet[2286]: E0515 00:08:38.771170 2286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:38.864201 kubelet[2286]: E0515 00:08:38.864008 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:38.964942 kubelet[2286]: E0515 00:08:38.964883 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.065641 kubelet[2286]: E0515 00:08:39.065544 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.166263 kubelet[2286]: E0515 00:08:39.166073 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.267005 kubelet[2286]: E0515 00:08:39.266893 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.367598 kubelet[2286]: E0515 00:08:39.367520 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.389300 kubelet[2286]: W0515 00:08:39.389257 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:39.389373 kubelet[2286]: E0515 00:08:39.389302 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:39.452685 kubelet[2286]: E0515 00:08:39.452590 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="3.2s"
May 15 00:08:39.467951 kubelet[2286]: E0515 00:08:39.467879 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.567610 kubelet[2286]: E0515 00:08:39.567541 2286 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 00:08:39.568721 kubelet[2286]: E0515 00:08:39.568663 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.669411 kubelet[2286]: E0515 00:08:39.669337 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.770341 kubelet[2286]: E0515 00:08:39.770109 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.776649 kubelet[2286]: W0515 00:08:39.776590 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:39.776649 kubelet[2286]: E0515 00:08:39.776634 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:39.870532 kubelet[2286]: E0515 00:08:39.870471 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:39.962206 kubelet[2286]: W0515 00:08:39.962143 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:39.962206 kubelet[2286]: E0515 00:08:39.962199 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:39.971522 kubelet[2286]: E0515 00:08:39.971491 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.072138 kubelet[2286]: E0515 00:08:40.071967 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.172891 kubelet[2286]: E0515 00:08:40.172737 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.273622 kubelet[2286]: E0515 00:08:40.273548 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.374364 kubelet[2286]: E0515 00:08:40.374171 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.392847 kubelet[2286]: W0515 00:08:40.392782 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:40.392847 kubelet[2286]: E0515 00:08:40.392852 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:40.474868 kubelet[2286]: E0515 00:08:40.474738 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.575374 kubelet[2286]: E0515 00:08:40.575248 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.675992 kubelet[2286]: E0515 00:08:40.675738 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.776554 kubelet[2286]: E0515 00:08:40.776473 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.799287 kubelet[2286]: E0515 00:08:40.799174 2286 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8abb0df15c31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:08:36.440480817 +0000 UTC m=+0.504111113,LastTimestamp:2025-05-15 00:08:36.440480817 +0000 UTC m=+0.504111113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:08:40.877403 kubelet[2286]: E0515 00:08:40.877363 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:40.978126 kubelet[2286]: E0515 00:08:40.978091 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.078983 kubelet[2286]: E0515 00:08:41.078899 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.179545 kubelet[2286]: E0515 00:08:41.179427 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.280450 kubelet[2286]: E0515 00:08:41.280179 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.380982 kubelet[2286]: E0515 00:08:41.380858 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.481005 kubelet[2286]: E0515 00:08:41.480949 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.581668 kubelet[2286]: E0515 00:08:41.581486 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.681941 kubelet[2286]: E0515 00:08:41.681878 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.782682 kubelet[2286]: E0515 00:08:41.782594 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.883369 kubelet[2286]: E0515 00:08:41.883171 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:41.887842 kubelet[2286]: I0515 00:08:41.887782 2286 policy_none.go:49] "None policy: Start"
May 15 00:08:41.958004 kubelet[2286]: I0515 00:08:41.888554 2286 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 00:08:41.958004 kubelet[2286]: I0515 00:08:41.888580 2286 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:08:41.983402 kubelet[2286]: E0515 00:08:41.983309 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:42.083998 kubelet[2286]: E0515 00:08:42.083936 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:42.184909 kubelet[2286]: E0515 00:08:42.184647 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:42.285568 kubelet[2286]: E0515 00:08:42.285481 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:08:42.328533 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 00:08:42.342109 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 00:08:42.345134 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 00:08:42.356696 kubelet[2286]: I0515 00:08:42.356652 2286 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:08:42.356926 kubelet[2286]: I0515 00:08:42.356898 2286 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:08:42.356964 kubelet[2286]: I0515 00:08:42.356913 2286 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:08:42.357181 kubelet[2286]: I0515 00:08:42.357101 2286 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:08:42.358131 kubelet[2286]: E0515 00:08:42.358086 2286 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 00:08:42.458302 kubelet[2286]: I0515 00:08:42.458257 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:08:42.458721 kubelet[2286]: E0515 00:08:42.458681 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 15 00:08:42.653611 kubelet[2286]: E0515 00:08:42.653542 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="6.4s"
May 15 00:08:42.660836 kubelet[2286]: I0515 00:08:42.660774 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:08:42.661278 kubelet[2286]: E0515 00:08:42.661208 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 15 00:08:42.777652 systemd[1]: Created slice kubepods-burstable-podbc6e98582f35631794f802289a44cbb6.slice - libcontainer container kubepods-burstable-podbc6e98582f35631794f802289a44cbb6.slice.
May 15 00:08:42.792160 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 15 00:08:42.808213 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 15 00:08:42.889578 kubelet[2286]: I0515 00:08:42.889502 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc6e98582f35631794f802289a44cbb6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc6e98582f35631794f802289a44cbb6\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:08:42.889578 kubelet[2286]: I0515 00:08:42.889568 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:08:42.889578 kubelet[2286]: I0515 00:08:42.889589 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:08:42.889884 kubelet[2286]: I0515 00:08:42.889610 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:08:42.889884 kubelet[2286]: I0515 00:08:42.889627 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:08:42.889884 kubelet[2286]: I0515 00:08:42.889640 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc6e98582f35631794f802289a44cbb6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6e98582f35631794f802289a44cbb6\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:08:42.889884 kubelet[2286]: I0515 00:08:42.889655 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc6e98582f35631794f802289a44cbb6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6e98582f35631794f802289a44cbb6\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:08:42.889884 kubelet[2286]: I0515 00:08:42.889698 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:08:42.890006 kubelet[2286]: I0515 00:08:42.889750 2286 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:08:42.962066 kubelet[2286]: E0515 00:08:42.962010 2286 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:43.063648 kubelet[2286]: I0515 00:08:43.063487 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:08:43.064021 kubelet[2286]: E0515 00:08:43.063978 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 15 00:08:43.089515 kubelet[2286]: E0515 00:08:43.089442 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:08:43.090281 containerd[1499]: time="2025-05-15T00:08:43.090225106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc6e98582f35631794f802289a44cbb6,Namespace:kube-system,Attempt:0,}"
May 15 00:08:43.105880 kubelet[2286]: E0515 00:08:43.105729 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:08:43.106454 containerd[1499]: time="2025-05-15T00:08:43.106378412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 15 00:08:43.106925 kubelet[2286]: W0515 00:08:43.106847 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:43.107012 kubelet[2286]: E0515 00:08:43.106939 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:43.111467 kubelet[2286]: E0515 00:08:43.111418 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:08:43.111957 containerd[1499]: time="2025-05-15T00:08:43.111912973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 15 00:08:43.866399 kubelet[2286]: I0515 00:08:43.866338 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:08:43.866932 kubelet[2286]: E0515 00:08:43.866876 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 15 00:08:44.127203 kubelet[2286]: W0515 00:08:44.126990 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:44.127203 kubelet[2286]: E0515 00:08:44.127071 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:44.897060 kubelet[2286]: W0515 00:08:44.896961 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:44.897060 kubelet[2286]: E0515 00:08:44.897050 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:45.468648 kubelet[2286]: I0515 00:08:45.468596 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:08:45.469185 kubelet[2286]: E0515 00:08:45.468999 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 15 00:08:46.126493 kubelet[2286]: W0515 00:08:46.126375 2286 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.104:6443: connect: connection refused
May 15 00:08:46.126493 kubelet[2286]: E0515 00:08:46.126481 2286 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError"
May 15 00:08:46.197659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932713400.mount: Deactivated successfully.
May 15 00:08:46.324136 containerd[1499]: time="2025-05-15T00:08:46.324057476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:08:46.378742 containerd[1499]: time="2025-05-15T00:08:46.378486873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
May 15 00:08:46.451869 containerd[1499]: time="2025-05-15T00:08:46.451678129Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:08:46.464756 containerd[1499]: time="2025-05-15T00:08:46.464213634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:08:46.488158 containerd[1499]: time="2025-05-15T00:08:46.488044107Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:08:46.531191 containerd[1499]: time="2025-05-15T00:08:46.531071916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:08:46.550027 containerd[1499]: time="2025-05-15T00:08:46.549880047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:08:46.552060 containerd[1499]: time="2025-05-15T00:08:46.551979028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.461111289s"
May 15 00:08:46.569358 containerd[1499]: time="2025-05-15T00:08:46.569140927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:08:46.581920 containerd[1499]: time="2025-05-15T00:08:46.581841917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.475325432s"
May 15 00:08:46.679577 containerd[1499]: time="2025-05-15T00:08:46.679128065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.567125426s"
May 15 00:08:46.944017 containerd[1499]: time="2025-05-15T00:08:46.943705094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:08:46.944017 containerd[1499]: time="2025-05-15T00:08:46.943897802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:08:46.944017 containerd[1499]: time="2025-05-15T00:08:46.943923723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:08:46.944299 containerd[1499]: time="2025-05-15T00:08:46.944032036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:08:46.945242 containerd[1499]: time="2025-05-15T00:08:46.944803010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:08:46.945242 containerd[1499]: time="2025-05-15T00:08:46.944915821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:08:46.945242 containerd[1499]: time="2025-05-15T00:08:46.944938917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:08:46.945242 containerd[1499]: time="2025-05-15T00:08:46.945040396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:08:46.978490 systemd[1]: Started cri-containerd-181da5bf2bb7c65dbee189367e8599d0d3545be6b56b77109c4c5daaf75fe512.scope - libcontainer container 181da5bf2bb7c65dbee189367e8599d0d3545be6b56b77109c4c5daaf75fe512.
May 15 00:08:46.979046 containerd[1499]: time="2025-05-15T00:08:46.978462353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:08:46.979046 containerd[1499]: time="2025-05-15T00:08:46.978621615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:08:46.979046 containerd[1499]: time="2025-05-15T00:08:46.978643829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:08:46.979046 containerd[1499]: time="2025-05-15T00:08:46.978755699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:08:46.982781 systemd[1]: Started cri-containerd-7cd9268f31e5dbaf1d860ae7d05643fed1526be7d5e050840958d38922e3df76.scope - libcontainer container 7cd9268f31e5dbaf1d860ae7d05643fed1526be7d5e050840958d38922e3df76.
May 15 00:08:47.009358 systemd[1]: Started cri-containerd-4fc3ed7188d6024a89c410b8dfe68884a1d3d086efdca86248274fbd4558fd52.scope - libcontainer container 4fc3ed7188d6024a89c410b8dfe68884a1d3d086efdca86248274fbd4558fd52.
May 15 00:08:47.042236 containerd[1499]: time="2025-05-15T00:08:47.042065207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bc6e98582f35631794f802289a44cbb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"181da5bf2bb7c65dbee189367e8599d0d3545be6b56b77109c4c5daaf75fe512\""
May 15 00:08:47.043633 kubelet[2286]: E0515 00:08:47.043513 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:08:47.047295 containerd[1499]: time="2025-05-15T00:08:47.047243584Z" level=info msg="CreateContainer within sandbox \"181da5bf2bb7c65dbee189367e8599d0d3545be6b56b77109c4c5daaf75fe512\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 00:08:47.065722 containerd[1499]: time="2025-05-15T00:08:47.065596098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cd9268f31e5dbaf1d860ae7d05643fed1526be7d5e050840958d38922e3df76\""
May 15 00:08:47.066774 kubelet[2286]: E0515 00:08:47.066748 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:08:47.069254 containerd[1499]: time="2025-05-15T00:08:47.069209094Z" level=info msg="CreateContainer within sandbox \"7cd9268f31e5dbaf1d860ae7d05643fed1526be7d5e050840958d38922e3df76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 00:08:47.075047 containerd[1499]: time="2025-05-15T00:08:47.074995553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fc3ed7188d6024a89c410b8dfe68884a1d3d086efdca86248274fbd4558fd52\""
May 15 00:08:47.076401 kubelet[2286]: E0515 00:08:47.076207 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:08:47.078991 containerd[1499]: time="2025-05-15T00:08:47.078772641Z" level=info msg="CreateContainer within sandbox \"4fc3ed7188d6024a89c410b8dfe68884a1d3d086efdca86248274fbd4558fd52\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 00:08:48.671082 kubelet[2286]: I0515 00:08:48.671013 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 00:08:48.671612 kubelet[2286]: E0515 00:08:48.671539 2286 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost"
May 15 00:08:49.054769 kubelet[2286]: E0515 00:08:49.054683 2286 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="7s"
May 15 00:08:50.032751 containerd[1499]: time="2025-05-15T00:08:50.032677699Z" level=info msg="CreateContainer within sandbox \"4fc3ed7188d6024a89c410b8dfe68884a1d3d086efdca86248274fbd4558fd52\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c8df3a7b3046f5560952d163f0968e862e8952cc76df48341756a3228a830f9\""
May 15 00:08:50.033575 containerd[1499]: time="2025-05-15T00:08:50.033529313Z" level=info msg="StartContainer for \"8c8df3a7b3046f5560952d163f0968e862e8952cc76df48341756a3228a830f9\""
May 15 00:08:50.064965 systemd[1]: Started cri-containerd-8c8df3a7b3046f5560952d163f0968e862e8952cc76df48341756a3228a830f9.scope - libcontainer container 8c8df3a7b3046f5560952d163f0968e862e8952cc76df48341756a3228a830f9.
May 15 00:08:50.550087 containerd[1499]: time="2025-05-15T00:08:50.549991987Z" level=info msg="CreateContainer within sandbox \"181da5bf2bb7c65dbee189367e8599d0d3545be6b56b77109c4c5daaf75fe512\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e4f1825b9538fb07fd821e50bdec27d8c58c0c0ca2b5e5d81c5f565418fd117\""
May 15 00:08:50.551818 containerd[1499]: time="2025-05-15T00:08:50.550937185Z" level=info msg="StartContainer for \"8c8df3a7b3046f5560952d163f0968e862e8952cc76df48341756a3228a830f9\" returns successfully"
May 15 00:08:50.552043 containerd[1499]: time="2025-05-15T00:08:50.551855581Z" level=info msg="StartContainer for \"6e4f1825b9538fb07fd821e50bdec27d8c58c0c0ca2b5e5d81c5f565418fd117\""
May 15 00:08:50.591041 systemd[1]: Started cri-containerd-6e4f1825b9538fb07fd821e50bdec27d8c58c0c0ca2b5e5d81c5f565418fd117.scope - libcontainer container 6e4f1825b9538fb07fd821e50bdec27d8c58c0c0ca2b5e5d81c5f565418fd117.
May 15 00:08:51.401590 containerd[1499]: time="2025-05-15T00:08:51.401503175Z" level=info msg="CreateContainer within sandbox \"7cd9268f31e5dbaf1d860ae7d05643fed1526be7d5e050840958d38922e3df76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9fb1c643743b85c5a967ddd5356898845638d9d54f47e27bbe5d774b76d81ec1\"" May 15 00:08:51.402145 containerd[1499]: time="2025-05-15T00:08:51.401525147Z" level=info msg="StartContainer for \"6e4f1825b9538fb07fd821e50bdec27d8c58c0c0ca2b5e5d81c5f565418fd117\" returns successfully" May 15 00:08:51.402338 containerd[1499]: time="2025-05-15T00:08:51.402289981Z" level=info msg="StartContainer for \"9fb1c643743b85c5a967ddd5356898845638d9d54f47e27bbe5d774b76d81ec1\"" May 15 00:08:51.458064 systemd[1]: run-containerd-runc-k8s.io-9fb1c643743b85c5a967ddd5356898845638d9d54f47e27bbe5d774b76d81ec1-runc.B0v1Px.mount: Deactivated successfully. May 15 00:08:51.480965 systemd[1]: Started cri-containerd-9fb1c643743b85c5a967ddd5356898845638d9d54f47e27bbe5d774b76d81ec1.scope - libcontainer container 9fb1c643743b85c5a967ddd5356898845638d9d54f47e27bbe5d774b76d81ec1. 
May 15 00:08:51.563956 kubelet[2286]: E0515 00:08:51.563923 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:51.565855 kubelet[2286]: E0515 00:08:51.564755 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:51.721631 containerd[1499]: time="2025-05-15T00:08:51.721529295Z" level=info msg="StartContainer for \"9fb1c643743b85c5a967ddd5356898845638d9d54f47e27bbe5d774b76d81ec1\" returns successfully" May 15 00:08:51.963341 kubelet[2286]: E0515 00:08:51.963207 2286 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f8abb0df15c31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:08:36.440480817 +0000 UTC m=+0.504111113,LastTimestamp:2025-05-15 00:08:36.440480817 +0000 UTC m=+0.504111113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:08:52.181542 kubelet[2286]: E0515 00:08:52.181106 2286 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183f8abb0e4f597d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:08:36.446640509 +0000 UTC m=+0.510270785,LastTimestamp:2025-05-15 00:08:36.446640509 +0000 UTC m=+0.510270785,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:08:52.358394 kubelet[2286]: E0515 00:08:52.358211 2286 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:08:52.565722 kubelet[2286]: E0515 00:08:52.565682 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:52.566144 kubelet[2286]: E0515 00:08:52.566026 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:52.566203 kubelet[2286]: E0515 00:08:52.566176 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:52.967082 kubelet[2286]: E0515 00:08:52.967016 2286 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 15 00:08:53.568315 kubelet[2286]: E0515 00:08:53.568273 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:53.568315 kubelet[2286]: E0515 00:08:53.568291 2286 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:54.277897 
kubelet[2286]: E0515 00:08:54.277810 2286 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 15 00:08:55.073916 kubelet[2286]: I0515 00:08:55.073678 2286 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:08:55.089200 kubelet[2286]: I0515 00:08:55.088867 2286 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:08:55.089200 kubelet[2286]: E0515 00:08:55.088920 2286 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 00:08:55.110242 kubelet[2286]: E0515 00:08:55.110199 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:55.211363 kubelet[2286]: E0515 00:08:55.211276 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:55.311920 kubelet[2286]: E0515 00:08:55.311865 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:55.412446 kubelet[2286]: E0515 00:08:55.412166 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:55.513428 kubelet[2286]: E0515 00:08:55.513353 2286 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:56.428272 kubelet[2286]: I0515 00:08:56.427916 2286 apiserver.go:52] "Watching apiserver" May 15 00:08:56.446070 kubelet[2286]: I0515 00:08:56.446013 2286 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:08:56.988855 systemd[1]: Reloading requested from client PID 2562 ('systemctl') (unit session-7.scope)... May 15 00:08:56.988884 systemd[1]: Reloading... 
May 15 00:08:57.104058 zram_generator::config[2604]: No configuration found. May 15 00:08:57.259370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:08:57.376816 systemd[1]: Reloading finished in 387 ms. May 15 00:08:57.430950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:57.457989 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:08:57.458293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:57.458356 systemd[1]: kubelet.service: Consumed 1.135s CPU time, 121.0M memory peak, 0B memory swap peak. May 15 00:08:57.468646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:08:57.648294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:08:57.654473 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:08:57.727122 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:08:57.727122 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:08:57.727122 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:08:57.727788 kubelet[2646]: I0515 00:08:57.727117 2646 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:08:57.743789 kubelet[2646]: I0515 00:08:57.743718 2646 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:08:57.743789 kubelet[2646]: I0515 00:08:57.743764 2646 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:08:57.744121 kubelet[2646]: I0515 00:08:57.744092 2646 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:08:57.746045 kubelet[2646]: I0515 00:08:57.745725 2646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:08:57.748360 kubelet[2646]: I0515 00:08:57.748182 2646 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:08:57.752104 kubelet[2646]: E0515 00:08:57.752060 2646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:08:57.752104 kubelet[2646]: I0515 00:08:57.752100 2646 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:08:57.758741 kubelet[2646]: I0515 00:08:57.758696 2646 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:08:57.758948 kubelet[2646]: I0515 00:08:57.758931 2646 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:08:57.759152 kubelet[2646]: I0515 00:08:57.759118 2646 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:08:57.759393 kubelet[2646]: I0515 00:08:57.759152 2646 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 15 00:08:57.759504 kubelet[2646]: I0515 00:08:57.759403 2646 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:08:57.759504 kubelet[2646]: I0515 00:08:57.759417 2646 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:08:57.759504 kubelet[2646]: I0515 00:08:57.759480 2646 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:57.759664 kubelet[2646]: I0515 00:08:57.759623 2646 kubelet.go:408] "Attempting to sync node with API server" May 15 00:08:57.759664 kubelet[2646]: I0515 00:08:57.759643 2646 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:08:57.759736 kubelet[2646]: I0515 00:08:57.759684 2646 kubelet.go:314] "Adding apiserver pod source" May 15 00:08:57.759736 kubelet[2646]: I0515 00:08:57.759703 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:08:57.764344 kubelet[2646]: I0515 00:08:57.764151 2646 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 00:08:57.764629 kubelet[2646]: I0515 00:08:57.764602 2646 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.765137 2646 server.go:1269] "Started kubelet" May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.765362 2646 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.765470 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.765928 2646 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.766420 2646 server.go:460] "Adding debug handlers to kubelet server" May 15 00:08:57.769653 
kubelet[2646]: E0515 00:08:57.768424 2646 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.768727 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:08:57.769653 kubelet[2646]: I0515 00:08:57.769251 2646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:08:57.770024 kubelet[2646]: I0515 00:08:57.769695 2646 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:08:57.770024 kubelet[2646]: I0515 00:08:57.769799 2646 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:08:57.770024 kubelet[2646]: I0515 00:08:57.769994 2646 reconciler.go:26] "Reconciler: start to sync state" May 15 00:08:57.770316 kubelet[2646]: E0515 00:08:57.770291 2646 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:08:57.773464 kubelet[2646]: I0515 00:08:57.773426 2646 factory.go:221] Registration of the systemd container factory successfully May 15 00:08:57.773563 kubelet[2646]: I0515 00:08:57.773546 2646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:08:57.781061 kubelet[2646]: I0515 00:08:57.780986 2646 factory.go:221] Registration of the containerd container factory successfully May 15 00:08:57.787414 kubelet[2646]: I0515 00:08:57.787349 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:08:57.789214 kubelet[2646]: I0515 00:08:57.789158 2646 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:08:57.789300 kubelet[2646]: I0515 00:08:57.789222 2646 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:08:57.789300 kubelet[2646]: I0515 00:08:57.789252 2646 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:08:57.789359 kubelet[2646]: E0515 00:08:57.789310 2646 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:08:57.831380 kubelet[2646]: I0515 00:08:57.831328 2646 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:08:57.831380 kubelet[2646]: I0515 00:08:57.831353 2646 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:08:57.831380 kubelet[2646]: I0515 00:08:57.831372 2646 state_mem.go:36] "Initialized new in-memory state store" May 15 00:08:57.831600 kubelet[2646]: I0515 00:08:57.831543 2646 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:08:57.831600 kubelet[2646]: I0515 00:08:57.831557 2646 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:08:57.831600 kubelet[2646]: I0515 00:08:57.831577 2646 policy_none.go:49] "None policy: Start" May 15 00:08:57.832517 kubelet[2646]: I0515 00:08:57.832495 2646 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:08:57.832658 kubelet[2646]: I0515 00:08:57.832528 2646 state_mem.go:35] "Initializing new in-memory state store" May 15 00:08:57.832750 kubelet[2646]: I0515 00:08:57.832736 2646 state_mem.go:75] "Updated machine memory state" May 15 00:08:57.838273 kubelet[2646]: I0515 00:08:57.838231 2646 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:08:57.838501 kubelet[2646]: I0515 00:08:57.838475 2646 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:08:57.838550 kubelet[2646]: I0515 00:08:57.838495 2646 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:08:57.839086 kubelet[2646]: I0515 00:08:57.839070 2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:08:57.944986 kubelet[2646]: I0515 00:08:57.944915 2646 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:08:58.071022 kubelet[2646]: I0515 00:08:58.070934 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc6e98582f35631794f802289a44cbb6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6e98582f35631794f802289a44cbb6\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:58.071022 kubelet[2646]: I0515 00:08:58.071003 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc6e98582f35631794f802289a44cbb6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bc6e98582f35631794f802289a44cbb6\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:58.071022 kubelet[2646]: I0515 00:08:58.071040 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc6e98582f35631794f802289a44cbb6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bc6e98582f35631794f802289a44cbb6\") " pod="kube-system/kube-apiserver-localhost" May 15 00:08:58.071285 kubelet[2646]: I0515 00:08:58.071065 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:58.071285 kubelet[2646]: I0515 00:08:58.071087 2646 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:08:58.071285 kubelet[2646]: I0515 00:08:58.071108 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:58.071285 kubelet[2646]: I0515 00:08:58.071134 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:58.071285 kubelet[2646]: I0515 00:08:58.071154 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:58.071482 kubelet[2646]: I0515 00:08:58.071190 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:08:58.294399 kubelet[2646]: E0515 00:08:58.294219 2646 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:58.294399 kubelet[2646]: E0515 00:08:58.294232 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:58.325106 kubelet[2646]: E0515 00:08:58.324912 2646 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 00:08:58.325396 kubelet[2646]: E0515 00:08:58.325350 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:58.331889 kubelet[2646]: I0515 00:08:58.331818 2646 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 00:08:58.332093 kubelet[2646]: I0515 00:08:58.331971 2646 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:08:58.384966 sudo[2681]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:08:58.385481 sudo[2681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 00:08:58.761048 kubelet[2646]: I0515 00:08:58.761004 2646 apiserver.go:52] "Watching apiserver" May 15 00:08:58.771364 kubelet[2646]: I0515 00:08:58.771052 2646 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:08:58.810189 kubelet[2646]: E0515 00:08:58.810137 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:58.810189 kubelet[2646]: E0515 00:08:58.810172 2646 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:58.828069 kubelet[2646]: E0515 00:08:58.827530 2646 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:08:58.828069 kubelet[2646]: E0515 00:08:58.827769 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:08:58.911394 kubelet[2646]: I0515 00:08:58.910960 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.910937033 podStartE2EDuration="1.910937033s" podCreationTimestamp="2025-05-15 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:58.87135602 +0000 UTC m=+1.207954340" watchObservedRunningTime="2025-05-15 00:08:58.910937033 +0000 UTC m=+1.247535343" May 15 00:08:58.927556 kubelet[2646]: I0515 00:08:58.927475 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9274511109999999 podStartE2EDuration="1.927451111s" podCreationTimestamp="2025-05-15 00:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:58.91168735 +0000 UTC m=+1.248285660" watchObservedRunningTime="2025-05-15 00:08:58.927451111 +0000 UTC m=+1.264049441" May 15 00:08:58.950324 kubelet[2646]: I0515 00:08:58.950028 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.950003665 podStartE2EDuration="1.950003665s" podCreationTimestamp="2025-05-15 00:08:57 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:08:58.927741204 +0000 UTC m=+1.264339534" watchObservedRunningTime="2025-05-15 00:08:58.950003665 +0000 UTC m=+1.286601975" May 15 00:08:59.172969 sudo[2681]: pam_unix(sudo:session): session closed for user root May 15 00:08:59.811518 kubelet[2646]: E0515 00:08:59.811462 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:01.654666 sudo[1700]: pam_unix(sudo:session): session closed for user root May 15 00:09:01.660912 sshd[1699]: Connection closed by 10.0.0.1 port 49302 May 15 00:09:01.669743 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 15 00:09:01.675588 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:49302.service: Deactivated successfully. May 15 00:09:01.678978 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:09:01.679352 systemd[1]: session-7.scope: Consumed 5.794s CPU time, 149.5M memory peak, 0B memory swap peak. May 15 00:09:01.682433 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. May 15 00:09:01.686049 systemd-logind[1486]: Removed session 7. May 15 00:09:02.171332 kubelet[2646]: I0515 00:09:02.171278 2646 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:09:02.172081 containerd[1499]: time="2025-05-15T00:09:02.171993536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 00:09:02.172575 kubelet[2646]: I0515 00:09:02.172216 2646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:09:03.103193 systemd[1]: Created slice kubepods-besteffort-pod487ca65a_a40b_478a_a1a3_8a15002d3b24.slice - libcontainer container kubepods-besteffort-pod487ca65a_a40b_478a_a1a3_8a15002d3b24.slice. May 15 00:09:03.132758 systemd[1]: Created slice kubepods-burstable-pod37772911_5698_443f_8d95_f01a5b4476c2.slice - libcontainer container kubepods-burstable-pod37772911_5698_443f_8d95_f01a5b4476c2.slice. May 15 00:09:03.210239 systemd[1]: Created slice kubepods-besteffort-pod92073cc0_a49a_4df4_892d_42d1cf149021.slice - libcontainer container kubepods-besteffort-pod92073cc0_a49a_4df4_892d_42d1cf149021.slice. May 15 00:09:03.212554 kubelet[2646]: I0515 00:09:03.212074 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-run\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.212554 kubelet[2646]: I0515 00:09:03.212125 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-bpf-maps\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.212554 kubelet[2646]: I0515 00:09:03.212150 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-hostproc\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.212554 kubelet[2646]: I0515 00:09:03.212174 2646 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37772911-5698-443f-8d95-f01a5b4476c2-clustermesh-secrets\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.212554 kubelet[2646]: I0515 00:09:03.212280 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-etc-cni-netd\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.212554 kubelet[2646]: I0515 00:09:03.212302 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-lib-modules\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213256 kubelet[2646]: I0515 00:09:03.212322 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-xtables-lock\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213256 kubelet[2646]: I0515 00:09:03.212347 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29zp\" (UniqueName: \"kubernetes.io/projected/487ca65a-a40b-478a-a1a3-8a15002d3b24-kube-api-access-v29zp\") pod \"kube-proxy-dp96j\" (UID: \"487ca65a-a40b-478a-a1a3-8a15002d3b24\") " pod="kube-system/kube-proxy-dp96j" May 15 00:09:03.213256 kubelet[2646]: I0515 00:09:03.212374 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-hubble-tls\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213256 kubelet[2646]: I0515 00:09:03.212395 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/487ca65a-a40b-478a-a1a3-8a15002d3b24-xtables-lock\") pod \"kube-proxy-dp96j\" (UID: \"487ca65a-a40b-478a-a1a3-8a15002d3b24\") " pod="kube-system/kube-proxy-dp96j" May 15 00:09:03.213256 kubelet[2646]: I0515 00:09:03.212432 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/487ca65a-a40b-478a-a1a3-8a15002d3b24-kube-proxy\") pod \"kube-proxy-dp96j\" (UID: \"487ca65a-a40b-478a-a1a3-8a15002d3b24\") " pod="kube-system/kube-proxy-dp96j" May 15 00:09:03.213256 kubelet[2646]: I0515 00:09:03.212459 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cni-path\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213454 kubelet[2646]: I0515 00:09:03.212488 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-cgroup\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213454 kubelet[2646]: I0515 00:09:03.212511 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-kernel\") pod \"cilium-5lfxb\" (UID: 
\"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213454 kubelet[2646]: I0515 00:09:03.212532 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/487ca65a-a40b-478a-a1a3-8a15002d3b24-lib-modules\") pod \"kube-proxy-dp96j\" (UID: \"487ca65a-a40b-478a-a1a3-8a15002d3b24\") " pod="kube-system/kube-proxy-dp96j" May 15 00:09:03.213454 kubelet[2646]: I0515 00:09:03.212555 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37772911-5698-443f-8d95-f01a5b4476c2-cilium-config-path\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213454 kubelet[2646]: I0515 00:09:03.212579 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-net\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.213594 kubelet[2646]: I0515 00:09:03.212602 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nms2z\" (UniqueName: \"kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-kube-api-access-nms2z\") pod \"cilium-5lfxb\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") " pod="kube-system/cilium-5lfxb" May 15 00:09:03.313175 kubelet[2646]: I0515 00:09:03.313007 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lphf\" (UniqueName: \"kubernetes.io/projected/92073cc0-a49a-4df4-892d-42d1cf149021-kube-api-access-9lphf\") pod \"cilium-operator-5d85765b45-9t8rp\" (UID: \"92073cc0-a49a-4df4-892d-42d1cf149021\") " 
pod="kube-system/cilium-operator-5d85765b45-9t8rp" May 15 00:09:03.313175 kubelet[2646]: I0515 00:09:03.313078 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92073cc0-a49a-4df4-892d-42d1cf149021-cilium-config-path\") pod \"cilium-operator-5d85765b45-9t8rp\" (UID: \"92073cc0-a49a-4df4-892d-42d1cf149021\") " pod="kube-system/cilium-operator-5d85765b45-9t8rp" May 15 00:09:03.427945 kubelet[2646]: E0515 00:09:03.425246 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.428078 containerd[1499]: time="2025-05-15T00:09:03.426055706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dp96j,Uid:487ca65a-a40b-478a-a1a3-8a15002d3b24,Namespace:kube-system,Attempt:0,}" May 15 00:09:03.439498 kubelet[2646]: E0515 00:09:03.439058 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.440147 containerd[1499]: time="2025-05-15T00:09:03.440072255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lfxb,Uid:37772911-5698-443f-8d95-f01a5b4476c2,Namespace:kube-system,Attempt:0,}" May 15 00:09:03.500308 containerd[1499]: time="2025-05-15T00:09:03.499900802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:03.501518 containerd[1499]: time="2025-05-15T00:09:03.500114947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:03.502028 containerd[1499]: time="2025-05-15T00:09:03.501771308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.502028 containerd[1499]: time="2025-05-15T00:09:03.501913705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.511252 containerd[1499]: time="2025-05-15T00:09:03.510792691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:03.511252 containerd[1499]: time="2025-05-15T00:09:03.510942792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:03.511252 containerd[1499]: time="2025-05-15T00:09:03.510961959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.511252 containerd[1499]: time="2025-05-15T00:09:03.511068015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.519217 kubelet[2646]: E0515 00:09:03.519139 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.520092 containerd[1499]: time="2025-05-15T00:09:03.520001487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9t8rp,Uid:92073cc0-a49a-4df4-892d-42d1cf149021,Namespace:kube-system,Attempt:0,}" May 15 00:09:03.530327 systemd[1]: Started cri-containerd-6a591dda44ba89f2888d91ec84c0a0df3250df9408289a0680a248c4b71ae83f.scope - libcontainer container 6a591dda44ba89f2888d91ec84c0a0df3250df9408289a0680a248c4b71ae83f. 
May 15 00:09:03.549237 systemd[1]: Started cri-containerd-a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f.scope - libcontainer container a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f. May 15 00:09:03.582150 containerd[1499]: time="2025-05-15T00:09:03.581745117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:09:03.582150 containerd[1499]: time="2025-05-15T00:09:03.581893935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:09:03.582150 containerd[1499]: time="2025-05-15T00:09:03.581913885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.582150 containerd[1499]: time="2025-05-15T00:09:03.582079716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:09:03.591514 containerd[1499]: time="2025-05-15T00:09:03.591448201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dp96j,Uid:487ca65a-a40b-478a-a1a3-8a15002d3b24,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a591dda44ba89f2888d91ec84c0a0df3250df9408289a0680a248c4b71ae83f\"" May 15 00:09:03.594506 kubelet[2646]: E0515 00:09:03.594466 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.597359 containerd[1499]: time="2025-05-15T00:09:03.597309527Z" level=info msg="CreateContainer within sandbox \"6a591dda44ba89f2888d91ec84c0a0df3250df9408289a0680a248c4b71ae83f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:09:03.600601 containerd[1499]: time="2025-05-15T00:09:03.600558095Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-5lfxb,Uid:37772911-5698-443f-8d95-f01a5b4476c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\"" May 15 00:09:03.603671 kubelet[2646]: E0515 00:09:03.603637 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.607118 containerd[1499]: time="2025-05-15T00:09:03.606890203Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:09:03.614525 systemd[1]: Started cri-containerd-a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261.scope - libcontainer container a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261. May 15 00:09:03.645770 containerd[1499]: time="2025-05-15T00:09:03.645693112Z" level=info msg="CreateContainer within sandbox \"6a591dda44ba89f2888d91ec84c0a0df3250df9408289a0680a248c4b71ae83f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7806d2e8ed90f029361316836d6cb1a9a94cdc5c19e4af60443c697e265dd635\"" May 15 00:09:03.647195 containerd[1499]: time="2025-05-15T00:09:03.646974066Z" level=info msg="StartContainer for \"7806d2e8ed90f029361316836d6cb1a9a94cdc5c19e4af60443c697e265dd635\"" May 15 00:09:03.677845 containerd[1499]: time="2025-05-15T00:09:03.677756784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9t8rp,Uid:92073cc0-a49a-4df4-892d-42d1cf149021,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\"" May 15 00:09:03.679171 kubelet[2646]: E0515 00:09:03.679041 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.690240 systemd[1]: 
Started cri-containerd-7806d2e8ed90f029361316836d6cb1a9a94cdc5c19e4af60443c697e265dd635.scope - libcontainer container 7806d2e8ed90f029361316836d6cb1a9a94cdc5c19e4af60443c697e265dd635. May 15 00:09:03.745218 containerd[1499]: time="2025-05-15T00:09:03.745157654Z" level=info msg="StartContainer for \"7806d2e8ed90f029361316836d6cb1a9a94cdc5c19e4af60443c697e265dd635\" returns successfully" May 15 00:09:03.838649 kubelet[2646]: E0515 00:09:03.838609 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.938397 kubelet[2646]: E0515 00:09:03.938016 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:03.957643 kubelet[2646]: I0515 00:09:03.957555 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dp96j" podStartSLOduration=0.957535923 podStartE2EDuration="957.535923ms" podCreationTimestamp="2025-05-15 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:09:03.852579223 +0000 UTC m=+6.189177533" watchObservedRunningTime="2025-05-15 00:09:03.957535923 +0000 UTC m=+6.294134244" May 15 00:09:04.354073 kubelet[2646]: E0515 00:09:04.354003 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:04.840629 kubelet[2646]: E0515 00:09:04.840585 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:04.840629 kubelet[2646]: E0515 00:09:04.840651 2646 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:05.870325 kubelet[2646]: E0515 00:09:05.869906 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:06.847787 kubelet[2646]: E0515 00:09:06.847733 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:11.155100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856778051.mount: Deactivated successfully. May 15 00:09:15.512452 containerd[1499]: time="2025-05-15T00:09:15.512310517Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:15.516882 containerd[1499]: time="2025-05-15T00:09:15.516714969Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 15 00:09:15.520500 containerd[1499]: time="2025-05-15T00:09:15.520314086Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:15.523526 containerd[1499]: time="2025-05-15T00:09:15.523406180Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.916454558s" May 15 
00:09:15.523526 containerd[1499]: time="2025-05-15T00:09:15.523496658Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 00:09:15.544242 containerd[1499]: time="2025-05-15T00:09:15.544176474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:09:15.567393 containerd[1499]: time="2025-05-15T00:09:15.567304749Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:09:15.801277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160037899.mount: Deactivated successfully. May 15 00:09:15.809471 containerd[1499]: time="2025-05-15T00:09:15.809368415Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\"" May 15 00:09:15.810147 containerd[1499]: time="2025-05-15T00:09:15.810077681Z" level=info msg="StartContainer for \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\"" May 15 00:09:15.859249 systemd[1]: Started cri-containerd-e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d.scope - libcontainer container e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d. 
May 15 00:09:15.916414 containerd[1499]: time="2025-05-15T00:09:15.916350683Z" level=info msg="StartContainer for \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\" returns successfully" May 15 00:09:15.916636 systemd[1]: cri-containerd-e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d.scope: Deactivated successfully. May 15 00:09:16.515010 containerd[1499]: time="2025-05-15T00:09:16.514904268Z" level=info msg="shim disconnected" id=e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d namespace=k8s.io May 15 00:09:16.515010 containerd[1499]: time="2025-05-15T00:09:16.514996780Z" level=warning msg="cleaning up after shim disconnected" id=e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d namespace=k8s.io May 15 00:09:16.515010 containerd[1499]: time="2025-05-15T00:09:16.515008682Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:16.797385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d-rootfs.mount: Deactivated successfully. 
May 15 00:09:16.926249 kubelet[2646]: E0515 00:09:16.926165 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:16.928990 containerd[1499]: time="2025-05-15T00:09:16.928945159Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:09:17.096112 containerd[1499]: time="2025-05-15T00:09:17.095753297Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\"" May 15 00:09:17.098625 containerd[1499]: time="2025-05-15T00:09:17.096893710Z" level=info msg="StartContainer for \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\"" May 15 00:09:17.139062 systemd[1]: Started cri-containerd-3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981.scope - libcontainer container 3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981. May 15 00:09:17.230184 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:09:17.230452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:09:17.230534 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 00:09:17.236201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:09:17.236496 systemd[1]: cri-containerd-3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981.scope: Deactivated successfully. 
May 15 00:09:17.255854 containerd[1499]: time="2025-05-15T00:09:17.255754987Z" level=info msg="StartContainer for \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\" returns successfully" May 15 00:09:17.275587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:09:17.334818 containerd[1499]: time="2025-05-15T00:09:17.334731004Z" level=info msg="shim disconnected" id=3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981 namespace=k8s.io May 15 00:09:17.334818 containerd[1499]: time="2025-05-15T00:09:17.334798299Z" level=warning msg="cleaning up after shim disconnected" id=3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981 namespace=k8s.io May 15 00:09:17.334818 containerd[1499]: time="2025-05-15T00:09:17.334812295Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:17.797994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981-rootfs.mount: Deactivated successfully. May 15 00:09:17.879628 kubelet[2646]: E0515 00:09:17.879564 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:17.881919 containerd[1499]: time="2025-05-15T00:09:17.881651544Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:09:18.823359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028971493.mount: Deactivated successfully. 
May 15 00:09:19.155695 containerd[1499]: time="2025-05-15T00:09:19.155491933Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\"" May 15 00:09:19.156307 containerd[1499]: time="2025-05-15T00:09:19.156264795Z" level=info msg="StartContainer for \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\"" May 15 00:09:19.190082 systemd[1]: Started cri-containerd-495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d.scope - libcontainer container 495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d. May 15 00:09:19.293973 systemd[1]: cri-containerd-495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d.scope: Deactivated successfully. May 15 00:09:19.304667 containerd[1499]: time="2025-05-15T00:09:19.304493071Z" level=info msg="StartContainer for \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\" returns successfully" May 15 00:09:19.509372 containerd[1499]: time="2025-05-15T00:09:19.509288381Z" level=info msg="shim disconnected" id=495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d namespace=k8s.io May 15 00:09:19.509372 containerd[1499]: time="2025-05-15T00:09:19.509360796Z" level=warning msg="cleaning up after shim disconnected" id=495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d namespace=k8s.io May 15 00:09:19.509372 containerd[1499]: time="2025-05-15T00:09:19.509375023Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:19.819604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d-rootfs.mount: Deactivated successfully. 
May 15 00:09:19.870776 containerd[1499]: time="2025-05-15T00:09:19.870697927Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:19.871441 containerd[1499]: time="2025-05-15T00:09:19.871396260Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 15 00:09:19.872751 containerd[1499]: time="2025-05-15T00:09:19.872676417Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:09:19.873882 containerd[1499]: time="2025-05-15T00:09:19.873846770Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.329619913s" May 15 00:09:19.873925 containerd[1499]: time="2025-05-15T00:09:19.873883237Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 00:09:19.879447 containerd[1499]: time="2025-05-15T00:09:19.879414423Z" level=info msg="CreateContainer within sandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:09:19.886882 kubelet[2646]: E0515 00:09:19.886793 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:19.889624 containerd[1499]: time="2025-05-15T00:09:19.889586113Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:09:19.906790 containerd[1499]: time="2025-05-15T00:09:19.906738020Z" level=info msg="CreateContainer within sandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\"" May 15 00:09:19.907863 containerd[1499]: time="2025-05-15T00:09:19.907682922Z" level=info msg="StartContainer for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\"" May 15 00:09:19.926317 containerd[1499]: time="2025-05-15T00:09:19.926263974Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\"" May 15 00:09:19.935989 containerd[1499]: time="2025-05-15T00:09:19.935930652Z" level=info msg="StartContainer for \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\"" May 15 00:09:19.952395 systemd[1]: Started cri-containerd-9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c.scope - libcontainer container 9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c. May 15 00:09:19.976037 systemd[1]: Started cri-containerd-c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a.scope - libcontainer container c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a. 
May 15 00:09:20.001854 containerd[1499]: time="2025-05-15T00:09:20.001758759Z" level=info msg="StartContainer for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" returns successfully" May 15 00:09:20.012553 systemd[1]: cri-containerd-c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a.scope: Deactivated successfully. May 15 00:09:20.018731 containerd[1499]: time="2025-05-15T00:09:20.018670957Z" level=info msg="StartContainer for \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\" returns successfully" May 15 00:09:20.708909 containerd[1499]: time="2025-05-15T00:09:20.708663867Z" level=info msg="shim disconnected" id=c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a namespace=k8s.io May 15 00:09:20.708909 containerd[1499]: time="2025-05-15T00:09:20.708737114Z" level=warning msg="cleaning up after shim disconnected" id=c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a namespace=k8s.io May 15 00:09:20.708909 containerd[1499]: time="2025-05-15T00:09:20.708748936Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:09:20.902886 kubelet[2646]: E0515 00:09:20.902444 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:20.914985 kubelet[2646]: E0515 00:09:20.914921 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:09:20.917630 containerd[1499]: time="2025-05-15T00:09:20.917508900Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:09:20.967084 kubelet[2646]: I0515 00:09:20.966866 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-5d85765b45-9t8rp" podStartSLOduration=1.7732414250000001 podStartE2EDuration="17.966839591s" podCreationTimestamp="2025-05-15 00:09:03 +0000 UTC" firstStartedPulling="2025-05-15 00:09:03.682119733 +0000 UTC m=+6.018718043" lastFinishedPulling="2025-05-15 00:09:19.875717899 +0000 UTC m=+22.212316209" observedRunningTime="2025-05-15 00:09:20.928505025 +0000 UTC m=+23.265103336" watchObservedRunningTime="2025-05-15 00:09:20.966839591 +0000 UTC m=+23.303437901" May 15 00:09:20.972397 containerd[1499]: time="2025-05-15T00:09:20.972284885Z" level=info msg="CreateContainer within sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\"" May 15 00:09:20.973136 containerd[1499]: time="2025-05-15T00:09:20.973076793Z" level=info msg="StartContainer for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\"" May 15 00:09:21.019223 systemd[1]: Started cri-containerd-8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18.scope - libcontainer container 8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18. May 15 00:09:21.091715 containerd[1499]: time="2025-05-15T00:09:21.091625846Z" level=info msg="StartContainer for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" returns successfully" May 15 00:09:21.330970 kubelet[2646]: I0515 00:09:21.329081 2646 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 00:09:21.614283 systemd[1]: Created slice kubepods-burstable-podd3a720f9_1373_4590_a893_8b14496be345.slice - libcontainer container kubepods-burstable-podd3a720f9_1373_4590_a893_8b14496be345.slice. 
May 15 00:09:21.747188 kubelet[2646]: I0515 00:09:21.747107 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49j84\" (UniqueName: \"kubernetes.io/projected/d3a720f9-1373-4590-a893-8b14496be345-kube-api-access-49j84\") pod \"coredns-6f6b679f8f-wzhq4\" (UID: \"d3a720f9-1373-4590-a893-8b14496be345\") " pod="kube-system/coredns-6f6b679f8f-wzhq4"
May 15 00:09:21.747188 kubelet[2646]: I0515 00:09:21.747193 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3a720f9-1373-4590-a893-8b14496be345-config-volume\") pod \"coredns-6f6b679f8f-wzhq4\" (UID: \"d3a720f9-1373-4590-a893-8b14496be345\") " pod="kube-system/coredns-6f6b679f8f-wzhq4"
May 15 00:09:21.810015 systemd[1]: Created slice kubepods-burstable-podb78a9ad7_a79f_4855_baef_a9d527238f21.slice - libcontainer container kubepods-burstable-podb78a9ad7_a79f_4855_baef_a9d527238f21.slice.
May 15 00:09:21.819854 systemd[1]: run-containerd-runc-k8s.io-8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18-runc.JEXiRR.mount: Deactivated successfully.
May 15 00:09:21.919608 kubelet[2646]: E0515 00:09:21.919265 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:21.919608 kubelet[2646]: E0515 00:09:21.919386 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:21.948635 kubelet[2646]: I0515 00:09:21.948562 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwc52\" (UniqueName: \"kubernetes.io/projected/b78a9ad7-a79f-4855-baef-a9d527238f21-kube-api-access-xwc52\") pod \"coredns-6f6b679f8f-nj6nx\" (UID: \"b78a9ad7-a79f-4855-baef-a9d527238f21\") " pod="kube-system/coredns-6f6b679f8f-nj6nx"
May 15 00:09:21.948635 kubelet[2646]: I0515 00:09:21.948633 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b78a9ad7-a79f-4855-baef-a9d527238f21-config-volume\") pod \"coredns-6f6b679f8f-nj6nx\" (UID: \"b78a9ad7-a79f-4855-baef-a9d527238f21\") " pod="kube-system/coredns-6f6b679f8f-nj6nx"
May 15 00:09:22.112795 kubelet[2646]: E0515 00:09:22.112736 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:22.120382 containerd[1499]: time="2025-05-15T00:09:22.120315519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nj6nx,Uid:b78a9ad7-a79f-4855-baef-a9d527238f21,Namespace:kube-system,Attempt:0,}"
May 15 00:09:22.134498 kubelet[2646]: I0515 00:09:22.134373 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5lfxb" podStartSLOduration=7.195724474 podStartE2EDuration="19.134350642s" podCreationTimestamp="2025-05-15 00:09:03 +0000 UTC" firstStartedPulling="2025-05-15 00:09:03.605062249 +0000 UTC m=+5.941660559" lastFinishedPulling="2025-05-15 00:09:15.543688417 +0000 UTC m=+17.880286727" observedRunningTime="2025-05-15 00:09:22.062773426 +0000 UTC m=+24.399371736" watchObservedRunningTime="2025-05-15 00:09:22.134350642 +0000 UTC m=+24.470948952"
May 15 00:09:22.217491 kubelet[2646]: E0515 00:09:22.217411 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:22.218209 containerd[1499]: time="2025-05-15T00:09:22.218150681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzhq4,Uid:d3a720f9-1373-4590-a893-8b14496be345,Namespace:kube-system,Attempt:0,}"
May 15 00:09:22.921280 kubelet[2646]: E0515 00:09:22.921214 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:23.885645 systemd-networkd[1410]: cilium_host: Link UP
May 15 00:09:23.886515 systemd-networkd[1410]: cilium_net: Link UP
May 15 00:09:23.886809 systemd-networkd[1410]: cilium_net: Gained carrier
May 15 00:09:23.888126 systemd-networkd[1410]: cilium_host: Gained carrier
May 15 00:09:23.924046 kubelet[2646]: E0515 00:09:23.923987 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:24.015381 systemd-networkd[1410]: cilium_net: Gained IPv6LL
May 15 00:09:24.027335 systemd-networkd[1410]: cilium_vxlan: Link UP
May 15 00:09:24.027349 systemd-networkd[1410]: cilium_vxlan: Gained carrier
May 15 00:09:24.288924 kernel: NET: Registered PF_ALG protocol family
May 15 00:09:24.536046 systemd-networkd[1410]: cilium_host: Gained IPv6LL
May 15 00:09:25.109082 systemd-networkd[1410]: lxc_health: Link UP
May 15 00:09:25.118180 systemd-networkd[1410]: lxc_health: Gained carrier
May 15 00:09:25.443175 kubelet[2646]: E0515 00:09:25.441320 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:25.530963 systemd-networkd[1410]: lxc3e74daba9731: Link UP
May 15 00:09:25.540848 kernel: eth0: renamed from tmp0f072
May 15 00:09:25.544998 systemd-networkd[1410]: lxc3e74daba9731: Gained carrier
May 15 00:09:25.586740 kernel: eth0: renamed from tmp637c6
May 15 00:09:25.590646 systemd-networkd[1410]: lxc7308c8d3fa4e: Link UP
May 15 00:09:25.593152 systemd-networkd[1410]: lxc7308c8d3fa4e: Gained carrier
May 15 00:09:25.751056 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL
May 15 00:09:25.927652 kubelet[2646]: E0515 00:09:25.927591 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:26.711096 systemd-networkd[1410]: lxc3e74daba9731: Gained IPv6LL
May 15 00:09:26.839066 systemd-networkd[1410]: lxc_health: Gained IPv6LL
May 15 00:09:26.929260 kubelet[2646]: E0515 00:09:26.929212 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:27.351020 systemd-networkd[1410]: lxc7308c8d3fa4e: Gained IPv6LL
May 15 00:09:27.926258 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:53268.service - OpenSSH per-connection server daemon (10.0.0.1:53268).
May 15 00:09:27.997988 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 53268 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:28.000158 sshd-session[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:28.006886 systemd-logind[1486]: New session 8 of user core.
May 15 00:09:28.015137 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 00:09:28.173891 sshd[3855]: Connection closed by 10.0.0.1 port 53268
May 15 00:09:28.175105 sshd-session[3851]: pam_unix(sshd:session): session closed for user core
May 15 00:09:28.180270 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:53268.service: Deactivated successfully.
May 15 00:09:28.182937 systemd[1]: session-8.scope: Deactivated successfully.
May 15 00:09:28.183741 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit.
May 15 00:09:28.185049 systemd-logind[1486]: Removed session 8.
May 15 00:09:29.587614 containerd[1499]: time="2025-05-15T00:09:29.586616130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:09:29.587614 containerd[1499]: time="2025-05-15T00:09:29.587457342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:09:29.587614 containerd[1499]: time="2025-05-15T00:09:29.587471528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:09:29.588207 containerd[1499]: time="2025-05-15T00:09:29.587899042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:09:29.588207 containerd[1499]: time="2025-05-15T00:09:29.587874796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:09:29.588421 containerd[1499]: time="2025-05-15T00:09:29.588306818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:09:29.588421 containerd[1499]: time="2025-05-15T00:09:29.588339479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:09:29.589036 containerd[1499]: time="2025-05-15T00:09:29.588966457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:09:29.618080 systemd[1]: Started cri-containerd-637c62dd758350c110fb796f7ef08d972841536b8c9028141a4f886b9dac5aaa.scope - libcontainer container 637c62dd758350c110fb796f7ef08d972841536b8c9028141a4f886b9dac5aaa.
May 15 00:09:29.630036 systemd[1]: Started cri-containerd-0f0720aef0d8c471631a26d1f23f0975452a943747ebaa091ae6947bfd224e66.scope - libcontainer container 0f0720aef0d8c471631a26d1f23f0975452a943747ebaa091ae6947bfd224e66.
May 15 00:09:29.640022 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:09:29.646759 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:09:29.673914 containerd[1499]: time="2025-05-15T00:09:29.673864226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wzhq4,Uid:d3a720f9-1373-4590-a893-8b14496be345,Namespace:kube-system,Attempt:0,} returns sandbox id \"637c62dd758350c110fb796f7ef08d972841536b8c9028141a4f886b9dac5aaa\""
May 15 00:09:29.674935 kubelet[2646]: E0515 00:09:29.674880 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:29.676947 containerd[1499]: time="2025-05-15T00:09:29.676914278Z" level=info msg="CreateContainer within sandbox \"637c62dd758350c110fb796f7ef08d972841536b8c9028141a4f886b9dac5aaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:09:29.689384 containerd[1499]: time="2025-05-15T00:09:29.689322350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nj6nx,Uid:b78a9ad7-a79f-4855-baef-a9d527238f21,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f0720aef0d8c471631a26d1f23f0975452a943747ebaa091ae6947bfd224e66\""
May 15 00:09:29.690989 kubelet[2646]: E0515 00:09:29.690954 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:29.693360 containerd[1499]: time="2025-05-15T00:09:29.693321476Z" level=info msg="CreateContainer within sandbox \"0f0720aef0d8c471631a26d1f23f0975452a943747ebaa091ae6947bfd224e66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:09:29.700056 containerd[1499]: time="2025-05-15T00:09:29.700006240Z" level=info msg="CreateContainer within sandbox \"637c62dd758350c110fb796f7ef08d972841536b8c9028141a4f886b9dac5aaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7644842aa548bc34a52157fe3c1cc07e2bb2c6437ae95f8b0686d844d10198f9\""
May 15 00:09:29.700510 containerd[1499]: time="2025-05-15T00:09:29.700481423Z" level=info msg="StartContainer for \"7644842aa548bc34a52157fe3c1cc07e2bb2c6437ae95f8b0686d844d10198f9\""
May 15 00:09:29.716993 containerd[1499]: time="2025-05-15T00:09:29.716916312Z" level=info msg="CreateContainer within sandbox \"0f0720aef0d8c471631a26d1f23f0975452a943747ebaa091ae6947bfd224e66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"240e1fbfa7b3e8a12e9e7bf42150aee6c542bb7bd816e57870d1ad06c3698225\""
May 15 00:09:29.717944 containerd[1499]: time="2025-05-15T00:09:29.717905863Z" level=info msg="StartContainer for \"240e1fbfa7b3e8a12e9e7bf42150aee6c542bb7bd816e57870d1ad06c3698225\""
May 15 00:09:29.737198 systemd[1]: Started cri-containerd-7644842aa548bc34a52157fe3c1cc07e2bb2c6437ae95f8b0686d844d10198f9.scope - libcontainer container 7644842aa548bc34a52157fe3c1cc07e2bb2c6437ae95f8b0686d844d10198f9.
May 15 00:09:29.756989 systemd[1]: Started cri-containerd-240e1fbfa7b3e8a12e9e7bf42150aee6c542bb7bd816e57870d1ad06c3698225.scope - libcontainer container 240e1fbfa7b3e8a12e9e7bf42150aee6c542bb7bd816e57870d1ad06c3698225.
May 15 00:09:29.783665 containerd[1499]: time="2025-05-15T00:09:29.783619282Z" level=info msg="StartContainer for \"7644842aa548bc34a52157fe3c1cc07e2bb2c6437ae95f8b0686d844d10198f9\" returns successfully"
May 15 00:09:29.801288 containerd[1499]: time="2025-05-15T00:09:29.801232928Z" level=info msg="StartContainer for \"240e1fbfa7b3e8a12e9e7bf42150aee6c542bb7bd816e57870d1ad06c3698225\" returns successfully"
May 15 00:09:29.943372 kubelet[2646]: E0515 00:09:29.943308 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:29.946056 kubelet[2646]: E0515 00:09:29.946014 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:29.969625 kubelet[2646]: I0515 00:09:29.969536 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wzhq4" podStartSLOduration=26.969517032 podStartE2EDuration="26.969517032s" podCreationTimestamp="2025-05-15 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:09:29.967955036 +0000 UTC m=+32.304553346" watchObservedRunningTime="2025-05-15 00:09:29.969517032 +0000 UTC m=+32.306115342"
May 15 00:09:30.096234 kubelet[2646]: I0515 00:09:30.096161 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nj6nx" podStartSLOduration=27.096139005 podStartE2EDuration="27.096139005s" podCreationTimestamp="2025-05-15 00:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:09:30.095008278 +0000 UTC m=+32.431606599" watchObservedRunningTime="2025-05-15 00:09:30.096139005 +0000 UTC m=+32.432737315"
May 15 00:09:30.949090 kubelet[2646]: E0515 00:09:30.947942 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:30.949090 kubelet[2646]: E0515 00:09:30.948060 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:31.950699 kubelet[2646]: E0515 00:09:31.950646 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:31.951354 kubelet[2646]: E0515 00:09:31.950793 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:09:33.191633 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:53282.service - OpenSSH per-connection server daemon (10.0.0.1:53282).
May 15 00:09:33.285138 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 53282 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:33.287942 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:33.295250 systemd-logind[1486]: New session 9 of user core.
May 15 00:09:33.303097 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 00:09:33.531928 sshd[4050]: Connection closed by 10.0.0.1 port 53282
May 15 00:09:33.532372 sshd-session[4048]: pam_unix(sshd:session): session closed for user core
May 15 00:09:33.537949 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:53282.service: Deactivated successfully.
May 15 00:09:33.540367 systemd[1]: session-9.scope: Deactivated successfully.
May 15 00:09:33.541475 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit.
May 15 00:09:33.543132 systemd-logind[1486]: Removed session 9.
May 15 00:09:38.549646 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:52078.service - OpenSSH per-connection server daemon (10.0.0.1:52078).
May 15 00:09:38.618596 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 52078 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:38.620736 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:38.625993 systemd-logind[1486]: New session 10 of user core.
May 15 00:09:38.637124 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 00:09:38.788035 sshd[4070]: Connection closed by 10.0.0.1 port 52078
May 15 00:09:38.788963 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
May 15 00:09:38.797553 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:52078.service: Deactivated successfully.
May 15 00:09:38.800300 systemd[1]: session-10.scope: Deactivated successfully.
May 15 00:09:38.801333 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
May 15 00:09:38.802787 systemd-logind[1486]: Removed session 10.
May 15 00:09:43.801593 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:52082.service - OpenSSH per-connection server daemon (10.0.0.1:52082).
May 15 00:09:43.866093 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 52082 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:43.868440 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:43.873473 systemd-logind[1486]: New session 11 of user core.
May 15 00:09:43.890215 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 00:09:44.008264 sshd[4086]: Connection closed by 10.0.0.1 port 52082
May 15 00:09:44.008807 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
May 15 00:09:44.013727 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:52082.service: Deactivated successfully.
May 15 00:09:44.016168 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:09:44.016998 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit.
May 15 00:09:44.018380 systemd-logind[1486]: Removed session 11.
May 15 00:09:49.023318 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:47452.service - OpenSSH per-connection server daemon (10.0.0.1:47452).
May 15 00:09:49.072003 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 47452 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:49.074577 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:49.079906 systemd-logind[1486]: New session 12 of user core.
May 15 00:09:49.090115 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 00:09:49.220566 sshd[4102]: Connection closed by 10.0.0.1 port 47452
May 15 00:09:49.221235 sshd-session[4100]: pam_unix(sshd:session): session closed for user core
May 15 00:09:49.234591 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:47452.service: Deactivated successfully.
May 15 00:09:49.237586 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:09:49.239806 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit.
May 15 00:09:49.245136 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:47462.service - OpenSSH per-connection server daemon (10.0.0.1:47462).
May 15 00:09:49.246441 systemd-logind[1486]: Removed session 12.
May 15 00:09:49.300079 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 47462 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:49.302679 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:49.309179 systemd-logind[1486]: New session 13 of user core.
May 15 00:09:49.320200 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 00:09:49.519427 sshd[4117]: Connection closed by 10.0.0.1 port 47462
May 15 00:09:49.520181 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
May 15 00:09:49.532236 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:47462.service: Deactivated successfully.
May 15 00:09:49.534731 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:09:49.540301 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit.
May 15 00:09:49.548625 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:47464.service - OpenSSH per-connection server daemon (10.0.0.1:47464).
May 15 00:09:49.550666 systemd-logind[1486]: Removed session 13.
May 15 00:09:49.596365 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 47464 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:49.599097 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:49.605650 systemd-logind[1486]: New session 14 of user core.
May 15 00:09:49.615259 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 00:09:49.762014 sshd[4129]: Connection closed by 10.0.0.1 port 47464
May 15 00:09:49.761227 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
May 15 00:09:49.769456 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:47464.service: Deactivated successfully.
May 15 00:09:49.774843 systemd[1]: session-14.scope: Deactivated successfully.
May 15 00:09:49.778299 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit.
May 15 00:09:49.781563 systemd-logind[1486]: Removed session 14.
May 15 00:09:54.778469 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:47474.service - OpenSSH per-connection server daemon (10.0.0.1:47474).
May 15 00:09:54.823733 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 47474 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:09:54.825445 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:09:54.830590 systemd-logind[1486]: New session 15 of user core.
May 15 00:09:54.841293 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 00:09:54.963780 sshd[4143]: Connection closed by 10.0.0.1 port 47474
May 15 00:09:54.964185 sshd-session[4141]: pam_unix(sshd:session): session closed for user core
May 15 00:09:54.969008 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:47474.service: Deactivated successfully.
May 15 00:09:54.971644 systemd[1]: session-15.scope: Deactivated successfully.
May 15 00:09:54.972430 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit.
May 15 00:09:54.973406 systemd-logind[1486]: Removed session 15.
May 15 00:09:59.982762 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:45130.service - OpenSSH per-connection server daemon (10.0.0.1:45130).
May 15 00:10:00.036659 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 45130 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:00.039394 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:00.047693 systemd-logind[1486]: New session 16 of user core.
May 15 00:10:00.061246 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 00:10:00.198856 sshd[4159]: Connection closed by 10.0.0.1 port 45130
May 15 00:10:00.199361 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
May 15 00:10:00.203671 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:45130.service: Deactivated successfully.
May 15 00:10:00.206441 systemd[1]: session-16.scope: Deactivated successfully.
May 15 00:10:00.209117 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit.
May 15 00:10:00.211050 systemd-logind[1486]: Removed session 16.
May 15 00:10:05.213063 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:45136.service - OpenSSH per-connection server daemon (10.0.0.1:45136).
May 15 00:10:05.264213 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 45136 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:05.266880 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:05.273005 systemd-logind[1486]: New session 17 of user core.
May 15 00:10:05.286194 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 00:10:05.421334 sshd[4175]: Connection closed by 10.0.0.1 port 45136
May 15 00:10:05.421856 sshd-session[4173]: pam_unix(sshd:session): session closed for user core
May 15 00:10:05.442359 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:45136.service: Deactivated successfully.
May 15 00:10:05.444536 systemd[1]: session-17.scope: Deactivated successfully.
May 15 00:10:05.446314 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit.
May 15 00:10:05.456169 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:45146.service - OpenSSH per-connection server daemon (10.0.0.1:45146).
May 15 00:10:05.457156 systemd-logind[1486]: Removed session 17.
May 15 00:10:05.502316 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 45146 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:05.504268 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:05.508584 systemd-logind[1486]: New session 18 of user core.
May 15 00:10:05.517990 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 00:10:06.400975 sshd[4189]: Connection closed by 10.0.0.1 port 45146
May 15 00:10:06.402748 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
May 15 00:10:06.414194 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:45146.service: Deactivated successfully.
May 15 00:10:06.416360 systemd[1]: session-18.scope: Deactivated successfully.
May 15 00:10:06.418247 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit.
May 15 00:10:06.426350 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342).
May 15 00:10:06.427773 systemd-logind[1486]: Removed session 18.
May 15 00:10:06.473416 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:06.475696 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:06.481934 systemd-logind[1486]: New session 19 of user core.
May 15 00:10:06.492152 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 00:10:07.790980 kubelet[2646]: E0515 00:10:07.790914 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:10.436038 sshd[4201]: Connection closed by 10.0.0.1 port 45342
May 15 00:10:10.446279 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
May 15 00:10:10.452026 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:45342.service: Deactivated successfully.
May 15 00:10:10.454812 systemd[1]: session-19.scope: Deactivated successfully.
May 15 00:10:10.455815 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
May 15 00:10:10.462156 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:45350.service - OpenSSH per-connection server daemon (10.0.0.1:45350).
May 15 00:10:10.462978 systemd-logind[1486]: Removed session 19.
May 15 00:10:10.661341 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 45350 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:10.663137 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:10.667714 systemd-logind[1486]: New session 20 of user core.
May 15 00:10:10.681056 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 00:10:11.292631 sshd[4224]: Connection closed by 10.0.0.1 port 45350
May 15 00:10:11.293136 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
May 15 00:10:11.307052 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:45350.service: Deactivated successfully.
May 15 00:10:11.309544 systemd[1]: session-20.scope: Deactivated successfully.
May 15 00:10:11.311443 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit.
May 15 00:10:11.320499 systemd[1]: Started sshd@21-10.0.0.104:22-10.0.0.1:45354.service - OpenSSH per-connection server daemon (10.0.0.1:45354).
May 15 00:10:11.321979 systemd-logind[1486]: Removed session 20.
May 15 00:10:11.368812 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 45354 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:11.370594 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:11.374685 systemd-logind[1486]: New session 21 of user core.
May 15 00:10:11.385053 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 00:10:11.526058 sshd[4236]: Connection closed by 10.0.0.1 port 45354
May 15 00:10:11.526622 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
May 15 00:10:11.532286 systemd[1]: sshd@21-10.0.0.104:22-10.0.0.1:45354.service: Deactivated successfully.
May 15 00:10:11.534671 systemd[1]: session-21.scope: Deactivated successfully.
May 15 00:10:11.535455 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit.
May 15 00:10:11.537397 systemd-logind[1486]: Removed session 21.
May 15 00:10:16.540105 systemd[1]: Started sshd@22-10.0.0.104:22-10.0.0.1:35058.service - OpenSSH per-connection server daemon (10.0.0.1:35058).
May 15 00:10:16.587756 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 35058 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:16.589456 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:16.593921 systemd-logind[1486]: New session 22 of user core.
May 15 00:10:16.603007 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 00:10:16.740088 sshd[4251]: Connection closed by 10.0.0.1 port 35058
May 15 00:10:16.740540 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
May 15 00:10:16.743662 systemd[1]: sshd@22-10.0.0.104:22-10.0.0.1:35058.service: Deactivated successfully.
May 15 00:10:16.746016 systemd[1]: session-22.scope: Deactivated successfully.
May 15 00:10:16.747775 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit.
May 15 00:10:16.748770 systemd-logind[1486]: Removed session 22.
May 15 00:10:21.754378 systemd[1]: Started sshd@23-10.0.0.104:22-10.0.0.1:35070.service - OpenSSH per-connection server daemon (10.0.0.1:35070).
May 15 00:10:21.803451 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 35070 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:21.805567 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:21.811145 systemd-logind[1486]: New session 23 of user core.
May 15 00:10:21.818014 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 00:10:21.972749 sshd[4267]: Connection closed by 10.0.0.1 port 35070
May 15 00:10:21.973545 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
May 15 00:10:21.979878 systemd[1]: sshd@23-10.0.0.104:22-10.0.0.1:35070.service: Deactivated successfully.
May 15 00:10:21.983064 systemd[1]: session-23.scope: Deactivated successfully.
May 15 00:10:21.984380 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit.
May 15 00:10:21.985871 systemd-logind[1486]: Removed session 23.
May 15 00:10:22.791213 kubelet[2646]: E0515 00:10:22.791127 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:26.985899 systemd[1]: Started sshd@24-10.0.0.104:22-10.0.0.1:47284.service - OpenSSH per-connection server daemon (10.0.0.1:47284).
May 15 00:10:27.035673 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 47284 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:27.037539 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:27.042606 systemd-logind[1486]: New session 24 of user core.
May 15 00:10:27.047993 systemd[1]: Started session-24.scope - Session 24 of User core.
May 15 00:10:27.171040 sshd[4284]: Connection closed by 10.0.0.1 port 47284
May 15 00:10:27.171580 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
May 15 00:10:27.176459 systemd[1]: sshd@24-10.0.0.104:22-10.0.0.1:47284.service: Deactivated successfully.
May 15 00:10:27.179083 systemd[1]: session-24.scope: Deactivated successfully.
May 15 00:10:27.180031 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit.
May 15 00:10:27.181602 systemd-logind[1486]: Removed session 24.
May 15 00:10:28.791147 kubelet[2646]: E0515 00:10:28.790620 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:28.791771 kubelet[2646]: E0515 00:10:28.791441 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:30.790952 kubelet[2646]: E0515 00:10:30.790879 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:32.185947 systemd[1]: Started sshd@25-10.0.0.104:22-10.0.0.1:47290.service - OpenSSH per-connection server daemon (10.0.0.1:47290).
May 15 00:10:32.241578 sshd[4297]: Accepted publickey for core from 10.0.0.1 port 47290 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:32.269123 sshd-session[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:32.274342 systemd-logind[1486]: New session 25 of user core.
May 15 00:10:32.279969 systemd[1]: Started session-25.scope - Session 25 of User core.
May 15 00:10:32.420970 sshd[4299]: Connection closed by 10.0.0.1 port 47290
May 15 00:10:32.421358 sshd-session[4297]: pam_unix(sshd:session): session closed for user core
May 15 00:10:32.425206 systemd[1]: sshd@25-10.0.0.104:22-10.0.0.1:47290.service: Deactivated successfully.
May 15 00:10:32.427133 systemd[1]: session-25.scope: Deactivated successfully.
May 15 00:10:32.427875 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit.
May 15 00:10:32.428993 systemd-logind[1486]: Removed session 25.
May 15 00:10:34.790279 kubelet[2646]: E0515 00:10:34.790124 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:37.445727 systemd[1]: Started sshd@26-10.0.0.104:22-10.0.0.1:39514.service - OpenSSH per-connection server daemon (10.0.0.1:39514).
May 15 00:10:37.501815 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 39514 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:37.504364 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:37.511871 systemd-logind[1486]: New session 26 of user core.
May 15 00:10:37.521376 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 00:10:37.655035 sshd[4316]: Connection closed by 10.0.0.1 port 39514
May 15 00:10:37.655623 sshd-session[4314]: pam_unix(sshd:session): session closed for user core
May 15 00:10:37.664884 systemd[1]: sshd@26-10.0.0.104:22-10.0.0.1:39514.service: Deactivated successfully.
May 15 00:10:37.667425 systemd[1]: session-26.scope: Deactivated successfully.
May 15 00:10:37.670548 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit.
May 15 00:10:37.679582 systemd[1]: Started sshd@27-10.0.0.104:22-10.0.0.1:39518.service - OpenSSH per-connection server daemon (10.0.0.1:39518).
May 15 00:10:37.681291 systemd-logind[1486]: Removed session 26.
May 15 00:10:37.730047 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 39518 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E
May 15 00:10:37.732350 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:10:37.737379 systemd-logind[1486]: New session 27 of user core.
May 15 00:10:37.747086 systemd[1]: Started session-27.scope - Session 27 of User core.
May 15 00:10:38.790808 kubelet[2646]: E0515 00:10:38.790726 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:40.032547 containerd[1499]: time="2025-05-15T00:10:40.032291767Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:10:40.036344 containerd[1499]: time="2025-05-15T00:10:40.036305551Z" level=info msg="StopContainer for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" with timeout 2 (s)"
May 15 00:10:40.042018 containerd[1499]: time="2025-05-15T00:10:40.041965086Z" level=info msg="Stop container \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" with signal terminated"
May 15 00:10:40.051295 systemd-networkd[1410]: lxc_health: Link DOWN
May 15 00:10:40.051309 systemd-networkd[1410]: lxc_health: Lost carrier
May 15 00:10:40.096528 systemd[1]: cri-containerd-8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18.scope: Deactivated successfully.
May 15 00:10:40.097502 systemd[1]: cri-containerd-8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18.scope: Consumed 8.306s CPU time.
May 15 00:10:40.102435 containerd[1499]: time="2025-05-15T00:10:40.102380038Z" level=info msg="StopContainer for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" with timeout 30 (s)"
May 15 00:10:40.103909 containerd[1499]: time="2025-05-15T00:10:40.103466088Z" level=info msg="Stop container \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" with signal terminated"
May 15 00:10:40.116784 systemd[1]: cri-containerd-9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c.scope: Deactivated successfully.
May 15 00:10:40.130657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18-rootfs.mount: Deactivated successfully.
May 15 00:10:40.144383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c-rootfs.mount: Deactivated successfully.
May 15 00:10:40.158934 containerd[1499]: time="2025-05-15T00:10:40.158811084Z" level=info msg="shim disconnected" id=8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18 namespace=k8s.io
May 15 00:10:40.158934 containerd[1499]: time="2025-05-15T00:10:40.158909192Z" level=warning msg="cleaning up after shim disconnected" id=8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18 namespace=k8s.io
May 15 00:10:40.158934 containerd[1499]: time="2025-05-15T00:10:40.158927276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:10:40.159334 containerd[1499]: time="2025-05-15T00:10:40.159058788Z" level=info msg="shim disconnected" id=9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c namespace=k8s.io
May 15 00:10:40.159334 containerd[1499]: time="2025-05-15T00:10:40.159133521Z" level=warning msg="cleaning up after shim disconnected" id=9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c namespace=k8s.io
May 15 00:10:40.159334 containerd[1499]: time="2025-05-15T00:10:40.159150093Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:10:40.186337 containerd[1499]: time="2025-05-15T00:10:40.186223574Z" level=info msg="StopContainer for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" returns successfully"
May 15 00:10:40.186592 containerd[1499]: time="2025-05-15T00:10:40.186244354Z" level=info msg="StopContainer for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" returns successfully"
May 15 00:10:40.190060 containerd[1499]: time="2025-05-15T00:10:40.189994844Z" level=info msg="StopPodSandbox for \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\""
May 15 00:10:40.191544 containerd[1499]: time="2025-05-15T00:10:40.191482633Z" level=info msg="StopPodSandbox for \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\""
May 15 00:10:40.199733 containerd[1499]: time="2025-05-15T00:10:40.190062444Z" level=info msg="Container to stop \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:10:40.200173 containerd[1499]: time="2025-05-15T00:10:40.191548939Z" level=info msg="Container to stop \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:10:40.200173 containerd[1499]: time="2025-05-15T00:10:40.199865160Z" level=info msg="Container to stop \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:10:40.200173 containerd[1499]: time="2025-05-15T00:10:40.199877094Z" level=info msg="Container to stop \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:10:40.200173 containerd[1499]: time="2025-05-15T00:10:40.199885890Z" level=info msg="Container to stop \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:10:40.200173 containerd[1499]: time="2025-05-15T00:10:40.199895579Z" level=info msg="Container to stop \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:10:40.202606 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261-shm.mount: Deactivated successfully.
May 15 00:10:40.208791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f-shm.mount: Deactivated successfully.
May 15 00:10:40.212041 systemd[1]: cri-containerd-a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f.scope: Deactivated successfully.
May 15 00:10:40.214775 systemd[1]: cri-containerd-a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261.scope: Deactivated successfully.
May 15 00:10:40.250522 containerd[1499]: time="2025-05-15T00:10:40.250445876Z" level=info msg="shim disconnected" id=a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261 namespace=k8s.io
May 15 00:10:40.250522 containerd[1499]: time="2025-05-15T00:10:40.250518665Z" level=warning msg="cleaning up after shim disconnected" id=a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261 namespace=k8s.io
May 15 00:10:40.250522 containerd[1499]: time="2025-05-15T00:10:40.250531078Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:10:40.250976 containerd[1499]: time="2025-05-15T00:10:40.250483177Z" level=info msg="shim disconnected" id=a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f namespace=k8s.io
May 15 00:10:40.250976 containerd[1499]: time="2025-05-15T00:10:40.250685945Z" level=warning msg="cleaning up after shim disconnected" id=a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f namespace=k8s.io
May 15 00:10:40.250976 containerd[1499]: time="2025-05-15T00:10:40.250696916Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:10:40.270673 containerd[1499]: time="2025-05-15T00:10:40.270585762Z" level=info msg="TearDown network for sandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" successfully"
May 15 00:10:40.270673 containerd[1499]: time="2025-05-15T00:10:40.270646960Z" level=info msg="StopPodSandbox for \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" returns successfully"
May 15 00:10:40.271620 containerd[1499]: time="2025-05-15T00:10:40.271270323Z" level=info msg="TearDown network for sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" successfully"
May 15 00:10:40.271620 containerd[1499]: time="2025-05-15T00:10:40.271322723Z" level=info msg="StopPodSandbox for \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" returns successfully"
May 15 00:10:40.400734 kubelet[2646]: I0515 00:10:40.400545 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-bpf-maps\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.400734 kubelet[2646]: I0515 00:10:40.400613 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37772911-5698-443f-8d95-f01a5b4476c2-clustermesh-secrets\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.400734 kubelet[2646]: I0515 00:10:40.400649 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-etc-cni-netd\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.400734 kubelet[2646]: I0515 00:10:40.400672 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-kernel\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.400734 kubelet[2646]: I0515 00:10:40.400692 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-net\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.400734 kubelet[2646]: I0515 00:10:40.400717 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nms2z\" (UniqueName: \"kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-kube-api-access-nms2z\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.401359 kubelet[2646]: I0515 00:10:40.400738 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37772911-5698-443f-8d95-f01a5b4476c2-cilium-config-path\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.401359 kubelet[2646]: I0515 00:10:40.400729 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.401359 kubelet[2646]: I0515 00:10:40.400758 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-run\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.401359 kubelet[2646]: I0515 00:10:40.400807 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.401359 kubelet[2646]: I0515 00:10:40.400877 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.401492 kubelet[2646]: I0515 00:10:40.400885 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-lib-modules\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.401492 kubelet[2646]: I0515 00:10:40.400902 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.401492 kubelet[2646]: I0515 00:10:40.400913 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-xtables-lock\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.401492 kubelet[2646]: I0515 00:10:40.401067 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.401492 kubelet[2646]: I0515 00:10:40.401172 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.402676 kubelet[2646]: I0515 00:10:40.402648 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-hostproc\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.402736 kubelet[2646]: I0515 00:10:40.402678 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-cgroup\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.402771 kubelet[2646]: I0515 00:10:40.402741 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lphf\" (UniqueName: \"kubernetes.io/projected/92073cc0-a49a-4df4-892d-42d1cf149021-kube-api-access-9lphf\") pod \"92073cc0-a49a-4df4-892d-42d1cf149021\" (UID: \"92073cc0-a49a-4df4-892d-42d1cf149021\") "
May 15 00:10:40.402850 kubelet[2646]: I0515 00:10:40.402799 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92073cc0-a49a-4df4-892d-42d1cf149021-cilium-config-path\") pod \"92073cc0-a49a-4df4-892d-42d1cf149021\" (UID: \"92073cc0-a49a-4df4-892d-42d1cf149021\") "
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.402886 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-hubble-tls\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.402912 2646 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cni-path\") pod \"37772911-5698-443f-8d95-f01a5b4476c2\" (UID: \"37772911-5698-443f-8d95-f01a5b4476c2\") "
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.402976 2646 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.402986 2646 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.402999 2646 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.403008 2646 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.403019 2646 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.404888 kubelet[2646]: I0515 00:10:40.403030 2646 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.405158 kubelet[2646]: I0515 00:10:40.403059 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cni-path" (OuterVolumeSpecName: "cni-path") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.405158 kubelet[2646]: I0515 00:10:40.403084 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.405158 kubelet[2646]: I0515 00:10:40.403104 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-hostproc" (OuterVolumeSpecName: "hostproc") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.405158 kubelet[2646]: I0515 00:10:40.403123 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:10:40.406101 kubelet[2646]: I0515 00:10:40.406027 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-kube-api-access-nms2z" (OuterVolumeSpecName: "kube-api-access-nms2z") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "kube-api-access-nms2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 00:10:40.406408 kubelet[2646]: I0515 00:10:40.406376 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37772911-5698-443f-8d95-f01a5b4476c2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 15 00:10:40.409641 kubelet[2646]: I0515 00:10:40.409549 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37772911-5698-443f-8d95-f01a5b4476c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 00:10:40.410287 kubelet[2646]: I0515 00:10:40.410229 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92073cc0-a49a-4df4-892d-42d1cf149021-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92073cc0-a49a-4df4-892d-42d1cf149021" (UID: "92073cc0-a49a-4df4-892d-42d1cf149021"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 00:10:40.411070 kubelet[2646]: I0515 00:10:40.410998 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92073cc0-a49a-4df4-892d-42d1cf149021-kube-api-access-9lphf" (OuterVolumeSpecName: "kube-api-access-9lphf") pod "92073cc0-a49a-4df4-892d-42d1cf149021" (UID: "92073cc0-a49a-4df4-892d-42d1cf149021"). InnerVolumeSpecName "kube-api-access-9lphf". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 00:10:40.411737 kubelet[2646]: I0515 00:10:40.411673 2646 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "37772911-5698-443f-8d95-f01a5b4476c2" (UID: "37772911-5698-443f-8d95-f01a5b4476c2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504226 2646 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504291 2646 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92073cc0-a49a-4df4-892d-42d1cf149021-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504308 2646 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504320 2646 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504332 2646 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9lphf\" (UniqueName: \"kubernetes.io/projected/92073cc0-a49a-4df4-892d-42d1cf149021-kube-api-access-9lphf\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504343 2646 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504309 kubelet[2646]: I0515 00:10:40.504355 2646 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37772911-5698-443f-8d95-f01a5b4476c2-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504698 kubelet[2646]: I0515 00:10:40.504367 2646 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37772911-5698-443f-8d95-f01a5b4476c2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504698 kubelet[2646]: I0515 00:10:40.504379 2646 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nms2z\" (UniqueName: \"kubernetes.io/projected/37772911-5698-443f-8d95-f01a5b4476c2-kube-api-access-nms2z\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.504698 kubelet[2646]: I0515 00:10:40.504393 2646 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37772911-5698-443f-8d95-f01a5b4476c2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 00:10:40.999628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261-rootfs.mount: Deactivated successfully.
May 15 00:10:40.999807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f-rootfs.mount: Deactivated successfully.
May 15 00:10:40.999926 systemd[1]: var-lib-kubelet-pods-92073cc0\x2da49a\x2d4df4\x2d892d\x2d42d1cf149021-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9lphf.mount: Deactivated successfully.
May 15 00:10:41.000053 systemd[1]: var-lib-kubelet-pods-37772911\x2d5698\x2d443f\x2d8d95\x2df01a5b4476c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnms2z.mount: Deactivated successfully.
May 15 00:10:41.000175 systemd[1]: var-lib-kubelet-pods-37772911\x2d5698\x2d443f\x2d8d95\x2df01a5b4476c2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 00:10:41.000282 systemd[1]: var-lib-kubelet-pods-37772911\x2d5698\x2d443f\x2d8d95\x2df01a5b4476c2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 00:10:41.101336 kubelet[2646]: I0515 00:10:41.101287 2646 scope.go:117] "RemoveContainer" containerID="9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c"
May 15 00:10:41.108809 systemd[1]: Removed slice kubepods-besteffort-pod92073cc0_a49a_4df4_892d_42d1cf149021.slice - libcontainer container kubepods-besteffort-pod92073cc0_a49a_4df4_892d_42d1cf149021.slice.
May 15 00:10:41.111151 containerd[1499]: time="2025-05-15T00:10:41.110656985Z" level=info msg="RemoveContainer for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\""
May 15 00:10:41.114877 systemd[1]: Removed slice kubepods-burstable-pod37772911_5698_443f_8d95_f01a5b4476c2.slice - libcontainer container kubepods-burstable-pod37772911_5698_443f_8d95_f01a5b4476c2.slice.
May 15 00:10:41.115000 systemd[1]: kubepods-burstable-pod37772911_5698_443f_8d95_f01a5b4476c2.slice: Consumed 8.444s CPU time.
May 15 00:10:41.191577 containerd[1499]: time="2025-05-15T00:10:41.191489666Z" level=info msg="RemoveContainer for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" returns successfully"
May 15 00:10:41.191985 kubelet[2646]: I0515 00:10:41.191925 2646 scope.go:117] "RemoveContainer" containerID="9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c"
May 15 00:10:41.192349 containerd[1499]: time="2025-05-15T00:10:41.192276464Z" level=error msg="ContainerStatus for \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\": not found"
May 15 00:10:41.202485 kubelet[2646]: E0515 00:10:41.202173 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\": not found" containerID="9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c"
May 15 00:10:41.202485 kubelet[2646]: I0515 00:10:41.202232 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c"} err="failed to get container status \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9751b3435e8482ddb71928a5303a8323f55567d2a026b5abd394d58a0519682c\": not found"
May 15 00:10:41.202485 kubelet[2646]: I0515 00:10:41.202355 2646 scope.go:117] "RemoveContainer" containerID="8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18"
May 15 00:10:41.204449 containerd[1499]: time="2025-05-15T00:10:41.203986363Z" level=info msg="RemoveContainer for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\""
May 15 00:10:41.209216 containerd[1499]: time="2025-05-15T00:10:41.209154829Z" level=info msg="RemoveContainer for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" returns successfully"
May 15 00:10:41.209505 kubelet[2646]: I0515 00:10:41.209452 2646 scope.go:117] "RemoveContainer" containerID="c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a"
May 15 00:10:41.210973 containerd[1499]: time="2025-05-15T00:10:41.210917453Z" level=info msg="RemoveContainer for \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\""
May 15 00:10:41.215810 containerd[1499]: time="2025-05-15T00:10:41.215760366Z" level=info msg="RemoveContainer for \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\" returns successfully"
May 15 00:10:41.215990 kubelet[2646]: I0515 00:10:41.215962 2646 scope.go:117] "RemoveContainer" containerID="495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d"
May 15 00:10:41.217931 containerd[1499]: time="2025-05-15T00:10:41.217903088Z" level=info msg="RemoveContainer for \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\""
May 15 00:10:41.221677 containerd[1499]: time="2025-05-15T00:10:41.221632759Z" level=info msg="RemoveContainer for \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\" returns successfully"
May 15 00:10:41.221945 kubelet[2646]: I0515 00:10:41.221882 2646 scope.go:117] "RemoveContainer" containerID="3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981"
May 15 00:10:41.223082 containerd[1499]: time="2025-05-15T00:10:41.223051936Z" level=info msg="RemoveContainer for \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\""
May 15 00:10:41.226536 containerd[1499]: time="2025-05-15T00:10:41.226503814Z" level=info msg="RemoveContainer for \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\" returns successfully"
May 15 00:10:41.226672 kubelet[2646]: I0515 00:10:41.226639 2646 scope.go:117] "RemoveContainer" containerID="e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d"
May 15 00:10:41.227777 containerd[1499]: time="2025-05-15T00:10:41.227735914Z" level=info msg="RemoveContainer for \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\""
May 15 00:10:41.231626 containerd[1499]: time="2025-05-15T00:10:41.231572710Z" level=info msg="RemoveContainer for \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\" returns successfully"
May 15 00:10:41.231764 kubelet[2646]: I0515 00:10:41.231738 2646 scope.go:117] "RemoveContainer" containerID="8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18"
May 15 00:10:41.231944 containerd[1499]: time="2025-05-15T00:10:41.231897161Z" level=error msg="ContainerStatus for \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\": not found"
May 15 00:10:41.232066 kubelet[2646]: E0515 00:10:41.232029 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\": not found" containerID="8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18"
May 15 00:10:41.232122 kubelet[2646]: I0515 00:10:41.232066 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18"} err="failed to get container status \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f7c636d892205a82405d7d073d14974c73a1e1d3e0ecbdea0df5d7c8e7f4c18\": not found"
May 15 00:10:41.232122 kubelet[2646]: I0515 00:10:41.232098 2646 scope.go:117] "RemoveContainer"
containerID="c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a" May 15 00:10:41.232273 containerd[1499]: time="2025-05-15T00:10:41.232247001Z" level=error msg="ContainerStatus for \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\": not found" May 15 00:10:41.232376 kubelet[2646]: E0515 00:10:41.232344 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\": not found" containerID="c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a" May 15 00:10:41.232376 kubelet[2646]: I0515 00:10:41.232376 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a"} err="failed to get container status \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9df980bd2ced14c0cbde591ad2c181ac8edbe425d7b1366a969a61d4a40926a\": not found" May 15 00:10:41.232516 kubelet[2646]: I0515 00:10:41.232394 2646 scope.go:117] "RemoveContainer" containerID="495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d" May 15 00:10:41.232790 containerd[1499]: time="2025-05-15T00:10:41.232731900Z" level=error msg="ContainerStatus for \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\": not found" May 15 00:10:41.232978 kubelet[2646]: E0515 00:10:41.232954 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\": not found" containerID="495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d" May 15 00:10:41.233018 kubelet[2646]: I0515 00:10:41.232982 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d"} err="failed to get container status \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"495896be30155a280bfdfae67b43500c782c5a2fca62944de7cd40c345425d0d\": not found" May 15 00:10:41.233018 kubelet[2646]: I0515 00:10:41.233002 2646 scope.go:117] "RemoveContainer" containerID="3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981" May 15 00:10:41.233201 containerd[1499]: time="2025-05-15T00:10:41.233160540Z" level=error msg="ContainerStatus for \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\": not found" May 15 00:10:41.233311 kubelet[2646]: E0515 00:10:41.233287 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\": not found" containerID="3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981" May 15 00:10:41.233367 kubelet[2646]: I0515 00:10:41.233313 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981"} err="failed to get container status \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"3386878d752a2ffc1cf7d223a810a8e1b3fb1997f51674c31fd7d8cf400bf981\": not found" May 15 00:10:41.233367 kubelet[2646]: I0515 00:10:41.233331 2646 scope.go:117] "RemoveContainer" containerID="e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d" May 15 00:10:41.233474 containerd[1499]: time="2025-05-15T00:10:41.233446317Z" level=error msg="ContainerStatus for \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\": not found" May 15 00:10:41.233579 kubelet[2646]: E0515 00:10:41.233553 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\": not found" containerID="e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d" May 15 00:10:41.233627 kubelet[2646]: I0515 00:10:41.233581 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d"} err="failed to get container status \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5e0949ee8c0c06a88069bc97c2f3585a51953d83d233f5a962b71a9f65a445d\": not found" May 15 00:10:41.633199 sshd[4330]: Connection closed by 10.0.0.1 port 39518 May 15 00:10:41.656608 systemd[1]: Started sshd@28-10.0.0.104:22-10.0.0.1:39530.service - OpenSSH per-connection server daemon (10.0.0.1:39530). May 15 00:10:41.661612 sshd-session[4328]: pam_unix(sshd:session): session closed for user core May 15 00:10:41.665766 systemd[1]: sshd@27-10.0.0.104:22-10.0.0.1:39518.service: Deactivated successfully. 
May 15 00:10:41.668964 systemd[1]: session-27.scope: Deactivated successfully. May 15 00:10:41.669245 systemd[1]: session-27.scope: Consumed 1.211s CPU time. May 15 00:10:41.671612 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit. May 15 00:10:41.672965 systemd-logind[1486]: Removed session 27. May 15 00:10:41.742742 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 39530 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:10:41.744721 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:10:41.752473 systemd-logind[1486]: New session 28 of user core. May 15 00:10:41.769100 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 00:10:41.793163 kubelet[2646]: I0515 00:10:41.793101 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37772911-5698-443f-8d95-f01a5b4476c2" path="/var/lib/kubelet/pods/37772911-5698-443f-8d95-f01a5b4476c2/volumes" May 15 00:10:41.794107 kubelet[2646]: I0515 00:10:41.794074 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92073cc0-a49a-4df4-892d-42d1cf149021" path="/var/lib/kubelet/pods/92073cc0-a49a-4df4-892d-42d1cf149021/volumes" May 15 00:10:42.873959 kubelet[2646]: E0515 00:10:42.873896 2646 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:10:43.764917 sshd[4495]: Connection closed by 10.0.0.1 port 39530 May 15 00:10:43.765409 sshd-session[4489]: pam_unix(sshd:session): session closed for user core May 15 00:10:43.775314 systemd[1]: sshd@28-10.0.0.104:22-10.0.0.1:39530.service: Deactivated successfully. May 15 00:10:43.777493 systemd[1]: session-28.scope: Deactivated successfully. May 15 00:10:43.779119 systemd-logind[1486]: Session 28 logged out. Waiting for processes to exit. 
May 15 00:10:43.791148 systemd[1]: Started sshd@29-10.0.0.104:22-10.0.0.1:39534.service - OpenSSH per-connection server daemon (10.0.0.1:39534). May 15 00:10:43.792314 systemd-logind[1486]: Removed session 28. May 15 00:10:43.836918 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 39534 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:10:43.839114 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:10:43.850464 systemd-logind[1486]: New session 29 of user core. May 15 00:10:43.861269 systemd[1]: Started session-29.scope - Session 29 of User core. May 15 00:10:43.918219 sshd[4509]: Connection closed by 10.0.0.1 port 39534 May 15 00:10:43.918602 sshd-session[4507]: pam_unix(sshd:session): session closed for user core May 15 00:10:43.930874 systemd[1]: sshd@29-10.0.0.104:22-10.0.0.1:39534.service: Deactivated successfully. May 15 00:10:43.934519 systemd[1]: session-29.scope: Deactivated successfully. May 15 00:10:43.937222 systemd-logind[1486]: Session 29 logged out. Waiting for processes to exit. May 15 00:10:43.946174 systemd[1]: Started sshd@30-10.0.0.104:22-10.0.0.1:39536.service - OpenSSH per-connection server daemon (10.0.0.1:39536). May 15 00:10:43.947178 systemd-logind[1486]: Removed session 29. May 15 00:10:43.989812 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 39536 ssh2: RSA SHA256:4nEUMsNL9WwOiz3nlbYXUyvejbLFdwbVMD0f0hyTg+E May 15 00:10:43.992036 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:10:43.997886 systemd-logind[1486]: New session 30 of user core. May 15 00:10:44.012176 systemd[1]: Started session-30.scope - Session 30 of User core. 
May 15 00:10:44.111101 kubelet[2646]: E0515 00:10:44.110923 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37772911-5698-443f-8d95-f01a5b4476c2" containerName="apply-sysctl-overwrites" May 15 00:10:44.111101 kubelet[2646]: E0515 00:10:44.110957 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37772911-5698-443f-8d95-f01a5b4476c2" containerName="mount-bpf-fs" May 15 00:10:44.111101 kubelet[2646]: E0515 00:10:44.110965 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92073cc0-a49a-4df4-892d-42d1cf149021" containerName="cilium-operator" May 15 00:10:44.111101 kubelet[2646]: E0515 00:10:44.110972 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37772911-5698-443f-8d95-f01a5b4476c2" containerName="cilium-agent" May 15 00:10:44.111101 kubelet[2646]: E0515 00:10:44.110981 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37772911-5698-443f-8d95-f01a5b4476c2" containerName="mount-cgroup" May 15 00:10:44.111101 kubelet[2646]: E0515 00:10:44.110988 2646 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37772911-5698-443f-8d95-f01a5b4476c2" containerName="clean-cilium-state" May 15 00:10:44.111101 kubelet[2646]: I0515 00:10:44.111008 2646 memory_manager.go:354] "RemoveStaleState removing state" podUID="37772911-5698-443f-8d95-f01a5b4476c2" containerName="cilium-agent" May 15 00:10:44.111101 kubelet[2646]: I0515 00:10:44.111015 2646 memory_manager.go:354] "RemoveStaleState removing state" podUID="92073cc0-a49a-4df4-892d-42d1cf149021" containerName="cilium-operator" May 15 00:10:44.123157 systemd[1]: Created slice kubepods-burstable-podd68830ed_0367_49e8_acad_a46f1dd4358c.slice - libcontainer container kubepods-burstable-podd68830ed_0367_49e8_acad_a46f1dd4358c.slice. 
May 15 00:10:44.227776 kubelet[2646]: I0515 00:10:44.227479 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-cilium-cgroup\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.227776 kubelet[2646]: I0515 00:10:44.227687 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-cni-path\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.227776 kubelet[2646]: I0515 00:10:44.227750 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-lib-modules\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.227776 kubelet[2646]: I0515 00:10:44.227776 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-cilium-run\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.227776 kubelet[2646]: I0515 00:10:44.227800 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d68830ed-0367-49e8-acad-a46f1dd4358c-cilium-config-path\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228160 kubelet[2646]: I0515 00:10:44.227874 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dv48w\" (UniqueName: \"kubernetes.io/projected/d68830ed-0367-49e8-acad-a46f1dd4358c-kube-api-access-dv48w\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228160 kubelet[2646]: I0515 00:10:44.227903 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-xtables-lock\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228160 kubelet[2646]: I0515 00:10:44.227989 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-host-proc-sys-kernel\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228160 kubelet[2646]: I0515 00:10:44.228018 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-host-proc-sys-net\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228160 kubelet[2646]: I0515 00:10:44.228050 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-etc-cni-netd\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228160 kubelet[2646]: I0515 00:10:44.228075 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-bpf-maps\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228375 kubelet[2646]: I0515 00:10:44.228095 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d68830ed-0367-49e8-acad-a46f1dd4358c-hostproc\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228375 kubelet[2646]: I0515 00:10:44.228115 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d68830ed-0367-49e8-acad-a46f1dd4358c-cilium-ipsec-secrets\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228375 kubelet[2646]: I0515 00:10:44.228137 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d68830ed-0367-49e8-acad-a46f1dd4358c-hubble-tls\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.228375 kubelet[2646]: I0515 00:10:44.228160 2646 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d68830ed-0367-49e8-acad-a46f1dd4358c-clustermesh-secrets\") pod \"cilium-fqkph\" (UID: \"d68830ed-0367-49e8-acad-a46f1dd4358c\") " pod="kube-system/cilium-fqkph" May 15 00:10:44.430745 kubelet[2646]: E0515 00:10:44.430525 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:10:44.431862 containerd[1499]: time="2025-05-15T00:10:44.431794894Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqkph,Uid:d68830ed-0367-49e8-acad-a46f1dd4358c,Namespace:kube-system,Attempt:0,}" May 15 00:10:44.464551 containerd[1499]: time="2025-05-15T00:10:44.464239705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:10:44.464551 containerd[1499]: time="2025-05-15T00:10:44.464374684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:10:44.464794 containerd[1499]: time="2025-05-15T00:10:44.464625493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:10:44.466237 containerd[1499]: time="2025-05-15T00:10:44.465933118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:10:44.503860 systemd[1]: Started cri-containerd-5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9.scope - libcontainer container 5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9. 
May 15 00:10:44.546433 containerd[1499]: time="2025-05-15T00:10:44.546275520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqkph,Uid:d68830ed-0367-49e8-acad-a46f1dd4358c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\"" May 15 00:10:44.547469 kubelet[2646]: E0515 00:10:44.547437 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:10:44.550113 containerd[1499]: time="2025-05-15T00:10:44.550073112Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:10:44.575481 containerd[1499]: time="2025-05-15T00:10:44.575236138Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19\"" May 15 00:10:44.576530 containerd[1499]: time="2025-05-15T00:10:44.576242055Z" level=info msg="StartContainer for \"5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19\"" May 15 00:10:44.629277 systemd[1]: Started cri-containerd-5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19.scope - libcontainer container 5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19. May 15 00:10:44.778691 systemd[1]: cri-containerd-5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19.scope: Deactivated successfully. 
May 15 00:10:44.802411 containerd[1499]: time="2025-05-15T00:10:44.802338168Z" level=info msg="StartContainer for \"5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19\" returns successfully" May 15 00:10:44.852596 containerd[1499]: time="2025-05-15T00:10:44.852498693Z" level=info msg="shim disconnected" id=5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19 namespace=k8s.io May 15 00:10:44.852596 containerd[1499]: time="2025-05-15T00:10:44.852581622Z" level=warning msg="cleaning up after shim disconnected" id=5f84155d98e3ceed6f2843b04f7a1a691563aaebb4716261306153c0944faa19 namespace=k8s.io May 15 00:10:44.852596 containerd[1499]: time="2025-05-15T00:10:44.852591501Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:10:45.121206 kubelet[2646]: E0515 00:10:45.121032 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:10:45.124210 containerd[1499]: time="2025-05-15T00:10:45.124136461Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:10:45.694115 containerd[1499]: time="2025-05-15T00:10:45.694030347Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d\"" May 15 00:10:45.694991 containerd[1499]: time="2025-05-15T00:10:45.694944799Z" level=info msg="StartContainer for \"c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d\"" May 15 00:10:45.739152 systemd[1]: Started cri-containerd-c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d.scope - libcontainer container 
c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d. May 15 00:10:45.781503 systemd[1]: cri-containerd-c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d.scope: Deactivated successfully. May 15 00:10:46.119409 containerd[1499]: time="2025-05-15T00:10:46.119212284Z" level=info msg="StartContainer for \"c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d\" returns successfully" May 15 00:10:46.124498 kubelet[2646]: E0515 00:10:46.124467 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:10:46.337025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d-rootfs.mount: Deactivated successfully. May 15 00:10:46.733029 containerd[1499]: time="2025-05-15T00:10:46.732937256Z" level=info msg="shim disconnected" id=c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d namespace=k8s.io May 15 00:10:46.733029 containerd[1499]: time="2025-05-15T00:10:46.733018542Z" level=warning msg="cleaning up after shim disconnected" id=c7cbb813b1829693b6acd806ddcd3c3be36c6b04a69e62856fe8563913a9190d namespace=k8s.io May 15 00:10:46.733029 containerd[1499]: time="2025-05-15T00:10:46.733030144Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:10:47.129205 kubelet[2646]: E0515 00:10:47.129041 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:10:47.131587 containerd[1499]: time="2025-05-15T00:10:47.131400739Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:10:47.875598 kubelet[2646]: E0515 00:10:47.875524 2646 kubelet.go:2901] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:10:48.139723 containerd[1499]: time="2025-05-15T00:10:48.139512724Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08\"" May 15 00:10:48.140617 containerd[1499]: time="2025-05-15T00:10:48.140517989Z" level=info msg="StartContainer for \"3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08\"" May 15 00:10:48.176978 systemd[1]: Started cri-containerd-3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08.scope - libcontainer container 3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08. May 15 00:10:48.460146 containerd[1499]: time="2025-05-15T00:10:48.460085013Z" level=info msg="StartContainer for \"3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08\" returns successfully" May 15 00:10:48.465620 systemd[1]: cri-containerd-3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08.scope: Deactivated successfully. May 15 00:10:48.588514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08-rootfs.mount: Deactivated successfully. 
May 15 00:10:48.819016 containerd[1499]: time="2025-05-15T00:10:48.818751669Z" level=info msg="shim disconnected" id=3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08 namespace=k8s.io
May 15 00:10:48.819016 containerd[1499]: time="2025-05-15T00:10:48.818868112Z" level=warning msg="cleaning up after shim disconnected" id=3dfb63cbf16ad8da07cdcaf2b80cda9b98faa5b8f35c8ff799dac5b6e0466d08 namespace=k8s.io
May 15 00:10:48.819016 containerd[1499]: time="2025-05-15T00:10:48.818883752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:10:49.137345 kubelet[2646]: E0515 00:10:49.137027 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:49.140377 containerd[1499]: time="2025-05-15T00:10:49.140272149Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:10:49.440765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946396438.mount: Deactivated successfully.
May 15 00:10:49.750125 containerd[1499]: time="2025-05-15T00:10:49.750037276Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c\""
May 15 00:10:49.750722 containerd[1499]: time="2025-05-15T00:10:49.750678885Z" level=info msg="StartContainer for \"4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c\""
May 15 00:10:49.795396 systemd[1]: Started cri-containerd-4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c.scope - libcontainer container 4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c.
May 15 00:10:49.836097 systemd[1]: cri-containerd-4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c.scope: Deactivated successfully.
May 15 00:10:49.995788 containerd[1499]: time="2025-05-15T00:10:49.995677446Z" level=info msg="StartContainer for \"4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c\" returns successfully"
May 15 00:10:50.018379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c-rootfs.mount: Deactivated successfully.
May 15 00:10:50.144248 kubelet[2646]: E0515 00:10:50.144199 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:50.423563 containerd[1499]: time="2025-05-15T00:10:50.423362021Z" level=info msg="shim disconnected" id=4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c namespace=k8s.io
May 15 00:10:50.423563 containerd[1499]: time="2025-05-15T00:10:50.423435842Z" level=warning msg="cleaning up after shim disconnected" id=4a1b1c938bf71910d1bfba2af38af01cd8b3d12b41d122850b456a1effd45e0c namespace=k8s.io
May 15 00:10:50.423563 containerd[1499]: time="2025-05-15T00:10:50.423447544Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:10:51.147716 kubelet[2646]: E0515 00:10:51.147682 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:51.149303 containerd[1499]: time="2025-05-15T00:10:51.149271453Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:10:51.205919 kubelet[2646]: I0515 00:10:51.205865 2646 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:10:51Z","lastTransitionTime":"2025-05-15T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 00:10:52.037999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351413144.mount: Deactivated successfully.
May 15 00:10:52.455334 containerd[1499]: time="2025-05-15T00:10:52.455238334Z" level=info msg="CreateContainer within sandbox \"5617b9e558a2deec4f1e29768ae4b533253c99f4410b7cca9a59f82131663dd9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86adbcbe2615fd8bdd7b598d4c004fd132f75f51daf2492f446ef32f87ed8627\""
May 15 00:10:52.455965 containerd[1499]: time="2025-05-15T00:10:52.455949857Z" level=info msg="StartContainer for \"86adbcbe2615fd8bdd7b598d4c004fd132f75f51daf2492f446ef32f87ed8627\""
May 15 00:10:52.508020 systemd[1]: Started cri-containerd-86adbcbe2615fd8bdd7b598d4c004fd132f75f51daf2492f446ef32f87ed8627.scope - libcontainer container 86adbcbe2615fd8bdd7b598d4c004fd132f75f51daf2492f446ef32f87ed8627.
May 15 00:10:52.732454 containerd[1499]: time="2025-05-15T00:10:52.732311851Z" level=info msg="StartContainer for \"86adbcbe2615fd8bdd7b598d4c004fd132f75f51daf2492f446ef32f87ed8627\" returns successfully"
May 15 00:10:53.035230 systemd[1]: run-containerd-runc-k8s.io-86adbcbe2615fd8bdd7b598d4c004fd132f75f51daf2492f446ef32f87ed8627-runc.4yja9U.mount: Deactivated successfully.
May 15 00:10:53.161261 kubelet[2646]: E0515 00:10:53.161225 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:53.198966 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 00:10:53.259345 kubelet[2646]: I0515 00:10:53.259277 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fqkph" podStartSLOduration=10.259260284 podStartE2EDuration="10.259260284s" podCreationTimestamp="2025-05-15 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:10:53.259122049 +0000 UTC m=+115.595720369" watchObservedRunningTime="2025-05-15 00:10:53.259260284 +0000 UTC m=+115.595858594"
May 15 00:10:54.431180 kubelet[2646]: E0515 00:10:54.431117 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:56.797645 systemd-networkd[1410]: lxc_health: Link UP
May 15 00:10:56.805482 systemd-networkd[1410]: lxc_health: Gained carrier
May 15 00:10:57.795170 containerd[1499]: time="2025-05-15T00:10:57.795124108Z" level=info msg="StopPodSandbox for \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\""
May 15 00:10:57.798135 containerd[1499]: time="2025-05-15T00:10:57.796030735Z" level=info msg="TearDown network for sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" successfully"
May 15 00:10:57.798135 containerd[1499]: time="2025-05-15T00:10:57.796088325Z" level=info msg="StopPodSandbox for \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" returns successfully"
May 15 00:10:57.798135 containerd[1499]: time="2025-05-15T00:10:57.796592751Z" level=info msg="RemovePodSandbox for \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\""
May 15 00:10:57.798135 containerd[1499]: time="2025-05-15T00:10:57.796619794Z" level=info msg="Forcibly stopping sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\""
May 15 00:10:57.798135 containerd[1499]: time="2025-05-15T00:10:57.796680340Z" level=info msg="TearDown network for sandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" successfully"
May 15 00:10:57.936872 containerd[1499]: time="2025-05-15T00:10:57.936661969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 15 00:10:57.936872 containerd[1499]: time="2025-05-15T00:10:57.936799062Z" level=info msg="RemovePodSandbox \"a66c3f158334ea0aeafeecd187b759ea35f357835d26fcf39d8affc512a0a69f\" returns successfully"
May 15 00:10:57.937870 containerd[1499]: time="2025-05-15T00:10:57.937722211Z" level=info msg="StopPodSandbox for \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\""
May 15 00:10:57.937921 containerd[1499]: time="2025-05-15T00:10:57.937892456Z" level=info msg="TearDown network for sandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" successfully"
May 15 00:10:57.937921 containerd[1499]: time="2025-05-15T00:10:57.937906183Z" level=info msg="StopPodSandbox for \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" returns successfully"
May 15 00:10:57.938557 containerd[1499]: time="2025-05-15T00:10:57.938381224Z" level=info msg="RemovePodSandbox for \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\""
May 15 00:10:57.938557 containerd[1499]: time="2025-05-15T00:10:57.938412363Z" level=info msg="Forcibly stopping sandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\""
May 15 00:10:57.938557 containerd[1499]: time="2025-05-15T00:10:57.938470344Z" level=info msg="TearDown network for sandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" successfully"
May 15 00:10:58.161238 containerd[1499]: time="2025-05-15T00:10:58.160509994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 15 00:10:58.161238 containerd[1499]: time="2025-05-15T00:10:58.160660742Z" level=info msg="RemovePodSandbox \"a63ad77eb00447cee032a6f746de1c2e784dbaf3d9444a22e4db18305ec49261\" returns successfully"
May 15 00:10:58.433644 kubelet[2646]: E0515 00:10:58.432979 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:10:58.807152 systemd-networkd[1410]: lxc_health: Gained IPv6LL
May 15 00:10:59.173236 kubelet[2646]: E0515 00:10:59.173089 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:11:00.175014 kubelet[2646]: E0515 00:11:00.174918 2646 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:11:08.495563 sshd[4517]: Connection closed by 10.0.0.1 port 39536
May 15 00:11:08.495971 sshd-session[4515]: pam_unix(sshd:session): session closed for user core
May 15 00:11:08.499948 systemd[1]: sshd@30-10.0.0.104:22-10.0.0.1:39536.service: Deactivated successfully.
May 15 00:11:08.501990 systemd[1]: session-30.scope: Deactivated successfully.
May 15 00:11:08.502681 systemd-logind[1486]: Session 30 logged out. Waiting for processes to exit.
May 15 00:11:08.503810 systemd-logind[1486]: Removed session 30.