May 14 23:41:20.915037 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 22:09:34 -00 2025
May 14 23:41:20.915065 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a
May 14 23:41:20.915077 kernel: BIOS-provided physical RAM map:
May 14 23:41:20.915084 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 23:41:20.915090 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 23:41:20.915097 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 23:41:20.915105 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 23:41:20.915111 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 23:41:20.915118 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 23:41:20.915124 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 23:41:20.915131 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 14 23:41:20.915140 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 23:41:20.915146 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 23:41:20.915153 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 23:41:20.915164 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 23:41:20.915172 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 23:41:20.915182 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 14 23:41:20.915189 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 14 23:41:20.915196 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 14 23:41:20.915203 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 14 23:41:20.915210 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 23:41:20.915218 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 23:41:20.915225 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 23:41:20.915232 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 23:41:20.915239 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 23:41:20.915246 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 23:41:20.915253 kernel: NX (Execute Disable) protection: active
May 14 23:41:20.915262 kernel: APIC: Static calls initialized
May 14 23:41:20.915269 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 14 23:41:20.915277 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 14 23:41:20.915284 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 14 23:41:20.915291 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 14 23:41:20.915297 kernel: extended physical RAM map:
May 14 23:41:20.915304 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 14 23:41:20.915312 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 14 23:41:20.915319 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 14 23:41:20.915326 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 14 23:41:20.915333 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 14 23:41:20.915340 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 14 23:41:20.915350 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 14 23:41:20.915361 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 14 23:41:20.915368 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 14 23:41:20.915376 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 14 23:41:20.915383 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 14 23:41:20.915390 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 14 23:41:20.915400 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 14 23:41:20.915407 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 14 23:41:20.915414 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 14 23:41:20.915422 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 14 23:41:20.915429 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 14 23:41:20.915437 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 14 23:41:20.915444 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 14 23:41:20.915451 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 14 23:41:20.915459 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 14 23:41:20.915466 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 14 23:41:20.915475 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 14 23:41:20.915483 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 14 23:41:20.915490 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 23:41:20.915497 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 14 23:41:20.915504 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 23:41:20.915512 kernel: efi: EFI v2.7 by EDK II
May 14 23:41:20.915519 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 14 23:41:20.915527 kernel: random: crng init done
May 14 23:41:20.915534 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 14 23:41:20.915541 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 14 23:41:20.915561 kernel: secureboot: Secure boot disabled
May 14 23:41:20.915571 kernel: SMBIOS 2.8 present.
May 14 23:41:20.915579 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 14 23:41:20.915586 kernel: Hypervisor detected: KVM
May 14 23:41:20.915593 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 23:41:20.915601 kernel: kvm-clock: using sched offset of 2663713425 cycles
May 14 23:41:20.915608 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 23:41:20.915616 kernel: tsc: Detected 2794.748 MHz processor
May 14 23:41:20.915624 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 23:41:20.915632 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 23:41:20.915640 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 14 23:41:20.915650 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 14 23:41:20.915657 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 23:41:20.915665 kernel: Using GB pages for direct mapping
May 14 23:41:20.915672 kernel: ACPI: Early table checksum verification disabled
May 14 23:41:20.915680 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 14 23:41:20.915688 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 14 23:41:20.915695 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:41:20.915703 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:41:20.915710 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 14 23:41:20.915720 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:41:20.915728 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:41:20.915735 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:41:20.915743 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 23:41:20.915750 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 14 23:41:20.915758 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 14 23:41:20.915765 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 14 23:41:20.915773 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 14 23:41:20.915780 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 14 23:41:20.915798 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 14 23:41:20.915805 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 14 23:41:20.915813 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 14 23:41:20.915820 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 14 23:41:20.915828 kernel: No NUMA configuration found
May 14 23:41:20.915835 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 14 23:41:20.915843 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 14 23:41:20.915851 kernel: Zone ranges:
May 14 23:41:20.915859 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 23:41:20.915869 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 14 23:41:20.915877 kernel: Normal empty
May 14 23:41:20.915884 kernel: Movable zone start for each node
May 14 23:41:20.915892 kernel: Early memory node ranges
May 14 23:41:20.915900 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 14 23:41:20.915907 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 14 23:41:20.915915 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 14 23:41:20.915922 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 14 23:41:20.915930 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 14 23:41:20.915937 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 14 23:41:20.915947 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 14 23:41:20.915955 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 14 23:41:20.915963 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 14 23:41:20.915970 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 23:41:20.915978 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 14 23:41:20.915993 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 14 23:41:20.916004 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 23:41:20.916012 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 14 23:41:20.916020 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 14 23:41:20.916027 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 14 23:41:20.916038 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 14 23:41:20.916046 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 14 23:41:20.916057 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 23:41:20.916065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 23:41:20.916073 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 23:41:20.916081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 23:41:20.916089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 23:41:20.916099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 23:41:20.916107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 23:41:20.916115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 23:41:20.916123 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 23:41:20.916131 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 23:41:20.916138 kernel: TSC deadline timer available
May 14 23:41:20.916146 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 14 23:41:20.916154 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 23:41:20.916162 kernel: kvm-guest: KVM setup pv remote TLB flush
May 14 23:41:20.916173 kernel: kvm-guest: setup PV sched yield
May 14 23:41:20.916181 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 14 23:41:20.916188 kernel: Booting paravirtualized kernel on KVM
May 14 23:41:20.916196 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 23:41:20.916205 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 14 23:41:20.916212 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 14 23:41:20.916220 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 14 23:41:20.916228 kernel: pcpu-alloc: [0] 0 1 2 3
May 14 23:41:20.916236 kernel: kvm-guest: PV spinlocks enabled
May 14 23:41:20.916246 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 14 23:41:20.916255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a
May 14 23:41:20.916263 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:41:20.916271 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:41:20.916279 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:41:20.916287 kernel: Fallback order for Node 0: 0
May 14 23:41:20.916295 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 14 23:41:20.916303 kernel: Policy zone: DMA32
May 14 23:41:20.916313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:41:20.916322 kernel: Memory: 2385672K/2565800K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 179872K reserved, 0K cma-reserved)
May 14 23:41:20.916330 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 23:41:20.916337 kernel: ftrace: allocating 37993 entries in 149 pages
May 14 23:41:20.916345 kernel: ftrace: allocated 149 pages with 4 groups
May 14 23:41:20.916353 kernel: Dynamic Preempt: voluntary
May 14 23:41:20.916361 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:41:20.916370 kernel: rcu: RCU event tracing is enabled.
May 14 23:41:20.916378 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 23:41:20.916388 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:41:20.916396 kernel: Rude variant of Tasks RCU enabled.
May 14 23:41:20.916404 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:41:20.916412 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:41:20.916420 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 23:41:20.916428 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 14 23:41:20.916436 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:41:20.916444 kernel: Console: colour dummy device 80x25
May 14 23:41:20.916451 kernel: printk: console [ttyS0] enabled
May 14 23:41:20.916461 kernel: ACPI: Core revision 20230628
May 14 23:41:20.916469 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 23:41:20.916477 kernel: APIC: Switch to symmetric I/O mode setup
May 14 23:41:20.916485 kernel: x2apic enabled
May 14 23:41:20.916493 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 23:41:20.916503 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 14 23:41:20.916512 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 14 23:41:20.916520 kernel: kvm-guest: setup PV IPIs
May 14 23:41:20.916528 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 23:41:20.916538 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 23:41:20.916546 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 14 23:41:20.916577 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 23:41:20.916586 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 23:41:20.916593 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 23:41:20.916601 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 23:41:20.916609 kernel: Spectre V2 : Mitigation: Retpolines
May 14 23:41:20.916617 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 23:41:20.916625 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 23:41:20.916636 kernel: RETBleed: Mitigation: untrained return thunk
May 14 23:41:20.916644 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 23:41:20.916652 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 23:41:20.916660 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 14 23:41:20.916668 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 14 23:41:20.916676 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 14 23:41:20.916684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 23:41:20.916692 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 23:41:20.916702 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 23:41:20.916710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 23:41:20.916718 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 23:41:20.916726 kernel: Freeing SMP alternatives memory: 32K
May 14 23:41:20.916734 kernel: pid_max: default: 32768 minimum: 301
May 14 23:41:20.916742 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:41:20.916750 kernel: landlock: Up and running.
May 14 23:41:20.916758 kernel: SELinux: Initializing.
May 14 23:41:20.916766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:41:20.916777 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:41:20.916785 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 23:41:20.916799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:41:20.916807 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:41:20.916815 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 23:41:20.916823 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 23:41:20.916832 kernel: ... version: 0
May 14 23:41:20.916840 kernel: ... bit width: 48
May 14 23:41:20.916847 kernel: ... generic registers: 6
May 14 23:41:20.916858 kernel: ... value mask: 0000ffffffffffff
May 14 23:41:20.916866 kernel: ... max period: 00007fffffffffff
May 14 23:41:20.916873 kernel: ... fixed-purpose events: 0
May 14 23:41:20.916881 kernel: ... event mask: 000000000000003f
May 14 23:41:20.916889 kernel: signal: max sigframe size: 1776
May 14 23:41:20.916897 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:41:20.916905 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:41:20.916913 kernel: smp: Bringing up secondary CPUs ...
May 14 23:41:20.916920 kernel: smpboot: x86: Booting SMP configuration:
May 14 23:41:20.916931 kernel: .... node #0, CPUs: #1 #2 #3
May 14 23:41:20.916939 kernel: smp: Brought up 1 node, 4 CPUs
May 14 23:41:20.916946 kernel: smpboot: Max logical packages: 1
May 14 23:41:20.916954 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 14 23:41:20.916962 kernel: devtmpfs: initialized
May 14 23:41:20.916970 kernel: x86/mm: Memory block size: 128MB
May 14 23:41:20.916978 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 14 23:41:20.916986 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 14 23:41:20.916994 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 14 23:41:20.917004 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 14 23:41:20.917012 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 14 23:41:20.917020 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 14 23:41:20.917028 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:41:20.917036 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 23:41:20.917044 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:41:20.917052 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:41:20.917060 kernel: audit: initializing netlink subsys (disabled)
May 14 23:41:20.917068 kernel: audit: type=2000 audit(1747266080.849:1): state=initialized audit_enabled=0 res=1
May 14 23:41:20.917078 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:41:20.917086 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 23:41:20.917094 kernel: cpuidle: using governor menu
May 14 23:41:20.917102 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:41:20.917110 kernel: dca service started, version 1.12.1
May 14 23:41:20.917118 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 14 23:41:20.917126 kernel: PCI: Using configuration type 1 for base access
May 14 23:41:20.917134 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 23:41:20.917142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:41:20.917152 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:41:20.917160 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:41:20.917168 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:41:20.917176 kernel: ACPI: Added _OSI(Module Device)
May 14 23:41:20.917183 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:41:20.917191 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:41:20.917199 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:41:20.917207 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:41:20.917215 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 23:41:20.917225 kernel: ACPI: Interpreter enabled
May 14 23:41:20.917233 kernel: ACPI: PM: (supports S0 S3 S5)
May 14 23:41:20.917241 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 23:41:20.917249 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 23:41:20.917256 kernel: PCI: Using E820 reservations for host bridge windows
May 14 23:41:20.917264 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 23:41:20.917272 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 23:41:20.917487 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:41:20.917639 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 23:41:20.917765 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 23:41:20.917776 kernel: PCI host bridge to bus 0000:00
May 14 23:41:20.917925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 23:41:20.918039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 23:41:20.918151 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 23:41:20.918265 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 14 23:41:20.918383 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 14 23:41:20.918498 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 14 23:41:20.918633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 23:41:20.918795 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 23:41:20.918948 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 14 23:41:20.919077 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 14 23:41:20.919205 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 14 23:41:20.919330 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 14 23:41:20.919455 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 14 23:41:20.919599 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 23:41:20.919769 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 14 23:41:20.919925 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 14 23:41:20.920083 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 14 23:41:20.920215 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 14 23:41:20.920370 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 14 23:41:20.920499 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 14 23:41:20.920645 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 14 23:41:20.920770 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 14 23:41:20.920921 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 14 23:41:20.921049 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 14 23:41:20.921203 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 14 23:41:20.921349 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 14 23:41:20.921475 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 14 23:41:20.921633 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 23:41:20.921764 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 23:41:20.921922 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 23:41:20.922049 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 14 23:41:20.922189 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 14 23:41:20.922374 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 23:41:20.922547 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 14 23:41:20.922571 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 23:41:20.922579 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 23:41:20.922587 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 23:41:20.922595 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 23:41:20.922606 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 23:41:20.922614 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 23:41:20.922622 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 23:41:20.922630 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 23:41:20.922638 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 23:41:20.922646 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 23:41:20.922653 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 23:41:20.922661 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 23:41:20.922669 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 23:41:20.922679 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 23:41:20.922697 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 23:41:20.922706 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 23:41:20.922722 kernel: iommu: Default domain type: Translated
May 14 23:41:20.922730 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 23:41:20.922738 kernel: efivars: Registered efivars operations
May 14 23:41:20.922746 kernel: PCI: Using ACPI for IRQ routing
May 14 23:41:20.922753 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 23:41:20.922761 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 14 23:41:20.922769 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 14 23:41:20.922779 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 14 23:41:20.922787 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 14 23:41:20.922803 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 14 23:41:20.922811 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 14 23:41:20.922818 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 14 23:41:20.922826 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 14 23:41:20.922962 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 23:41:20.923086 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 23:41:20.923218 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 23:41:20.923229 kernel: vgaarb: loaded
May 14 23:41:20.923237 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 23:41:20.923245 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 23:41:20.923253 kernel: clocksource: Switched to clocksource kvm-clock
May 14 23:41:20.923261 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:41:20.923269 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:41:20.923277 kernel: pnp: PnP ACPI init
May 14 23:41:20.923435 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 14 23:41:20.923450 kernel: pnp: PnP ACPI: found 6 devices
May 14 23:41:20.923459 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 23:41:20.923466 kernel: NET: Registered PF_INET protocol family
May 14 23:41:20.923475 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:41:20.923500 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:41:20.923513 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:41:20.923521 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:41:20.923530 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:41:20.923540 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:41:20.923548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:41:20.923616 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:41:20.923624 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:41:20.923632 kernel: NET: Registered PF_XDP protocol family
May 14 23:41:20.923764 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 14 23:41:20.923907 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 14 23:41:20.924023 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 23:41:20.924141 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 23:41:20.924255 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 23:41:20.924367 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 14 23:41:20.924481 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 14 23:41:20.924649 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 14 23:41:20.924661 kernel: PCI: CLS 0 bytes, default 64
May 14 23:41:20.924669 kernel: Initialise system trusted keyrings
May 14 23:41:20.924677 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:41:20.924689 kernel: Key type asymmetric registered
May 14 23:41:20.924697 kernel: Asymmetric key parser 'x509' registered
May 14 23:41:20.924706 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 23:41:20.924714 kernel: io scheduler mq-deadline registered
May 14 23:41:20.924722 kernel: io scheduler kyber registered
May 14 23:41:20.924730 kernel: io scheduler bfq registered
May 14 23:41:20.924739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 23:41:20.924747 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 23:41:20.924755 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 23:41:20.924767 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 14 23:41:20.924778 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:41:20.924786 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 23:41:20.924803 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 23:41:20.924811 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 23:41:20.924819 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 23:41:20.926177 kernel: rtc_cmos 00:04: RTC can wake from S4
May 14 23:41:20.926198 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 23:41:20.926322 kernel: rtc_cmos 00:04: registered as rtc0
May 14 23:41:20.926439 kernel: rtc_cmos 00:04: setting system clock to 2025-05-14T23:41:20 UTC (1747266080)
May 14 23:41:20.926606 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 14 23:41:20.926618 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 23:41:20.926626 kernel: efifb: probing for efifb
May 14 23:41:20.926634 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 14 23:41:20.926648 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 14 23:41:20.926656 kernel: efifb: scrolling: redraw
May 14 23:41:20.926664 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 14 23:41:20.926672 kernel: Console: switching to colour frame buffer device 160x50
May 14 23:41:20.926680 kernel: fb0: EFI VGA frame buffer device
May 14 23:41:20.926689 kernel: pstore: Using crash dump compression: deflate
May 14 23:41:20.926697 kernel: pstore: Registered efi_pstore as persistent store backend
May 14 23:41:20.926705 kernel: NET: Registered PF_INET6 protocol family
May 14 23:41:20.926713 kernel: Segment Routing with IPv6
May 14 23:41:20.926724 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:41:20.926732 kernel: NET: Registered PF_PACKET protocol family
May 14 23:41:20.926740 kernel: Key type dns_resolver registered
May 14 23:41:20.926748 kernel: IPI shorthand broadcast: enabled
May 14 23:41:20.926756 kernel: sched_clock: Marking stable (1145003065, 151768478)->(1312197578, -15426035)
May 14 23:41:20.926765 kernel: registered taskstats version 1
May 14 23:41:20.926773 kernel: Loading compiled-in X.509 certificates
May 14 23:41:20.926781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 4f9bc5b8797c7efeb1fcd74892dea83a6cb9d390'
May 14 23:41:20.926800 kernel: Key type .fscrypt registered
May 14 23:41:20.926810 kernel: Key type fscrypt-provisioning registered
May 14 23:41:20.926818 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:41:20.926827 kernel: ima: Allocated hash algorithm: sha1 May 14 23:41:20.926835 kernel: ima: No architecture policies found May 14 23:41:20.926844 kernel: clk: Disabling unused clocks May 14 23:41:20.926852 kernel: Freeing unused kernel image (initmem) memory: 43604K May 14 23:41:20.926860 kernel: Write protecting the kernel read-only data: 40960k May 14 23:41:20.926868 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 14 23:41:20.926877 kernel: Run /init as init process May 14 23:41:20.926887 kernel: with arguments: May 14 23:41:20.926896 kernel: /init May 14 23:41:20.926904 kernel: with environment: May 14 23:41:20.926912 kernel: HOME=/ May 14 23:41:20.926920 kernel: TERM=linux May 14 23:41:20.926928 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 23:41:20.926937 systemd[1]: Successfully made /usr/ read-only. May 14 23:41:20.926949 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:41:20.926961 systemd[1]: Detected virtualization kvm. May 14 23:41:20.926969 systemd[1]: Detected architecture x86-64. May 14 23:41:20.926978 systemd[1]: Running in initrd. May 14 23:41:20.926991 systemd[1]: No hostname configured, using default hostname. May 14 23:41:20.927000 systemd[1]: Hostname set to <localhost>. May 14 23:41:20.927008 systemd[1]: Initializing machine ID from VM UUID. May 14 23:41:20.927017 systemd[1]: Queued start job for default target initrd.target. May 14 23:41:20.927026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:41:20.927038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 14 23:41:20.927047 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 23:41:20.927056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:41:20.927065 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 23:41:20.927075 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 23:41:20.927085 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 23:41:20.927097 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 23:41:20.927105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:41:20.927114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 23:41:20.927123 systemd[1]: Reached target paths.target - Path Units. May 14 23:41:20.927132 systemd[1]: Reached target slices.target - Slice Units. May 14 23:41:20.927140 systemd[1]: Reached target swap.target - Swaps. May 14 23:41:20.927149 systemd[1]: Reached target timers.target - Timer Units. May 14 23:41:20.927158 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:41:20.927167 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:41:20.927178 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 23:41:20.927186 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 23:41:20.927195 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:41:20.927204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:41:20.927213 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 14 23:41:20.927221 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:41:20.927230 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 23:41:20.927239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:41:20.927247 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 23:41:20.927258 systemd[1]: Starting systemd-fsck-usr.service... May 14 23:41:20.927267 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:41:20.927276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 23:41:20.927285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:41:20.927294 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 23:41:20.927302 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:41:20.927314 systemd[1]: Finished systemd-fsck-usr.service. May 14 23:41:20.927345 systemd-journald[193]: Collecting audit messages is disabled. May 14 23:41:20.927369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 23:41:20.927379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:41:20.927388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:41:20.927397 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:41:20.927407 systemd-journald[193]: Journal started May 14 23:41:20.927425 systemd-journald[193]: Runtime Journal (/run/log/journal/a43f896c0a504988b6272b187437f351) is 6M, max 48.2M, 42.2M free. May 14 23:41:20.922745 systemd-modules-load[194]: Inserted module 'overlay' May 14 23:41:20.928838 systemd[1]: Started systemd-journald.service - Journal Service. 
May 14 23:41:20.942773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 23:41:20.944014 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 23:41:20.955836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:41:20.957976 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 23:41:20.964586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 23:41:20.967603 kernel: Bridge firewalling registered May 14 23:41:20.967126 systemd-modules-load[194]: Inserted module 'br_netfilter' May 14 23:41:20.969405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:41:20.972080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:41:20.974688 dracut-cmdline[222]: dracut-dracut-053 May 14 23:41:20.975012 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:41:20.977944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:41:20.980242 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e0c956f61127e47bb23a2bdeb0592b0ff91bd857e2344d0bf321acb67c279f1a May 14 23:41:21.001742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:41:21.004493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 23:41:21.051164 systemd-resolved[267]: Positive Trust Anchors: May 14 23:41:21.051192 systemd-resolved[267]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 23:41:21.051224 systemd-resolved[267]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 23:41:21.053783 systemd-resolved[267]: Defaulting to hostname 'linux'. May 14 23:41:21.055022 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 23:41:21.061091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 23:41:21.066587 kernel: SCSI subsystem initialized May 14 23:41:21.076576 kernel: Loading iSCSI transport class v2.0-870. May 14 23:41:21.087600 kernel: iscsi: registered transport (tcp) May 14 23:41:21.108834 kernel: iscsi: registered transport (qla4xxx) May 14 23:41:21.108871 kernel: QLogic iSCSI HBA Driver May 14 23:41:21.160059 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 23:41:21.161995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 23:41:21.208711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 14 23:41:21.208766 kernel: device-mapper: uevent: version 1.0.3 May 14 23:41:21.209766 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 23:41:21.251587 kernel: raid6: avx2x4 gen() 30070 MB/s May 14 23:41:21.268580 kernel: raid6: avx2x2 gen() 29803 MB/s May 14 23:41:21.285705 kernel: raid6: avx2x1 gen() 25498 MB/s May 14 23:41:21.285797 kernel: raid6: using algorithm avx2x4 gen() 30070 MB/s May 14 23:41:21.303700 kernel: raid6: .... xor() 7785 MB/s, rmw enabled May 14 23:41:21.303788 kernel: raid6: using avx2x2 recovery algorithm May 14 23:41:21.324584 kernel: xor: automatically using best checksumming function avx May 14 23:41:21.470598 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 23:41:21.484566 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 23:41:21.486318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:41:21.523109 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 14 23:41:21.528908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:41:21.531839 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 23:41:21.556055 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation May 14 23:41:21.589976 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:41:21.591436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:41:21.670849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:41:21.673807 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 23:41:21.691324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 23:41:21.693615 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 14 23:41:21.696219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:41:21.696285 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 23:41:21.697791 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 23:41:21.708618 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 14 23:41:21.712619 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 23:41:21.720402 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 23:41:21.720432 kernel: GPT:9289727 != 19775487 May 14 23:41:21.720444 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 23:41:21.720454 kernel: GPT:9289727 != 19775487 May 14 23:41:21.720464 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 23:41:21.720474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:41:21.720343 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 23:41:21.727580 kernel: cryptd: max_cpu_qlen set to 1000 May 14 23:41:21.738608 kernel: libata version 3.00 loaded. May 14 23:41:21.750576 kernel: AVX2 version of gcm_enc/dec engaged. May 14 23:41:21.750613 kernel: AES CTR mode by8 optimization enabled May 14 23:41:21.750624 kernel: ahci 0000:00:1f.2: version 3.0 May 14 23:41:21.750818 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 14 23:41:21.755022 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 14 23:41:21.755193 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 14 23:41:21.758614 kernel: scsi host0: ahci May 14 23:41:21.760567 kernel: scsi host1: ahci May 14 23:41:21.762028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 14 23:41:21.764105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) May 14 23:41:21.764121 kernel: scsi host2: ahci May 14 23:41:21.762126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:41:21.778857 kernel: scsi host3: ahci May 14 23:41:21.779030 kernel: scsi host4: ahci May 14 23:41:21.779176 kernel: scsi host5: ahci May 14 23:41:21.779321 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 14 23:41:21.779333 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 14 23:41:21.779349 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 14 23:41:21.779360 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 14 23:41:21.779371 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 14 23:41:21.779381 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 14 23:41:21.779392 kernel: BTRFS: device fsid 267fa270-7a71-43aa-9209-0280512688b5 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (476) May 14 23:41:21.765741 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:41:21.769212 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:41:21.769277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 23:41:21.774381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:41:21.779537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 23:41:21.812452 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 14 23:41:21.812755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 23:41:21.824824 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 14 23:41:21.831856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 14 23:41:21.831927 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 14 23:41:21.842796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 14 23:41:21.845197 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 23:41:21.845985 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 23:41:21.875807 disk-uuid[558]: Primary Header is updated. May 14 23:41:21.875807 disk-uuid[558]: Secondary Entries is updated. May 14 23:41:21.875807 disk-uuid[558]: Secondary Header is updated. May 14 23:41:21.879571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:41:21.883573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:41:21.886294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 14 23:41:22.083961 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 14 23:41:22.084027 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 14 23:41:22.084038 kernel: ata3.00: applying bridge limits May 14 23:41:22.084049 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 23:41:22.084060 kernel: ata3.00: configured for UDMA/100 May 14 23:41:22.085340 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 14 23:41:22.085422 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 23:41:22.086576 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 23:41:22.087581 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 23:41:22.088573 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 23:41:22.133178 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 14 23:41:22.133404 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 23:41:22.145575 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 14 23:41:22.885589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 23:41:22.885952 disk-uuid[562]: The operation has completed successfully. May 14 23:41:22.912815 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 23:41:22.912936 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 23:41:22.955011 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 23:41:22.974918 sh[594]: Success May 14 23:41:22.987690 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 14 23:41:23.022443 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 23:41:23.026764 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 23:41:23.040956 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 14 23:41:23.046164 kernel: BTRFS info (device dm-0): first mount of filesystem 267fa270-7a71-43aa-9209-0280512688b5 May 14 23:41:23.046194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 23:41:23.046206 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 23:41:23.047945 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 23:41:23.047963 kernel: BTRFS info (device dm-0): using free space tree May 14 23:41:23.052835 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 23:41:23.053423 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 23:41:23.054289 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 23:41:23.056150 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 23:41:23.080145 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:41:23.080203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:41:23.080219 kernel: BTRFS info (device vda6): using free space tree May 14 23:41:23.083568 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:41:23.088606 kernel: BTRFS info (device vda6): last unmount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:41:23.094676 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 23:41:23.095837 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 14 23:41:23.159006 ignition[687]: Ignition 2.20.0 May 14 23:41:23.159616 ignition[687]: Stage: fetch-offline May 14 23:41:23.159671 ignition[687]: no configs at "/usr/lib/ignition/base.d" May 14 23:41:23.159682 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:41:23.159805 ignition[687]: parsed url from cmdline: "" May 14 23:41:23.159809 ignition[687]: no config URL provided May 14 23:41:23.159815 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" May 14 23:41:23.159825 ignition[687]: no config at "/usr/lib/ignition/user.ign" May 14 23:41:23.159865 ignition[687]: op(1): [started] loading QEMU firmware config module May 14 23:41:23.159874 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 23:41:23.168751 ignition[687]: op(1): [finished] loading QEMU firmware config module May 14 23:41:23.180702 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:41:23.185378 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 23:41:23.211813 ignition[687]: parsing config with SHA512: 7ffd6f19ba49532aacb662897a3ad37bfabeb62d25e28491d076af3131f903e70e9faad6b2260b92a0437d5d51623ad06e4295c1fdec8315f0bfce2c9e4b18c8 May 14 23:41:23.217802 unknown[687]: fetched base config from "system" May 14 23:41:23.217815 unknown[687]: fetched user config from "qemu" May 14 23:41:23.218355 ignition[687]: fetch-offline: fetch-offline passed May 14 23:41:23.218438 ignition[687]: Ignition finished successfully May 14 23:41:23.221165 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:41:23.222961 systemd-networkd[780]: lo: Link UP May 14 23:41:23.222965 systemd-networkd[780]: lo: Gained carrier May 14 23:41:23.224638 systemd-networkd[780]: Enumeration completed May 14 23:41:23.224773 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 14 23:41:23.225070 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:41:23.225075 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 23:41:23.226211 systemd-networkd[780]: eth0: Link UP May 14 23:41:23.226214 systemd-networkd[780]: eth0: Gained carrier May 14 23:41:23.226221 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 23:41:23.226862 systemd[1]: Reached target network.target - Network. May 14 23:41:23.228407 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 23:41:23.229230 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 23:41:23.248620 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 23:41:23.258603 ignition[784]: Ignition 2.20.0 May 14 23:41:23.258615 ignition[784]: Stage: kargs May 14 23:41:23.258771 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 14 23:41:23.258783 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:41:23.259616 ignition[784]: kargs: kargs passed May 14 23:41:23.259659 ignition[784]: Ignition finished successfully May 14 23:41:23.266219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 23:41:23.267300 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 23:41:23.293693 ignition[793]: Ignition 2.20.0 May 14 23:41:23.293704 ignition[793]: Stage: disks May 14 23:41:23.293873 ignition[793]: no configs at "/usr/lib/ignition/base.d" May 14 23:41:23.293884 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:41:23.296877 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
May 14 23:41:23.294658 ignition[793]: disks: disks passed May 14 23:41:23.298525 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 23:41:23.294701 ignition[793]: Ignition finished successfully May 14 23:41:23.300368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 23:41:23.302268 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 23:41:23.304326 systemd[1]: Reached target sysinit.target - System Initialization. May 14 23:41:23.306078 systemd[1]: Reached target basic.target - Basic System. May 14 23:41:23.309288 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 23:41:23.335789 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 23:41:23.342077 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 23:41:23.343127 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 23:41:23.434577 kernel: EXT4-fs (vda9): mounted filesystem 81735587-bac5-4d9e-ae49-5642e655af7f r/w with ordered data mode. Quota mode: none. May 14 23:41:23.434838 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 23:41:23.435498 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 23:41:23.438776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:41:23.440954 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 23:41:23.442081 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 23:41:23.442124 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 23:41:23.442147 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 14 23:41:23.452886 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 23:41:23.456610 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 23:41:23.459744 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) May 14 23:41:23.459771 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:41:23.461569 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:41:23.461598 kernel: BTRFS info (device vda6): using free space tree May 14 23:41:23.464572 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:41:23.472005 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 23:41:23.502294 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory May 14 23:41:23.507374 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory May 14 23:41:23.511945 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory May 14 23:41:23.516985 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory May 14 23:41:23.600969 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 23:41:23.604161 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 23:41:23.607244 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 23:41:23.633582 kernel: BTRFS info (device vda6): last unmount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:41:23.645252 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 14 23:41:23.658823 ignition[926]: INFO : Ignition 2.20.0 May 14 23:41:23.658823 ignition[926]: INFO : Stage: mount May 14 23:41:23.660685 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:41:23.660685 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:41:23.660685 ignition[926]: INFO : mount: mount passed May 14 23:41:23.660685 ignition[926]: INFO : Ignition finished successfully May 14 23:41:23.666845 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 23:41:23.668804 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 23:41:24.045630 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 23:41:24.047539 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 23:41:24.070145 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938) May 14 23:41:24.070197 kernel: BTRFS info (device vda6): first mount of filesystem 4c949817-d4f4-485b-8019-80887ee5206f May 14 23:41:24.070209 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 14 23:41:24.071005 kernel: BTRFS info (device vda6): using free space tree May 14 23:41:24.074571 kernel: BTRFS info (device vda6): auto enabling async discard May 14 23:41:24.075749 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 23:41:24.102748 ignition[955]: INFO : Ignition 2.20.0 May 14 23:41:24.102748 ignition[955]: INFO : Stage: files May 14 23:41:24.104673 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:41:24.104673 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:41:24.104673 ignition[955]: DEBUG : files: compiled without relabeling support, skipping May 14 23:41:24.108467 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 23:41:24.108467 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 23:41:24.113249 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 23:41:24.114838 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 23:41:24.114838 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 23:41:24.114010 unknown[955]: wrote ssh authorized keys file for user: core May 14 23:41:24.118613 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 23:41:24.118613 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 14 23:41:24.180354 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 23:41:24.339771 systemd-networkd[780]: eth0: Gained IPv6LL May 14 23:41:24.514697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 23:41:24.514697 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 23:41:24.518908 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 14 23:41:24.973659 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 23:41:25.060726 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 23:41:25.060726 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:41:25.064505 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 23:41:25.367595 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 23:41:25.729773 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 23:41:25.729773 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 14 23:41:25.733320 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 14 23:41:25.754260 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:41:25.759176 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 23:41:25.761332 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 14 23:41:25.761332 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 14 23:41:25.764751 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 14 23:41:25.766582 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 23:41:25.768775 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 23:41:25.770792 ignition[955]: INFO : files: files passed May 14 23:41:25.771711 ignition[955]: INFO : Ignition finished successfully May 14 23:41:25.775324 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 23:41:25.777619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 23:41:25.780738 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 23:41:25.802121 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 23:41:25.803125 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 14 23:41:25.805579 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory May 14 23:41:25.806988 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:41:25.806988 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 23:41:25.812299 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 23:41:25.808303 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:41:25.810438 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 23:41:25.813354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 23:41:25.869973 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 23:41:25.870114 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 23:41:25.872755 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 23:41:25.874411 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 23:41:25.876431 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 23:41:25.877259 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 23:41:25.910333 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:41:25.913983 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 23:41:25.936563 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 23:41:25.937825 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:41:25.940092 systemd[1]: Stopped target timers.target - Timer Units. 
May 14 23:41:25.942083 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 23:41:25.942192 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 23:41:25.944517 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 23:41:25.946089 systemd[1]: Stopped target basic.target - Basic System. May 14 23:41:25.948123 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 23:41:25.950153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 23:41:25.952147 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 23:41:25.954531 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 23:41:25.956745 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 23:41:25.959319 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 23:41:25.961803 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 23:41:25.964180 systemd[1]: Stopped target swap.target - Swaps. May 14 23:41:25.966162 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 23:41:25.966343 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 23:41:25.968904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 23:41:25.970515 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:41:25.972835 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 23:41:25.973016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:41:25.975324 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 23:41:25.975475 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 23:41:25.978141 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 14 23:41:25.978294 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 23:41:25.980345 systemd[1]: Stopped target paths.target - Path Units. May 14 23:41:25.982242 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 23:41:25.983664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:41:25.985939 systemd[1]: Stopped target slices.target - Slice Units. May 14 23:41:25.987905 systemd[1]: Stopped target sockets.target - Socket Units. May 14 23:41:25.989883 systemd[1]: iscsid.socket: Deactivated successfully. May 14 23:41:25.989988 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 23:41:25.992062 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 23:41:25.992185 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 23:41:25.993999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 23:41:25.994168 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 23:41:25.996299 systemd[1]: ignition-files.service: Deactivated successfully. May 14 23:41:25.996453 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 23:41:25.999820 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 23:41:26.002202 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 23:41:26.003965 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 23:41:26.004131 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 23:41:26.006070 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 23:41:26.006220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 23:41:26.014207 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
May 14 23:41:26.022287 ignition[1012]: INFO : Ignition 2.20.0 May 14 23:41:26.022287 ignition[1012]: INFO : Stage: umount May 14 23:41:26.022287 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 23:41:26.022287 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 23:41:26.022287 ignition[1012]: INFO : umount: umount passed May 14 23:41:26.022287 ignition[1012]: INFO : Ignition finished successfully May 14 23:41:26.014358 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 23:41:26.023622 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 23:41:26.023748 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 23:41:26.025234 systemd[1]: Stopped target network.target - Network. May 14 23:41:26.026974 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 23:41:26.027041 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 23:41:26.029082 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 23:41:26.029142 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 23:41:26.031143 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 23:41:26.031206 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 23:41:26.033342 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 23:41:26.033400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 23:41:26.036059 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 23:41:26.038214 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 23:41:26.041603 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 23:41:26.042347 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 23:41:26.042499 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
May 14 23:41:26.046654 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 23:41:26.047303 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 23:41:26.047384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 23:41:26.053147 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 23:41:26.053471 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 23:41:26.053703 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 23:41:26.056798 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 23:41:26.057423 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 23:41:26.057504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 23:41:26.059927 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 23:41:26.061198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 23:41:26.061267 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 23:41:26.063444 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:41:26.063511 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 23:41:26.065752 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 23:41:26.065802 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 23:41:26.068083 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 23:41:26.071217 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 23:41:26.094782 systemd[1]: systemd-udevd.service: Deactivated successfully. 
May 14 23:41:26.094996 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 23:41:26.097420 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 23:41:26.097469 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 23:41:26.099286 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 23:41:26.099327 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:41:26.101281 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 23:41:26.101332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 23:41:26.103576 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 23:41:26.103627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 23:41:26.105349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 23:41:26.105400 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 23:41:26.108309 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 23:41:26.109384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 23:41:26.109439 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 23:41:26.111698 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 23:41:26.111751 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 23:41:26.113727 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 23:41:26.113789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:41:26.115873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 23:41:26.115921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 23:41:26.121770 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 23:41:26.121875 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 23:41:26.129152 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 23:41:26.129261 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 23:41:26.258594 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 23:41:26.258769 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 23:41:26.261085 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 23:41:26.263039 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 23:41:26.263117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 23:41:26.266411 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 23:41:26.287695 systemd[1]: Switching root. May 14 23:41:26.315049 systemd-journald[193]: Journal stopped May 14 23:41:27.647899 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). May 14 23:41:27.647973 kernel: SELinux: policy capability network_peer_controls=1 May 14 23:41:27.647989 kernel: SELinux: policy capability open_perms=1 May 14 23:41:27.648001 kernel: SELinux: policy capability extended_socket_class=1 May 14 23:41:27.648012 kernel: SELinux: policy capability always_check_network=0 May 14 23:41:27.648024 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 23:41:27.648036 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 23:41:27.648048 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 23:41:27.648059 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 23:41:27.648071 kernel: audit: type=1403 audit(1747266086.845:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 23:41:27.648086 systemd[1]: Successfully loaded SELinux policy in 39.772ms. 
May 14 23:41:27.648124 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.962ms. May 14 23:41:27.648137 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 23:41:27.648150 systemd[1]: Detected virtualization kvm. May 14 23:41:27.648162 systemd[1]: Detected architecture x86-64. May 14 23:41:27.648174 systemd[1]: Detected first boot. May 14 23:41:27.648187 systemd[1]: Initializing machine ID from VM UUID. May 14 23:41:27.648199 zram_generator::config[1059]: No configuration found. May 14 23:41:27.648215 kernel: Guest personality initialized and is inactive May 14 23:41:27.648226 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 23:41:27.648238 kernel: Initialized host personality May 14 23:41:27.648249 kernel: NET: Registered PF_VSOCK protocol family May 14 23:41:27.648261 systemd[1]: Populated /etc with preset unit settings. May 14 23:41:27.648275 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 23:41:27.648287 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 23:41:27.648299 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 23:41:27.648312 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 23:41:27.648327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 23:41:27.648339 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 23:41:27.648351 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 23:41:27.648364 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
May 14 23:41:27.648377 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 23:41:27.648389 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 23:41:27.648401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 23:41:27.648413 systemd[1]: Created slice user.slice - User and Session Slice. May 14 23:41:27.648428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 23:41:27.648440 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 23:41:27.648453 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 23:41:27.648465 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 23:41:27.648477 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 23:41:27.648490 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 23:41:27.648502 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 23:41:27.648515 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 23:41:27.648529 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 23:41:27.648543 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 23:41:27.648569 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 23:41:27.648582 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 23:41:27.648602 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 23:41:27.648614 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
May 14 23:41:27.648626 systemd[1]: Reached target slices.target - Slice Units. May 14 23:41:27.648638 systemd[1]: Reached target swap.target - Swaps. May 14 23:41:27.648650 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 23:41:27.648666 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 23:41:27.648678 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 23:41:27.648690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 23:41:27.648702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 23:41:27.648714 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 23:41:27.648726 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 23:41:27.648744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 23:41:27.648756 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 23:41:27.648768 systemd[1]: Mounting media.mount - External Media Directory... May 14 23:41:27.648788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:41:27.648801 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 23:41:27.648813 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 23:41:27.648830 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 23:41:27.648844 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 23:41:27.648856 systemd[1]: Reached target machines.target - Containers. May 14 23:41:27.648868 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
May 14 23:41:27.648881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 23:41:27.648895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 23:41:27.648907 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 23:41:27.648919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 23:41:27.648931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 23:41:27.648943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 23:41:27.648956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 23:41:27.648967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 23:41:27.648980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 23:41:27.648993 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 23:41:27.649007 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 23:41:27.649019 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 23:41:27.649031 systemd[1]: Stopped systemd-fsck-usr.service. May 14 23:41:27.649044 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 23:41:27.649057 kernel: fuse: init (API version 7.39) May 14 23:41:27.649069 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 23:41:27.649081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 14 23:41:27.649093 kernel: loop: module loaded May 14 23:41:27.649106 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 23:41:27.649121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 23:41:27.649134 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 23:41:27.649146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 23:41:27.649158 systemd[1]: verity-setup.service: Deactivated successfully. May 14 23:41:27.649173 systemd[1]: Stopped verity-setup.service. May 14 23:41:27.649201 systemd-journald[1141]: Collecting audit messages is disabled. May 14 23:41:27.649229 kernel: ACPI: bus type drm_connector registered May 14 23:41:27.649242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 23:41:27.649255 systemd-journald[1141]: Journal started May 14 23:41:27.649277 systemd-journald[1141]: Runtime Journal (/run/log/journal/a43f896c0a504988b6272b187437f351) is 6M, max 48.2M, 42.2M free. May 14 23:41:27.425825 systemd[1]: Queued start job for default target multi-user.target. May 14 23:41:27.437524 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 14 23:41:27.438015 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 23:41:27.660710 systemd[1]: Started systemd-journald.service - Journal Service. May 14 23:41:27.661779 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 23:41:27.662933 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 23:41:27.664194 systemd[1]: Mounted media.mount - External Media Directory. May 14 23:41:27.665311 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
May 14 23:41:27.666512 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 23:41:27.667720 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 23:41:27.669138 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 23:41:27.670821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 23:41:27.672429 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 23:41:27.672898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 23:41:27.674406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 23:41:27.674647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 23:41:27.676120 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 23:41:27.676345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 23:41:27.677728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 23:41:27.677985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 23:41:27.679586 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 23:41:27.679832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 23:41:27.681389 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 23:41:27.681663 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 23:41:27.683165 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 23:41:27.684683 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 23:41:27.686264 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 23:41:27.687883 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
May 14 23:41:27.703439 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:41:27.706489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:41:27.708823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:41:27.709963 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:41:27.709992 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:41:27.711993 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:41:27.720665 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:41:27.723202 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:41:27.724459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:41:27.727156 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:41:27.730006 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:41:27.731236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:41:27.733059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:41:27.734405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:41:27.735682 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:41:27.737842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:41:27.747480 systemd-journald[1141]: Time spent on flushing to /var/log/journal/a43f896c0a504988b6272b187437f351 is 19.529ms for 1054 entries.
May 14 23:41:27.747480 systemd-journald[1141]: System Journal (/var/log/journal/a43f896c0a504988b6272b187437f351) is 8M, max 195.6M, 187.6M free.
May 14 23:41:27.773098 systemd-journald[1141]: Received client request to flush runtime journal.
May 14 23:41:27.744729 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:41:27.754111 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:41:27.756062 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:41:27.759201 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:41:27.772218 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:41:27.776202 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:41:27.779851 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:41:27.783190 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:41:27.789903 kernel: loop0: detected capacity change from 0 to 109808
May 14 23:41:27.789672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:41:27.799353 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:41:27.801303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:41:27.810584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:41:27.814937 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 14 23:41:27.817368 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 14 23:41:27.817388 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 14 23:41:27.828003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:41:27.829738 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:41:27.833238 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:41:27.840749 kernel: loop1: detected capacity change from 0 to 218376
May 14 23:41:27.868410 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:41:27.872661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:41:27.879654 kernel: loop2: detected capacity change from 0 to 151640
May 14 23:41:27.904006 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
May 14 23:41:27.904378 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
May 14 23:41:27.910104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:41:27.919678 kernel: loop3: detected capacity change from 0 to 109808
May 14 23:41:27.929594 kernel: loop4: detected capacity change from 0 to 218376
May 14 23:41:27.938570 kernel: loop5: detected capacity change from 0 to 151640
May 14 23:41:27.951092 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 23:41:27.953041 (sd-merge)[1208]: Merged extensions into '/usr'.
May 14 23:41:27.958922 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:41:27.958938 systemd[1]: Reloading...
May 14 23:41:28.020594 zram_generator::config[1239]: No configuration found.
May 14 23:41:28.075067 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:41:28.146722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:41:28.212689 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:41:28.212837 systemd[1]: Reloading finished in 253 ms.
May 14 23:41:28.233767 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:41:28.235360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:41:28.254324 systemd[1]: Starting ensure-sysext.service...
May 14 23:41:28.256744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:41:28.276951 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
May 14 23:41:28.276972 systemd[1]: Reloading...
May 14 23:41:28.281535 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:41:28.281872 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:41:28.282817 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:41:28.283087 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
May 14 23:41:28.283166 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
May 14 23:41:28.287977 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:41:28.287992 systemd-tmpfiles[1274]: Skipping /boot
May 14 23:41:28.302463 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:41:28.302477 systemd-tmpfiles[1274]: Skipping /boot
May 14 23:41:28.339617 zram_generator::config[1306]: No configuration found.
May 14 23:41:28.444367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:41:28.509478 systemd[1]: Reloading finished in 232 ms.
May 14 23:41:28.522333 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:41:28.542435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:41:28.551749 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:41:28.554263 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:41:28.564404 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:41:28.568793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:41:28.572216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:41:28.576734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:41:28.580625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:41:28.580817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:41:28.588677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:41:28.592874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:41:28.596608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:41:28.597763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:41:28.597871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:41:28.602625 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:41:28.603699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:41:28.605252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:41:28.605482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:41:28.607480 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:41:28.615861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:41:28.616074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:41:28.620819 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:41:28.621058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:41:28.626406 augenrules[1371]: No rules
May 14 23:41:28.626832 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
May 14 23:41:28.628427 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:41:28.628794 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:41:28.633135 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:41:28.640245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:41:28.641738 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:41:28.642977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:41:28.644775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:41:28.651901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:41:28.654115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:41:28.659830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:41:28.661020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:41:28.661130 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:41:28.664842 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:41:28.665976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 23:41:28.667456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:41:28.674724 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:41:28.676512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:41:28.678427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:41:28.678681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:41:28.681310 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:41:28.681547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:41:28.684292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:41:28.684533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:41:28.687428 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:41:28.687702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:41:28.693067 augenrules[1382]: /sbin/augenrules: No change
May 14 23:41:28.698105 systemd[1]: Finished ensure-sysext.service.
May 14 23:41:28.715017 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 23:41:28.721940 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:41:28.725642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:41:28.725722 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:41:28.727970 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 23:41:28.729419 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:41:28.734574 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1432)
May 14 23:41:28.735879 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:41:28.748833 augenrules[1437]: No rules
May 14 23:41:28.751730 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:41:28.752652 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:41:28.779119 systemd-resolved[1346]: Positive Trust Anchors:
May 14 23:41:28.779145 systemd-resolved[1346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:41:28.779175 systemd-resolved[1346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:41:28.784720 systemd-resolved[1346]: Defaulting to hostname 'linux'.
May 14 23:41:28.786667 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:41:28.798376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:41:28.807642 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 14 23:41:28.813588 kernel: ACPI: button: Power Button [PWRF]
May 14 23:41:28.821545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 23:41:28.825806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:41:28.835216 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 14 23:41:28.835517 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 23:41:28.835812 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 14 23:41:28.836200 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 23:41:28.841066 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 23:41:28.843613 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 14 23:41:28.843724 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:41:28.848111 systemd-networkd[1434]: lo: Link UP
May 14 23:41:28.848468 systemd-networkd[1434]: lo: Gained carrier
May 14 23:41:28.850494 systemd-networkd[1434]: Enumeration completed
May 14 23:41:28.850887 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:41:28.850891 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:41:28.851353 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:41:28.852698 systemd-networkd[1434]: eth0: Link UP
May 14 23:41:28.852775 systemd-networkd[1434]: eth0: Gained carrier
May 14 23:41:28.852837 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:41:28.853266 systemd[1]: Reached target network.target - Network.
May 14 23:41:28.854954 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:41:28.858675 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:41:28.860251 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:41:28.867637 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 23:41:28.868268 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
May 14 23:41:30.294047 systemd-resolved[1346]: Clock change detected. Flushing caches.
May 14 23:41:30.294161 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 23:41:30.294276 systemd-timesyncd[1435]: Initial clock synchronization to Wed 2025-05-14 23:41:30.293933 UTC.
May 14 23:41:30.317569 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 23:41:30.343630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:41:30.347780 kernel: mousedev: PS/2 mouse device common for all mice
May 14 23:41:30.403516 kernel: kvm_amd: TSC scaling supported
May 14 23:41:30.403611 kernel: kvm_amd: Nested Virtualization enabled
May 14 23:41:30.403626 kernel: kvm_amd: Nested Paging enabled
May 14 23:41:30.403638 kernel: kvm_amd: LBR virtualization supported
May 14 23:41:30.404558 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 14 23:41:30.404583 kernel: kvm_amd: Virtual GIF supported
May 14 23:41:30.424160 kernel: EDAC MC: Ver: 3.0.0
May 14 23:41:30.442608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:41:30.462431 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:41:30.465351 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:41:30.491701 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:41:30.528703 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:41:30.530363 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:41:30.531531 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:41:30.532743 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:41:30.534035 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 23:41:30.535509 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 23:41:30.536894 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 23:41:30.538271 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 23:41:30.539548 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 23:41:30.539576 systemd[1]: Reached target paths.target - Path Units.
May 14 23:41:30.540528 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:41:30.542410 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 23:41:30.545181 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 23:41:30.548751 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 23:41:30.550332 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 23:41:30.551647 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 23:41:30.562514 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 23:41:30.564023 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 23:41:30.566382 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:41:30.568014 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 23:41:30.569246 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:41:30.570263 systemd[1]: Reached target basic.target - Basic System.
May 14 23:41:30.571310 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 23:41:30.571334 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 23:41:30.572262 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 23:41:30.574609 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 23:41:30.577216 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:41:30.576609 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 23:41:30.578717 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 23:41:30.581374 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 23:41:30.584250 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 23:41:30.588211 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 23:41:30.590156 jq[1478]: false
May 14 23:41:30.590588 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 23:41:30.600236 dbus-daemon[1477]: [system] SELinux support is enabled
May 14 23:41:30.603250 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 14 23:41:30.603549 extend-filesystems[1479]: Found loop3
May 14 23:41:30.604992 extend-filesystems[1479]: Found loop4
May 14 23:41:30.604992 extend-filesystems[1479]: Found loop5
May 14 23:41:30.604992 extend-filesystems[1479]: Found sr0
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda1
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda2
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda3
May 14 23:41:30.604992 extend-filesystems[1479]: Found usr
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda4
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda6
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda7
May 14 23:41:30.604992 extend-filesystems[1479]: Found vda9
May 14 23:41:30.604992 extend-filesystems[1479]: Checking size of /dev/vda9
May 14 23:41:30.611404 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 23:41:30.616562 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 23:41:30.617235 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 23:41:30.618379 systemd[1]: Starting update-engine.service - Update Engine...
May 14 23:41:30.621697 extend-filesystems[1479]: Resized partition /dev/vda9
May 14 23:41:30.624017 extend-filesystems[1498]: resize2fs 1.47.2 (1-Jan-2025)
May 14 23:41:30.625380 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 23:41:30.626325 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 23:41:30.629902 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:41:30.630150 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 14 23:41:30.631689 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 23:41:30.632056 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 23:41:30.632545 systemd[1]: motdgen.service: Deactivated successfully.
May 14 23:41:30.632797 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 23:41:30.641393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1404)
May 14 23:41:30.649730 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 23:41:30.650154 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 23:41:30.652062 jq[1499]: true
May 14 23:41:30.653362 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 14 23:41:30.677540 update_engine[1495]: I20250514 23:41:30.660478 1495 main.cc:92] Flatcar Update Engine starting
May 14 23:41:30.677540 update_engine[1495]: I20250514 23:41:30.673509 1495 update_check_scheduler.cc:74] Next update check in 6m28s
May 14 23:41:30.671371 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 23:41:30.682691 jq[1504]: true
May 14 23:41:30.685534 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 14 23:41:30.685534 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 1
May 14 23:41:30.685534 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 14 23:41:30.689102 extend-filesystems[1479]: Resized filesystem in /dev/vda9
May 14 23:41:30.687493 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 23:41:30.689152 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 23:41:30.703418 tar[1503]: linux-amd64/LICENSE
May 14 23:41:30.703228 systemd-logind[1493]: Watching system buttons on /dev/input/event1 (Power Button)
May 14 23:41:30.703927 tar[1503]: linux-amd64/helm
May 14 23:41:30.703252 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 23:41:30.706146 systemd-logind[1493]: New seat seat0.
May 14 23:41:30.710897 systemd[1]: Started update-engine.service - Update Engine.
May 14 23:41:30.722567 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 23:41:30.724209 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 14 23:41:30.724233 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 23:41:30.725577 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 23:41:30.725599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 23:41:30.728437 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:41:30.729967 bash[1532]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:41:30.733026 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 23:41:30.736647 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 14 23:41:30.771491 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 23:41:30.879826 containerd[1505]: time="2025-05-14T23:41:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 23:41:30.880934 containerd[1505]: time="2025-05-14T23:41:30.880894231Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 14 23:41:30.894244 containerd[1505]: time="2025-05-14T23:41:30.894171315Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.514µs"
May 14 23:41:30.894244 containerd[1505]: time="2025-05-14T23:41:30.894211901Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 23:41:30.894244 containerd[1505]: time="2025-05-14T23:41:30.894230406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 23:41:30.894471 containerd[1505]: time="2025-05-14T23:41:30.894446081Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 23:41:30.894471 containerd[1505]: time="2025-05-14T23:41:30.894461600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 23:41:30.894533 containerd[1505]: time="2025-05-14T23:41:30.894489392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 23:41:30.894568 containerd[1505]: time="2025-05-14T23:41:30.894556608Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 23:41:30.894597 containerd[1505]: time="2025-05-14T23:41:30.894569252Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 23:41:30.894933 containerd[1505]: time="2025-05-14T23:41:30.894895243Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 23:41:30.894933 containerd[1505]: time="2025-05-14T23:41:30.894913708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 23:41:30.894933 containerd[1505]: time="2025-05-14T23:41:30.894923717Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 23:41:30.894933 containerd[1505]: time="2025-05-14T23:41:30.894932092Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 23:41:30.895048 containerd[1505]: time="2025-05-14T23:41:30.895026710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 23:41:30.895346 containerd[1505]: time="2025-05-14T23:41:30.895307807Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 23:41:30.895346 containerd[1505]: time="2025-05-14T23:41:30.895344255Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 23:41:30.895429 containerd[1505]: time="2025-05-14T23:41:30.895355517Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 23:41:30.895429 containerd[1505]: time="2025-05-14T23:41:30.895394951Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 23:41:30.895653 containerd[1505]: time="2025-05-14T23:41:30.895629280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 23:41:30.895717 containerd[1505]: time="2025-05-14T23:41:30.895696737Z" level=info msg="metadata content store policy set" policy=shared
May 14 23:41:30.902193 containerd[1505]: time="2025-05-14T23:41:30.902149791Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 23:41:30.902193 containerd[1505]: time="2025-05-14T23:41:30.902189505Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902202980Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902216135Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902226815Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902236844Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902251682Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902263764Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902274074Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902284283Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 23:41:30.902287 containerd[1505]: time="2025-05-14T23:41:30.902295384Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 23:41:30.902492 containerd[1505]: time="2025-05-14T23:41:30.902307146Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 23:41:30.902492 containerd[1505]: time="2025-05-14T23:41:30.902423003Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 23:41:30.902492 containerd[1505]: time="2025-05-14T23:41:30.902440576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 23:41:30.902492 containerd[1505]: time="2025-05-14T23:41:30.902455424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 23:41:30.902492 containerd[1505]:
time="2025-05-14T23:41:30.902467767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 23:41:30.902492 containerd[1505]: time="2025-05-14T23:41:30.902478086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 23:41:30.902492 containerd[1505]: time="2025-05-14T23:41:30.902488115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902499396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902509916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902520225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902536406Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902547517Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902615254Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902627998Z" level=info msg="Start snapshots syncer" May 14 23:41:30.902666 containerd[1505]: time="2025-05-14T23:41:30.902646372Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 23:41:30.902908 containerd[1505]: time="2025-05-14T23:41:30.902849613Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 23:41:30.903040 containerd[1505]: time="2025-05-14T23:41:30.902913934Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 23:41:30.903040 containerd[1505]: time="2025-05-14T23:41:30.902974778Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 23:41:30.903086 containerd[1505]: time="2025-05-14T23:41:30.903068584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903091878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903103630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903139016Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903152501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903162640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903173902Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903194480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903205972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903215409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903256587Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903268379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903277336Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 23:41:30.903276 containerd[1505]: time="2025-05-14T23:41:30.903287304Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903297253Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903310027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903329604Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903350683Z" level=info msg="runtime interface created" May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903357576Z" level=info msg="created NRI interface" May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903365621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903376021Z" level=info msg="Connect containerd service" May 14 23:41:30.903675 containerd[1505]: time="2025-05-14T23:41:30.903402210Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 23:41:30.904259 
containerd[1505]: time="2025-05-14T23:41:30.904041950Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:41:30.947952 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 23:41:30.977750 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 23:41:30.981089 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 23:41:30.995633 containerd[1505]: time="2025-05-14T23:41:30.995464723Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 23:41:30.995633 containerd[1505]: time="2025-05-14T23:41:30.995548841Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 23:41:30.995633 containerd[1505]: time="2025-05-14T23:41:30.995582794Z" level=info msg="Start subscribing containerd event" May 14 23:41:30.995633 containerd[1505]: time="2025-05-14T23:41:30.995619323Z" level=info msg="Start recovering state" May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.995977134Z" level=info msg="Start event monitor" May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996010026Z" level=info msg="Start cni network conf syncer for default" May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996025895Z" level=info msg="Start streaming server" May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996036555Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996044841Z" level=info msg="runtime interface starting up..." May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996055501Z" level=info msg="starting plugins..." 
May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996071010Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 23:41:30.996720 containerd[1505]: time="2025-05-14T23:41:30.996243954Z" level=info msg="containerd successfully booted in 0.117198s" May 14 23:41:30.996344 systemd[1]: Started containerd.service - containerd container runtime. May 14 23:41:30.999705 systemd[1]: issuegen.service: Deactivated successfully. May 14 23:41:31.000011 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 23:41:31.003238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 23:41:31.029482 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 23:41:31.032547 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 23:41:31.035031 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 23:41:31.036543 systemd[1]: Reached target getty.target - Login Prompts. May 14 23:41:31.127154 tar[1503]: linux-amd64/README.md May 14 23:41:31.154324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 23:41:31.908317 systemd-networkd[1434]: eth0: Gained IPv6LL May 14 23:41:31.911358 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 23:41:31.913197 systemd[1]: Reached target network-online.target - Network is Online. May 14 23:41:31.915724 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 23:41:31.918047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:31.920512 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 23:41:31.948546 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 23:41:31.950410 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 23:41:31.950668 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 14 23:41:31.953212 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:41:32.596659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:32.598458 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 23:41:32.600567 systemd[1]: Startup finished in 1.283s (kernel) + 6.133s (initrd) + 4.367s (userspace) = 11.785s. May 14 23:41:32.600802 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:41:33.000723 kubelet[1602]: E0514 23:41:33.000515 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:41:33.005192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:41:33.005413 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:41:33.005828 systemd[1]: kubelet.service: Consumed 966ms CPU time, 255.4M memory peak. May 14 23:41:35.135342 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:41:35.149459 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:39778.service - OpenSSH per-connection server daemon (10.0.0.1:39778). May 14 23:41:35.247839 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 39778 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:35.255773 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:35.298450 systemd-logind[1493]: New session 1 of user core. May 14 23:41:35.300348 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 14 23:41:35.306549 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 23:41:35.340684 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 23:41:35.351146 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 23:41:35.373522 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 23:41:35.377378 systemd-logind[1493]: New session c1 of user core. May 14 23:41:35.590910 systemd[1619]: Queued start job for default target default.target. May 14 23:41:35.607811 systemd[1619]: Created slice app.slice - User Application Slice. May 14 23:41:35.607846 systemd[1619]: Reached target paths.target - Paths. May 14 23:41:35.607899 systemd[1619]: Reached target timers.target - Timers. May 14 23:41:35.610188 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 23:41:35.626201 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 23:41:35.626386 systemd[1619]: Reached target sockets.target - Sockets. May 14 23:41:35.626452 systemd[1619]: Reached target basic.target - Basic System. May 14 23:41:35.626507 systemd[1619]: Reached target default.target - Main User Target. May 14 23:41:35.626549 systemd[1619]: Startup finished in 239ms. May 14 23:41:35.626898 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 23:41:35.629224 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 23:41:35.706744 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780). 
May 14 23:41:35.762806 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:35.766298 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:35.779216 systemd-logind[1493]: New session 2 of user core. May 14 23:41:35.791360 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 23:41:35.852496 sshd[1632]: Connection closed by 10.0.0.1 port 39780 May 14 23:41:35.854579 sshd-session[1630]: pam_unix(sshd:session): session closed for user core May 14 23:41:35.872629 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:39780.service: Deactivated successfully. May 14 23:41:35.875893 systemd[1]: session-2.scope: Deactivated successfully. May 14 23:41:35.878374 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. May 14 23:41:35.882717 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:39788.service - OpenSSH per-connection server daemon (10.0.0.1:39788). May 14 23:41:35.883674 systemd-logind[1493]: Removed session 2. May 14 23:41:35.941764 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 39788 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:35.943606 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:35.948847 systemd-logind[1493]: New session 3 of user core. May 14 23:41:35.963265 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 23:41:36.013615 sshd[1640]: Connection closed by 10.0.0.1 port 39788 May 14 23:41:36.014040 sshd-session[1637]: pam_unix(sshd:session): session closed for user core May 14 23:41:36.027234 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:39788.service: Deactivated successfully. May 14 23:41:36.029102 systemd[1]: session-3.scope: Deactivated successfully. May 14 23:41:36.030822 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. 
May 14 23:41:36.032149 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:39796.service - OpenSSH per-connection server daemon (10.0.0.1:39796). May 14 23:41:36.032896 systemd-logind[1493]: Removed session 3. May 14 23:41:36.080489 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 39796 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:36.081836 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:36.085992 systemd-logind[1493]: New session 4 of user core. May 14 23:41:36.095254 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 23:41:36.149408 sshd[1648]: Connection closed by 10.0.0.1 port 39796 May 14 23:41:36.149872 sshd-session[1645]: pam_unix(sshd:session): session closed for user core May 14 23:41:36.162700 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:39796.service: Deactivated successfully. May 14 23:41:36.164405 systemd[1]: session-4.scope: Deactivated successfully. May 14 23:41:36.166074 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. May 14 23:41:36.167415 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:39798.service - OpenSSH per-connection server daemon (10.0.0.1:39798). May 14 23:41:36.168139 systemd-logind[1493]: Removed session 4. May 14 23:41:36.221299 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 39798 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:36.222702 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:36.226710 systemd-logind[1493]: New session 5 of user core. May 14 23:41:36.236242 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 23:41:36.294581 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 23:41:36.294925 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:41:36.309631 sudo[1657]: pam_unix(sudo:session): session closed for user root May 14 23:41:36.311439 sshd[1656]: Connection closed by 10.0.0.1 port 39798 May 14 23:41:36.311785 sshd-session[1653]: pam_unix(sshd:session): session closed for user core May 14 23:41:36.323700 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:39798.service: Deactivated successfully. May 14 23:41:36.325335 systemd[1]: session-5.scope: Deactivated successfully. May 14 23:41:36.326989 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. May 14 23:41:36.328399 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:39808.service - OpenSSH per-connection server daemon (10.0.0.1:39808). May 14 23:41:36.329082 systemd-logind[1493]: Removed session 5. May 14 23:41:36.392677 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 39808 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:36.394531 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:36.398582 systemd-logind[1493]: New session 6 of user core. May 14 23:41:36.407247 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 23:41:36.461983 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 23:41:36.462334 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:41:36.466308 sudo[1667]: pam_unix(sudo:session): session closed for user root May 14 23:41:36.472660 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 23:41:36.472981 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:41:36.483186 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 23:41:36.528597 augenrules[1689]: No rules May 14 23:41:36.531023 systemd[1]: audit-rules.service: Deactivated successfully. May 14 23:41:36.531423 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 23:41:36.533166 sudo[1666]: pam_unix(sudo:session): session closed for user root May 14 23:41:36.535564 sshd[1665]: Connection closed by 10.0.0.1 port 39808 May 14 23:41:36.536081 sshd-session[1662]: pam_unix(sshd:session): session closed for user core May 14 23:41:36.547661 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:39808.service: Deactivated successfully. May 14 23:41:36.550445 systemd[1]: session-6.scope: Deactivated successfully. May 14 23:41:36.553043 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. May 14 23:41:36.554751 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:39816.service - OpenSSH per-connection server daemon (10.0.0.1:39816). May 14 23:41:36.556288 systemd-logind[1493]: Removed session 6. May 14 23:41:36.618367 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 39816 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:41:36.620241 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:41:36.625695 systemd-logind[1493]: New session 7 of user core. 
May 14 23:41:36.642457 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 23:41:36.699690 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 23:41:36.700080 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 23:41:37.029562 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 23:41:37.043426 (dockerd)[1721]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 23:41:37.290910 dockerd[1721]: time="2025-05-14T23:41:37.290748115Z" level=info msg="Starting up" May 14 23:41:37.292982 dockerd[1721]: time="2025-05-14T23:41:37.292940898Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 23:41:37.804718 dockerd[1721]: time="2025-05-14T23:41:37.804654266Z" level=info msg="Loading containers: start." May 14 23:41:38.266076 kernel: Initializing XFRM netlink socket May 14 23:41:38.423713 systemd-networkd[1434]: docker0: Link UP May 14 23:41:38.517287 dockerd[1721]: time="2025-05-14T23:41:38.517078621Z" level=info msg="Loading containers: done." 
May 14 23:41:38.541421 dockerd[1721]: time="2025-05-14T23:41:38.541338565Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 23:41:38.541640 dockerd[1721]: time="2025-05-14T23:41:38.541464652Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 23:41:38.541640 dockerd[1721]: time="2025-05-14T23:41:38.541621396Z" level=info msg="Daemon has completed initialization" May 14 23:41:38.600874 dockerd[1721]: time="2025-05-14T23:41:38.600644206Z" level=info msg="API listen on /run/docker.sock" May 14 23:41:38.600962 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 23:41:39.599512 containerd[1505]: time="2025-05-14T23:41:39.599138365Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 23:41:40.182700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029351553.mount: Deactivated successfully. 
May 14 23:41:41.053512 containerd[1505]: time="2025-05-14T23:41:41.053451862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:41.054353 containerd[1505]: time="2025-05-14T23:41:41.054255098Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 14 23:41:41.055431 containerd[1505]: time="2025-05-14T23:41:41.055400337Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:41.057935 containerd[1505]: time="2025-05-14T23:41:41.057904985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:41.058810 containerd[1505]: time="2025-05-14T23:41:41.058786688Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.459596485s" May 14 23:41:41.058856 containerd[1505]: time="2025-05-14T23:41:41.058815362Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 14 23:41:41.059453 containerd[1505]: time="2025-05-14T23:41:41.059311403Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 23:41:42.177735 containerd[1505]: time="2025-05-14T23:41:42.177654364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:42.178737 containerd[1505]: time="2025-05-14T23:41:42.178683956Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 14 23:41:42.180101 containerd[1505]: time="2025-05-14T23:41:42.180052192Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:42.183053 containerd[1505]: time="2025-05-14T23:41:42.182977509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:42.184079 containerd[1505]: time="2025-05-14T23:41:42.184047366Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.124711227s" May 14 23:41:42.184141 containerd[1505]: time="2025-05-14T23:41:42.184079336Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 14 23:41:42.184730 containerd[1505]: time="2025-05-14T23:41:42.184704979Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 23:41:43.072643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 23:41:43.074292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:43.245888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:41:43.249918 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:41:43.419426 kubelet[1994]: E0514 23:41:43.419278 1994 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:41:43.426429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:41:43.426797 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:41:43.427319 systemd[1]: kubelet.service: Consumed 215ms CPU time, 104.5M memory peak. May 14 23:41:45.709948 containerd[1505]: time="2025-05-14T23:41:45.709863118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:45.732888 containerd[1505]: time="2025-05-14T23:41:45.732803478Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 14 23:41:45.759994 containerd[1505]: time="2025-05-14T23:41:45.759932644Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:45.784228 containerd[1505]: time="2025-05-14T23:41:45.784153586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:45.785418 containerd[1505]: time="2025-05-14T23:41:45.785362644Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 3.600620876s" May 14 23:41:45.785471 containerd[1505]: time="2025-05-14T23:41:45.785418589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 14 23:41:45.785968 containerd[1505]: time="2025-05-14T23:41:45.785941950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 23:41:46.890562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630961884.mount: Deactivated successfully. May 14 23:41:47.155848 containerd[1505]: time="2025-05-14T23:41:47.155697140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:47.157576 containerd[1505]: time="2025-05-14T23:41:47.157503057Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 14 23:41:47.159099 containerd[1505]: time="2025-05-14T23:41:47.159047574Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:47.160794 containerd[1505]: time="2025-05-14T23:41:47.160760587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:47.161245 containerd[1505]: time="2025-05-14T23:41:47.161209289Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.375232734s" May 14 23:41:47.161245 containerd[1505]: time="2025-05-14T23:41:47.161241119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 14 23:41:47.161737 containerd[1505]: time="2025-05-14T23:41:47.161709868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 23:41:47.704416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325423584.mount: Deactivated successfully. May 14 23:41:48.372265 containerd[1505]: time="2025-05-14T23:41:48.372205331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:48.373068 containerd[1505]: time="2025-05-14T23:41:48.373023536Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 14 23:41:48.374205 containerd[1505]: time="2025-05-14T23:41:48.374179635Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:48.376686 containerd[1505]: time="2025-05-14T23:41:48.376656030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:48.377541 containerd[1505]: time="2025-05-14T23:41:48.377493480Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.215752855s" May 14 23:41:48.377541 containerd[1505]: time="2025-05-14T23:41:48.377539046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 14 23:41:48.378311 containerd[1505]: time="2025-05-14T23:41:48.378276930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:41:49.939105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479691060.mount: Deactivated successfully. May 14 23:41:50.194973 containerd[1505]: time="2025-05-14T23:41:50.194799146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:50.235274 containerd[1505]: time="2025-05-14T23:41:50.235117492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 23:41:50.260605 containerd[1505]: time="2025-05-14T23:41:50.260542843Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:50.312930 containerd[1505]: time="2025-05-14T23:41:50.312858151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:41:50.313774 containerd[1505]: time="2025-05-14T23:41:50.313723464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.93541228s" May 14 23:41:50.313774 containerd[1505]: time="2025-05-14T23:41:50.313770142Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 23:41:50.314368 containerd[1505]: time="2025-05-14T23:41:50.314342345Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 23:41:51.041935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230328438.mount: Deactivated successfully. May 14 23:41:53.129200 containerd[1505]: time="2025-05-14T23:41:53.129136675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:53.130007 containerd[1505]: time="2025-05-14T23:41:53.129966161Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 14 23:41:53.131333 containerd[1505]: time="2025-05-14T23:41:53.131303720Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:53.133781 containerd[1505]: time="2025-05-14T23:41:53.133752082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:41:53.134737 containerd[1505]: time="2025-05-14T23:41:53.134712343Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.820339882s" May 14 23:41:53.134779 containerd[1505]: time="2025-05-14T23:41:53.134739414Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 23:41:53.572670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:41:53.574382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:53.757255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:53.771607 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:41:53.807255 kubelet[2144]: E0514 23:41:53.807116 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:41:53.810924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:41:53.811265 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:41:53.811687 systemd[1]: kubelet.service: Consumed 193ms CPU time, 103.9M memory peak. May 14 23:41:55.334658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:55.334871 systemd[1]: kubelet.service: Consumed 193ms CPU time, 103.9M memory peak. May 14 23:41:55.337090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:55.361193 systemd[1]: Reload requested from client PID 2171 ('systemctl') (unit session-7.scope)... May 14 23:41:55.361205 systemd[1]: Reloading... May 14 23:41:55.449154 zram_generator::config[2214]: No configuration found. 
May 14 23:41:55.962955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:41:56.064886 systemd[1]: Reloading finished in 703 ms. May 14 23:41:56.121324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:56.124905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:56.125568 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:41:56.125825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:56.125859 systemd[1]: kubelet.service: Consumed 149ms CPU time, 91.8M memory peak. May 14 23:41:56.127357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:41:56.295239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:41:56.300263 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:41:56.337228 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:41:56.337228 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 23:41:56.337228 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:41:56.337500 kubelet[2265]: I0514 23:41:56.337323 2265 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:41:56.683375 kubelet[2265]: I0514 23:41:56.683317 2265 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 23:41:56.683375 kubelet[2265]: I0514 23:41:56.683354 2265 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:41:56.683678 kubelet[2265]: I0514 23:41:56.683651 2265 server.go:954] "Client rotation is on, will bootstrap in background" May 14 23:41:56.707905 kubelet[2265]: E0514 23:41:56.707869 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 14 23:41:56.708953 kubelet[2265]: I0514 23:41:56.708918 2265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:41:56.715062 kubelet[2265]: I0514 23:41:56.715032 2265 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 23:41:56.720053 kubelet[2265]: I0514 23:41:56.720030 2265 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:41:56.721073 kubelet[2265]: I0514 23:41:56.721036 2265 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:41:56.721242 kubelet[2265]: I0514 23:41:56.721066 2265 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 23:41:56.721242 kubelet[2265]: I0514 23:41:56.721238 2265 topology_manager.go:138] "Creating topology manager with none policy" 
May 14 23:41:56.721402 kubelet[2265]: I0514 23:41:56.721248 2265 container_manager_linux.go:304] "Creating device plugin manager" May 14 23:41:56.721429 kubelet[2265]: I0514 23:41:56.721421 2265 state_mem.go:36] "Initialized new in-memory state store" May 14 23:41:56.723724 kubelet[2265]: I0514 23:41:56.723695 2265 kubelet.go:446] "Attempting to sync node with API server" May 14 23:41:56.723766 kubelet[2265]: I0514 23:41:56.723726 2265 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:41:56.723766 kubelet[2265]: I0514 23:41:56.723751 2265 kubelet.go:352] "Adding apiserver pod source" May 14 23:41:56.727166 kubelet[2265]: I0514 23:41:56.727088 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:41:56.729262 kubelet[2265]: W0514 23:41:56.729212 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 14 23:41:56.729302 kubelet[2265]: E0514 23:41:56.729264 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 14 23:41:56.730925 kubelet[2265]: W0514 23:41:56.730297 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 14 23:41:56.730925 kubelet[2265]: E0514 23:41:56.730347 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 14 23:41:56.730925 kubelet[2265]: I0514 23:41:56.730444 2265 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 23:41:56.730925 kubelet[2265]: I0514 23:41:56.730834 2265 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:41:56.731726 kubelet[2265]: W0514 23:41:56.731505 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:41:56.734018 kubelet[2265]: I0514 23:41:56.733993 2265 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 23:41:56.734068 kubelet[2265]: I0514 23:41:56.734032 2265 server.go:1287] "Started kubelet" May 14 23:41:56.735405 kubelet[2265]: I0514 23:41:56.735188 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.735633 2265 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.735697 2265 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.736213 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.736620 2265 server.go:490] "Adding debug handlers to kubelet server" May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.736819 2265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:41:56.738142 kubelet[2265]: E0514 23:41:56.737495 2265 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.737543 2265 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.737713 2265 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 23:41:56.738142 kubelet[2265]: I0514 23:41:56.737772 2265 reconciler.go:26] "Reconciler: start to sync state" May 14 23:41:56.738142 kubelet[2265]: W0514 23:41:56.738015 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 14 23:41:56.738446 kubelet[2265]: E0514 23:41:56.737183 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f89469801be17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 23:41:56.734008855 +0000 UTC m=+0.429697591,LastTimestamp:2025-05-14 23:41:56.734008855 +0000 UTC m=+0.429697591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 23:41:56.738446 kubelet[2265]: E0514 23:41:56.738053 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" 
logger="UnhandledError" May 14 23:41:56.738446 kubelet[2265]: E0514 23:41:56.738249 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" May 14 23:41:56.738941 kubelet[2265]: I0514 23:41:56.738916 2265 factory.go:221] Registration of the systemd container factory successfully May 14 23:41:56.739007 kubelet[2265]: I0514 23:41:56.738984 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:41:56.739959 kubelet[2265]: I0514 23:41:56.739918 2265 factory.go:221] Registration of the containerd container factory successfully May 14 23:41:56.739959 kubelet[2265]: E0514 23:41:56.739956 2265 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:41:56.752954 kubelet[2265]: I0514 23:41:56.752925 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:41:56.754542 kubelet[2265]: I0514 23:41:56.754256 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:41:56.754542 kubelet[2265]: I0514 23:41:56.754300 2265 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 23:41:56.754542 kubelet[2265]: I0514 23:41:56.754318 2265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 14 23:41:56.754542 kubelet[2265]: I0514 23:41:56.754324 2265 kubelet.go:2388] "Starting kubelet main sync loop" May 14 23:41:56.754907 kubelet[2265]: W0514 23:41:56.754863 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused May 14 23:41:56.754957 kubelet[2265]: E0514 23:41:56.754912 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" May 14 23:41:56.755010 kubelet[2265]: E0514 23:41:56.754972 2265 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:41:56.756958 kubelet[2265]: I0514 23:41:56.756935 2265 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 23:41:56.756958 kubelet[2265]: I0514 23:41:56.756953 2265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 23:41:56.757048 kubelet[2265]: I0514 23:41:56.756972 2265 state_mem.go:36] "Initialized new in-memory state store" May 14 23:41:56.837903 kubelet[2265]: E0514 23:41:56.837865 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:41:56.855095 kubelet[2265]: E0514 23:41:56.855050 2265 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:41:56.938407 kubelet[2265]: E0514 23:41:56.938298 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:41:56.938677 kubelet[2265]: E0514 23:41:56.938636 2265 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" May 14 23:41:57.039225 kubelet[2265]: E0514 23:41:57.039200 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 23:41:57.055426 kubelet[2265]: E0514 23:41:57.055382 2265 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:41:57.083997 kubelet[2265]: I0514 23:41:57.083962 2265 policy_none.go:49] "None policy: Start" May 14 23:41:57.083997 kubelet[2265]: I0514 23:41:57.083984 2265 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 23:41:57.083997 kubelet[2265]: I0514 23:41:57.084000 2265 state_mem.go:35] "Initializing new in-memory state store" May 14 23:41:57.091660 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:41:57.105171 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:41:57.107975 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 23:41:57.119980 kubelet[2265]: I0514 23:41:57.119951 2265 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:41:57.120171 kubelet[2265]: I0514 23:41:57.120149 2265 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:41:57.120276 kubelet[2265]: I0514 23:41:57.120164 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:41:57.120434 kubelet[2265]: I0514 23:41:57.120386 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:41:57.121026 kubelet[2265]: E0514 23:41:57.121001 2265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 23:41:57.121093 kubelet[2265]: E0514 23:41:57.121042 2265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 23:41:57.222008 kubelet[2265]: I0514 23:41:57.221909 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:41:57.222310 kubelet[2265]: E0514 23:41:57.222275 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 14 23:41:57.339479 kubelet[2265]: E0514 23:41:57.339438 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" May 14 23:41:57.423484 kubelet[2265]: I0514 23:41:57.423461 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 14 23:41:57.423825 kubelet[2265]: E0514 23:41:57.423769 2265 kubelet_node_status.go:108] "Unable to 
register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" May 14 23:41:57.464276 systemd[1]: Created slice kubepods-burstable-podf3d062b4e777add671d7d0151c2c6c03.slice - libcontainer container kubepods-burstable-podf3d062b4e777add671d7d0151c2c6c03.slice. May 14 23:41:57.474043 kubelet[2265]: E0514 23:41:57.473958 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:41:57.476403 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 14 23:41:57.477947 kubelet[2265]: E0514 23:41:57.477917 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 14 23:41:57.492797 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 14 23:41:57.494398 kubelet[2265]: E0514 23:41:57.494365 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:41:57.541664 kubelet[2265]: I0514 23:41:57.541625 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3d062b4e777add671d7d0151c2c6c03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f3d062b4e777add671d7d0151c2c6c03\") " pod="kube-system/kube-apiserver-localhost"
May 14 23:41:57.541664 kubelet[2265]: I0514 23:41:57.541650 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:41:57.541764 kubelet[2265]: I0514 23:41:57.541669 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:41:57.541764 kubelet[2265]: I0514 23:41:57.541686 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 14 23:41:57.541764 kubelet[2265]: I0514 23:41:57.541701 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3d062b4e777add671d7d0151c2c6c03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3d062b4e777add671d7d0151c2c6c03\") " pod="kube-system/kube-apiserver-localhost"
May 14 23:41:57.541764 kubelet[2265]: I0514 23:41:57.541715 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3d062b4e777add671d7d0151c2c6c03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3d062b4e777add671d7d0151c2c6c03\") " pod="kube-system/kube-apiserver-localhost"
May 14 23:41:57.541764 kubelet[2265]: I0514 23:41:57.541730 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:41:57.541880 kubelet[2265]: I0514 23:41:57.541747 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:41:57.541880 kubelet[2265]: I0514 23:41:57.541765 2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:41:57.775015 kubelet[2265]: E0514 23:41:57.774914 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:57.775598 containerd[1505]: time="2025-05-14T23:41:57.775545550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f3d062b4e777add671d7d0151c2c6c03,Namespace:kube-system,Attempt:0,}"
May 14 23:41:57.778813 kubelet[2265]: E0514 23:41:57.778769 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:57.779205 containerd[1505]: time="2025-05-14T23:41:57.779168405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 14 23:41:57.795484 kubelet[2265]: E0514 23:41:57.795456 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:57.795810 containerd[1505]: time="2025-05-14T23:41:57.795773932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 14 23:41:57.824822 kubelet[2265]: I0514 23:41:57.824782 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 23:41:57.825106 kubelet[2265]: E0514 23:41:57.825069 2265 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
May 14 23:41:57.840777 kubelet[2265]: W0514 23:41:57.840713 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
May 14 23:41:57.840837 kubelet[2265]: E0514 23:41:57.840783 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
May 14 23:41:57.959072 kubelet[2265]: W0514 23:41:57.959011 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
May 14 23:41:57.959072 kubelet[2265]: E0514 23:41:57.959053 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
May 14 23:41:58.057258 containerd[1505]: time="2025-05-14T23:41:58.056922883Z" level=info msg="connecting to shim 7794685b32ed59a2dbdef8a8d7c88626ba332bb4abe4f2467b7731f261fd9aac" address="unix:///run/containerd/s/e1564caad6fe4502e8338fc0bdecc04e7d67e915029a43d1ea31365baa5b4445" namespace=k8s.io protocol=ttrpc version=3
May 14 23:41:58.057258 containerd[1505]: time="2025-05-14T23:41:58.057063908Z" level=info msg="connecting to shim 2eb59918051ea3fefa09137d3fa932e5736264765fcef8514c8410fe016b1796" address="unix:///run/containerd/s/c437696f13ec92669dcd59a69b24cb41841a816a02587baa837cdb293c8a05c2" namespace=k8s.io protocol=ttrpc version=3
May 14 23:41:58.061278 containerd[1505]: time="2025-05-14T23:41:58.061247305Z" level=info msg="connecting to shim 5934522457ad53b14be6ab4d402f611bf64124e921c341c2207240705f195dfb" address="unix:///run/containerd/s/cf43e16945c9c6e9ba7f2935d91ee60434c3c5b7e7ba86c171c067630bd7dda2" namespace=k8s.io protocol=ttrpc version=3
May 14 23:41:58.082294 systemd[1]: Started cri-containerd-7794685b32ed59a2dbdef8a8d7c88626ba332bb4abe4f2467b7731f261fd9aac.scope - libcontainer container 7794685b32ed59a2dbdef8a8d7c88626ba332bb4abe4f2467b7731f261fd9aac.
May 14 23:41:58.086881 systemd[1]: Started cri-containerd-2eb59918051ea3fefa09137d3fa932e5736264765fcef8514c8410fe016b1796.scope - libcontainer container 2eb59918051ea3fefa09137d3fa932e5736264765fcef8514c8410fe016b1796.
May 14 23:41:58.091708 systemd[1]: Started cri-containerd-5934522457ad53b14be6ab4d402f611bf64124e921c341c2207240705f195dfb.scope - libcontainer container 5934522457ad53b14be6ab4d402f611bf64124e921c341c2207240705f195dfb.
May 14 23:41:58.098775 kubelet[2265]: W0514 23:41:58.098721 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
May 14 23:41:58.098854 kubelet[2265]: E0514 23:41:58.098784 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
May 14 23:41:58.131316 containerd[1505]: time="2025-05-14T23:41:58.130944437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"7794685b32ed59a2dbdef8a8d7c88626ba332bb4abe4f2467b7731f261fd9aac\""
May 14 23:41:58.132365 kubelet[2265]: E0514 23:41:58.132107 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:58.137330 containerd[1505]: time="2025-05-14T23:41:58.136994364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eb59918051ea3fefa09137d3fa932e5736264765fcef8514c8410fe016b1796\""
May 14 23:41:58.137786 containerd[1505]: time="2025-05-14T23:41:58.137703525Z" level=info msg="CreateContainer within sandbox \"7794685b32ed59a2dbdef8a8d7c88626ba332bb4abe4f2467b7731f261fd9aac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 23:41:58.138819 kubelet[2265]: E0514 23:41:58.138780 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:58.140702 kubelet[2265]: E0514 23:41:58.140565 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s"
May 14 23:41:58.141463 containerd[1505]: time="2025-05-14T23:41:58.141443309Z" level=info msg="CreateContainer within sandbox \"2eb59918051ea3fefa09137d3fa932e5736264765fcef8514c8410fe016b1796\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 23:41:58.148858 containerd[1505]: time="2025-05-14T23:41:58.148818684Z" level=info msg="Container 774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508: CDI devices from CRI Config.CDIDevices: []"
May 14 23:41:58.153744 containerd[1505]: time="2025-05-14T23:41:58.153718434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f3d062b4e777add671d7d0151c2c6c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"5934522457ad53b14be6ab4d402f611bf64124e921c341c2207240705f195dfb\""
May 14 23:41:58.154337 kubelet[2265]: E0514 23:41:58.154299 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:58.155462 containerd[1505]: time="2025-05-14T23:41:58.155436366Z" level=info msg="CreateContainer within sandbox \"5934522457ad53b14be6ab4d402f611bf64124e921c341c2207240705f195dfb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 23:41:58.160274 containerd[1505]: time="2025-05-14T23:41:58.160241519Z" level=info msg="Container 2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf: CDI devices from CRI Config.CDIDevices: []"
May 14 23:41:58.164986 containerd[1505]: time="2025-05-14T23:41:58.164945332Z" level=info msg="CreateContainer within sandbox \"7794685b32ed59a2dbdef8a8d7c88626ba332bb4abe4f2467b7731f261fd9aac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508\""
May 14 23:41:58.165524 containerd[1505]: time="2025-05-14T23:41:58.165493531Z" level=info msg="StartContainer for \"774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508\""
May 14 23:41:58.166644 containerd[1505]: time="2025-05-14T23:41:58.166610366Z" level=info msg="connecting to shim 774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508" address="unix:///run/containerd/s/e1564caad6fe4502e8338fc0bdecc04e7d67e915029a43d1ea31365baa5b4445" protocol=ttrpc version=3
May 14 23:41:58.170490 containerd[1505]: time="2025-05-14T23:41:58.170458514Z" level=info msg="CreateContainer within sandbox \"2eb59918051ea3fefa09137d3fa932e5736264765fcef8514c8410fe016b1796\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf\""
May 14 23:41:58.171140 containerd[1505]: time="2025-05-14T23:41:58.171078577Z" level=info msg="StartContainer for \"2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf\""
May 14 23:41:58.172246 containerd[1505]: time="2025-05-14T23:41:58.172149355Z" level=info msg="connecting to shim 2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf" address="unix:///run/containerd/s/c437696f13ec92669dcd59a69b24cb41841a816a02587baa837cdb293c8a05c2" protocol=ttrpc version=3
May 14 23:41:58.173374 containerd[1505]: time="2025-05-14T23:41:58.173120888Z" level=info msg="Container 42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1: CDI devices from CRI Config.CDIDevices: []"
May 14 23:41:58.179940 containerd[1505]: time="2025-05-14T23:41:58.179806778Z" level=info msg="CreateContainer within sandbox \"5934522457ad53b14be6ab4d402f611bf64124e921c341c2207240705f195dfb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1\""
May 14 23:41:58.180120 containerd[1505]: time="2025-05-14T23:41:58.180097243Z" level=info msg="StartContainer for \"42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1\""
May 14 23:41:58.181144 containerd[1505]: time="2025-05-14T23:41:58.181032397Z" level=info msg="connecting to shim 42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1" address="unix:///run/containerd/s/cf43e16945c9c6e9ba7f2935d91ee60434c3c5b7e7ba86c171c067630bd7dda2" protocol=ttrpc version=3
May 14 23:41:58.186380 systemd[1]: Started cri-containerd-774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508.scope - libcontainer container 774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508.
May 14 23:41:58.189719 systemd[1]: Started cri-containerd-2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf.scope - libcontainer container 2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf.
May 14 23:41:58.195927 systemd[1]: Started cri-containerd-42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1.scope - libcontainer container 42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1.
May 14 23:41:58.245113 containerd[1505]: time="2025-05-14T23:41:58.245069422Z" level=info msg="StartContainer for \"2f170093245d387855029a40939964dde89744ef6d13473b7c08354dc0ff62cf\" returns successfully"
May 14 23:41:58.246331 containerd[1505]: time="2025-05-14T23:41:58.246293809Z" level=info msg="StartContainer for \"774c1a318b1ff32431c9c4257684bbc206ecde9a8a2a4aa216aa5c666060f508\" returns successfully"
May 14 23:41:58.249486 containerd[1505]: time="2025-05-14T23:41:58.249430452Z" level=info msg="StartContainer for \"42766b9cbe37d5ad2f83bd6ef9c891664ddd243c757a3e29745c5b274473fbd1\" returns successfully"
May 14 23:41:58.257047 kubelet[2265]: W0514 23:41:58.256922 2265 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
May 14 23:41:58.257223 kubelet[2265]: E0514 23:41:58.257180 2265 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
May 14 23:41:58.628841 kubelet[2265]: I0514 23:41:58.628350 2265 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 23:41:58.766803 kubelet[2265]: E0514 23:41:58.766575 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:41:58.766803 kubelet[2265]: E0514 23:41:58.766693 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:58.767063 kubelet[2265]: E0514 23:41:58.766882 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:41:58.767860 kubelet[2265]: E0514 23:41:58.767709 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:58.769273 kubelet[2265]: E0514 23:41:58.769096 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:41:58.769273 kubelet[2265]: E0514 23:41:58.769202 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:59.254916 kubelet[2265]: I0514 23:41:59.254860 2265 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 14 23:41:59.254916 kubelet[2265]: E0514 23:41:59.254909 2265 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 14 23:41:59.258874 kubelet[2265]: E0514 23:41:59.258847 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.359358 kubelet[2265]: E0514 23:41:59.359306 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.459976 kubelet[2265]: E0514 23:41:59.459927 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.560211 kubelet[2265]: E0514 23:41:59.560047 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.660673 kubelet[2265]: E0514 23:41:59.660638 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.760864 kubelet[2265]: E0514 23:41:59.760822 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.770405 kubelet[2265]: E0514 23:41:59.770375 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:41:59.770505 kubelet[2265]: E0514 23:41:59.770478 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:59.770658 kubelet[2265]: E0514 23:41:59.770626 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:41:59.770719 kubelet[2265]: E0514 23:41:59.770702 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:41:59.861521 kubelet[2265]: E0514 23:41:59.861475 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:41:59.962183 kubelet[2265]: E0514 23:41:59.962115 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.062755 kubelet[2265]: E0514 23:42:00.062696 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.163680 kubelet[2265]: E0514 23:42:00.163556 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.264071 kubelet[2265]: E0514 23:42:00.264023 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.364652 kubelet[2265]: E0514 23:42:00.364601 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.465307 kubelet[2265]: E0514 23:42:00.465165 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.566176 kubelet[2265]: E0514 23:42:00.566115 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.666614 kubelet[2265]: E0514 23:42:00.666586 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.767218 kubelet[2265]: E0514 23:42:00.767085 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.771296 kubelet[2265]: E0514 23:42:00.771277 2265 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 14 23:42:00.771391 kubelet[2265]: E0514 23:42:00.771375 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:00.868027 kubelet[2265]: E0514 23:42:00.867976 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:00.968515 kubelet[2265]: E0514 23:42:00.968484 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:01.069073 kubelet[2265]: E0514 23:42:01.068989 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:01.169743 kubelet[2265]: E0514 23:42:01.169706 2265 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:01.235520 systemd[1]: Reload requested from client PID 2538 ('systemctl') (unit session-7.scope)...
May 14 23:42:01.235536 systemd[1]: Reloading...
May 14 23:42:01.313174 zram_generator::config[2582]: No configuration found.
May 14 23:42:01.338460 kubelet[2265]: I0514 23:42:01.338429 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 14 23:42:01.346986 kubelet[2265]: I0514 23:42:01.346948 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 14 23:42:01.350624 kubelet[2265]: I0514 23:42:01.350587 2265 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 14 23:42:01.423083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:42:01.556579 systemd[1]: Reloading finished in 320 ms.
May 14 23:42:01.577531 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:42:01.597842 systemd[1]: kubelet.service: Deactivated successfully.
May 14 23:42:01.598171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:42:01.598234 systemd[1]: kubelet.service: Consumed 845ms CPU time, 127.2M memory peak.
May 14 23:42:01.600255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:42:01.813038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:42:01.827608 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 23:42:01.872092 kubelet[2627]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:42:01.872092 kubelet[2627]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 23:42:01.872092 kubelet[2627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 23:42:01.872627 kubelet[2627]: I0514 23:42:01.872076 2627 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 23:42:01.880033 kubelet[2627]: I0514 23:42:01.879796 2627 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 23:42:01.880033 kubelet[2627]: I0514 23:42:01.879828 2627 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 23:42:01.880220 kubelet[2627]: I0514 23:42:01.880148 2627 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 23:42:01.881414 kubelet[2627]: I0514 23:42:01.881392 2627 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 23:42:01.883609 kubelet[2627]: I0514 23:42:01.883572 2627 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 23:42:01.887242 kubelet[2627]: I0514 23:42:01.887187 2627 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 23:42:01.893029 kubelet[2627]: I0514 23:42:01.892990 2627 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 23:42:01.893282 kubelet[2627]: I0514 23:42:01.893240 2627 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 23:42:01.893427 kubelet[2627]: I0514 23:42:01.893270 2627 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 23:42:01.893427 kubelet[2627]: I0514 23:42:01.893423 2627 topology_manager.go:138] "Creating topology manager with none policy"
May 14 23:42:01.893533 kubelet[2627]: I0514 23:42:01.893431 2627 container_manager_linux.go:304] "Creating device plugin manager"
May 14 23:42:01.893533 kubelet[2627]: I0514 23:42:01.893468 2627 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:42:01.893642 kubelet[2627]: I0514 23:42:01.893623 2627 kubelet.go:446] "Attempting to sync node with API server"
May 14 23:42:01.893642 kubelet[2627]: I0514 23:42:01.893636 2627 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 23:42:01.893707 kubelet[2627]: I0514 23:42:01.893655 2627 kubelet.go:352] "Adding apiserver pod source"
May 14 23:42:01.893707 kubelet[2627]: I0514 23:42:01.893666 2627 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 23:42:01.894192 kubelet[2627]: I0514 23:42:01.894167 2627 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 23:42:01.895696 kubelet[2627]: I0514 23:42:01.894502 2627 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 23:42:01.895696 kubelet[2627]: I0514 23:42:01.894909 2627 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 14 23:42:01.895696 kubelet[2627]: I0514 23:42:01.894933 2627 server.go:1287] "Started kubelet"
May 14 23:42:01.895696 kubelet[2627]: I0514 23:42:01.895277 2627 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 23:42:01.895696 kubelet[2627]: I0514 23:42:01.895647 2627 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 23:42:01.898008 kubelet[2627]: E0514 23:42:01.897990 2627 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 23:42:01.898429 kubelet[2627]: I0514 23:42:01.898405 2627 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 23:42:01.899417 kubelet[2627]: I0514 23:42:01.895666 2627 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 14 23:42:01.900424 kubelet[2627]: I0514 23:42:01.900397 2627 server.go:490] "Adding debug handlers to kubelet server"
May 14 23:42:01.903106 kubelet[2627]: I0514 23:42:01.901383 2627 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 23:42:01.906066 kubelet[2627]: I0514 23:42:01.905762 2627 factory.go:221] Registration of the systemd container factory successfully
May 14 23:42:01.906503 kubelet[2627]: I0514 23:42:01.901424 2627 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 14 23:42:01.906883 kubelet[2627]: E0514 23:42:01.903141 2627 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 14 23:42:01.907064 kubelet[2627]: I0514 23:42:01.907024 2627 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 23:42:01.907208 kubelet[2627]: I0514 23:42:01.907192 2627 reconciler.go:26] "Reconciler: start to sync state"
May 14 23:42:01.907559 kubelet[2627]: I0514 23:42:01.901450 2627 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 23:42:01.910661 kubelet[2627]: I0514 23:42:01.910541 2627 factory.go:221] Registration of the containerd container factory successfully
May 14 23:42:01.919047 kubelet[2627]: I0514 23:42:01.918991 2627 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 23:42:01.921253 kubelet[2627]: I0514 23:42:01.921232 2627 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 23:42:01.921305 kubelet[2627]: I0514 23:42:01.921284 2627 status_manager.go:227] "Starting to sync pod status with apiserver"
May 14 23:42:01.922521 kubelet[2627]: I0514 23:42:01.921449 2627 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 14 23:42:01.922521 kubelet[2627]: I0514 23:42:01.921469 2627 kubelet.go:2388] "Starting kubelet main sync loop"
May 14 23:42:01.922521 kubelet[2627]: E0514 23:42:01.921530 2627 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 23:42:01.946776 kubelet[2627]: I0514 23:42:01.946737 2627 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 14 23:42:01.946776 kubelet[2627]: I0514 23:42:01.946759 2627 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 14 23:42:01.946776 kubelet[2627]: I0514 23:42:01.946777 2627 state_mem.go:36] "Initialized new in-memory state store"
May 14 23:42:01.946962 kubelet[2627]: I0514 23:42:01.946927 2627 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 23:42:01.946962 kubelet[2627]: I0514 23:42:01.946940 2627 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 23:42:01.946962 kubelet[2627]: I0514 23:42:01.946962 2627 policy_none.go:49] "None policy: Start"
May 14 23:42:01.947042 kubelet[2627]: I0514 23:42:01.946973 2627 memory_manager.go:186] "Starting memorymanager" policy="None"
May 14 23:42:01.947042 kubelet[2627]: I0514 23:42:01.946986 2627 state_mem.go:35] "Initializing new in-memory state store"
May 14 23:42:01.947117 kubelet[2627]: I0514 23:42:01.947100 2627 state_mem.go:75] "Updated machine memory state"
May 14 23:42:01.951256 kubelet[2627]: I0514 23:42:01.951220 2627 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 23:42:01.951433 kubelet[2627]: I0514 23:42:01.951412 2627 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 23:42:01.951470 kubelet[2627]: I0514 23:42:01.951427 2627 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 23:42:01.951628 kubelet[2627]: I0514 23:42:01.951608 2627 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 23:42:01.952562 kubelet[2627]: E0514 23:42:01.952528 2627 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 14 23:42:02.022282 kubelet[2627]: I0514 23:42:02.022202 2627 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 14 23:42:02.022450 kubelet[2627]: I0514 23:42:02.022362 2627 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 14 23:42:02.022450 kubelet[2627]: I0514 23:42:02.022394 2627 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 14 23:42:02.027486 kubelet[2627]: E0514 23:42:02.027445 2627 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 14 23:42:02.027890 kubelet[2627]: E0514 23:42:02.027874 2627 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 14 23:42:02.028404 kubelet[2627]: E0514 23:42:02.028370 2627 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 14 23:42:02.056569 kubelet[2627]: I0514 23:42:02.056534 2627 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 14 23:42:02.064024 kubelet[2627]: I0514 23:42:02.063983 2627 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 14 23:42:02.064188 kubelet[2627]: I0514 23:42:02.064060 2627 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 14 23:42:02.208275 kubelet[2627]: I0514 23:42:02.208075 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3d062b4e777add671d7d0151c2c6c03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3d062b4e777add671d7d0151c2c6c03\") " pod="kube-system/kube-apiserver-localhost"
May 14 23:42:02.208275 kubelet[2627]: I0514 23:42:02.208121 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:42:02.208275 kubelet[2627]: I0514 23:42:02.208228 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 14 23:42:02.208562 kubelet[2627]: I0514 23:42:02.208278 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod
\"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:42:02.208562 kubelet[2627]: I0514 23:42:02.208304 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:42:02.208562 kubelet[2627]: I0514 23:42:02.208332 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 14 23:42:02.208562 kubelet[2627]: I0514 23:42:02.208352 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3d062b4e777add671d7d0151c2c6c03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f3d062b4e777add671d7d0151c2c6c03\") " pod="kube-system/kube-apiserver-localhost" May 14 23:42:02.208562 kubelet[2627]: I0514 23:42:02.208390 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3d062b4e777add671d7d0151c2c6c03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f3d062b4e777add671d7d0151c2c6c03\") " pod="kube-system/kube-apiserver-localhost" May 14 23:42:02.208709 kubelet[2627]: I0514 23:42:02.208410 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 14 23:42:02.239494 sudo[2664]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:42:02.239928 sudo[2664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:42:02.328608 kubelet[2627]: E0514 23:42:02.328558 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.328608 kubelet[2627]: E0514 23:42:02.328606 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.328769 kubelet[2627]: E0514 23:42:02.328687 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.702340 sudo[2664]: pam_unix(sudo:session): session closed for user root May 14 23:42:02.894943 kubelet[2627]: I0514 23:42:02.894896 2627 apiserver.go:52] "Watching apiserver" May 14 23:42:02.908239 kubelet[2627]: I0514 23:42:02.908192 2627 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 23:42:02.934594 kubelet[2627]: I0514 23:42:02.934176 2627 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 14 23:42:02.934594 kubelet[2627]: I0514 23:42:02.934348 2627 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 14 23:42:02.934594 kubelet[2627]: E0514 23:42:02.934387 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.940630 kubelet[2627]: E0514 23:42:02.940455 2627 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 23:42:02.940921 kubelet[2627]: E0514 23:42:02.940896 2627 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 23:42:02.941073 kubelet[2627]: E0514 23:42:02.941058 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.941372 kubelet[2627]: E0514 23:42:02.941316 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:02.950692 kubelet[2627]: I0514 23:42:02.950478 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.950469378 podStartE2EDuration="1.950469378s" podCreationTimestamp="2025-05-14 23:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:02.950297445 +0000 UTC m=+1.118453390" watchObservedRunningTime="2025-05-14 23:42:02.950469378 +0000 UTC m=+1.118625323" May 14 23:42:02.968295 kubelet[2627]: I0514 23:42:02.968154 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9681168690000002 podStartE2EDuration="1.968116869s" podCreationTimestamp="2025-05-14 23:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 
23:42:02.960464686 +0000 UTC m=+1.128620631" watchObservedRunningTime="2025-05-14 23:42:02.968116869 +0000 UTC m=+1.136272814" May 14 23:42:02.974745 kubelet[2627]: I0514 23:42:02.974694 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.974682515 podStartE2EDuration="1.974682515s" podCreationTimestamp="2025-05-14 23:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:02.968659838 +0000 UTC m=+1.136815783" watchObservedRunningTime="2025-05-14 23:42:02.974682515 +0000 UTC m=+1.142838450" May 14 23:42:03.935777 kubelet[2627]: E0514 23:42:03.935738 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:03.938760 kubelet[2627]: E0514 23:42:03.935848 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:04.162990 sudo[1701]: pam_unix(sudo:session): session closed for user root May 14 23:42:04.164720 sshd[1700]: Connection closed by 10.0.0.1 port 39816 May 14 23:42:04.165114 sshd-session[1697]: pam_unix(sshd:session): session closed for user core May 14 23:42:04.169965 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:39816.service: Deactivated successfully. May 14 23:42:04.172398 systemd[1]: session-7.scope: Deactivated successfully. May 14 23:42:04.172610 systemd[1]: session-7.scope: Consumed 4.294s CPU time, 252.7M memory peak. May 14 23:42:04.173991 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. May 14 23:42:04.174989 systemd-logind[1493]: Removed session 7. 
May 14 23:42:04.938810 kubelet[2627]: E0514 23:42:04.938765 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:05.022487 kubelet[2627]: E0514 23:42:05.022442 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:06.053766 kubelet[2627]: I0514 23:42:06.053726 2627 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 23:42:06.054200 kubelet[2627]: I0514 23:42:06.054147 2627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 23:42:06.054227 containerd[1505]: time="2025-05-14T23:42:06.053996758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 23:42:07.089162 kubelet[2627]: I0514 23:42:07.088869 2627 status_manager.go:890] "Failed to get status for pod" podUID="4812c3d3-5eab-4ce2-a743-4f8094db9082" pod="kube-system/kube-proxy-wdffq" err="pods \"kube-proxy-wdffq\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object"
May 14 23:42:07.101214 systemd[1]: Created slice kubepods-besteffort-pod4812c3d3_5eab_4ce2_a743_4f8094db9082.slice - libcontainer container kubepods-besteffort-pod4812c3d3_5eab_4ce2_a743_4f8094db9082.slice.
May 14 23:42:07.112951 systemd[1]: Created slice kubepods-burstable-pod45122a94_3b40_4c2e_8c24_172ce26e8ea7.slice - libcontainer container kubepods-burstable-pod45122a94_3b40_4c2e_8c24_172ce26e8ea7.slice.
May 14 23:42:07.139867 kubelet[2627]: I0514 23:42:07.139810 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4812c3d3-5eab-4ce2-a743-4f8094db9082-xtables-lock\") pod \"kube-proxy-wdffq\" (UID: \"4812c3d3-5eab-4ce2-a743-4f8094db9082\") " pod="kube-system/kube-proxy-wdffq"
May 14 23:42:07.139867 kubelet[2627]: I0514 23:42:07.139853 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4812c3d3-5eab-4ce2-a743-4f8094db9082-lib-modules\") pod \"kube-proxy-wdffq\" (UID: \"4812c3d3-5eab-4ce2-a743-4f8094db9082\") " pod="kube-system/kube-proxy-wdffq"
May 14 23:42:07.139867 kubelet[2627]: I0514 23:42:07.139878 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cltqn\" (UniqueName: \"kubernetes.io/projected/4812c3d3-5eab-4ce2-a743-4f8094db9082-kube-api-access-cltqn\") pod \"kube-proxy-wdffq\" (UID: \"4812c3d3-5eab-4ce2-a743-4f8094db9082\") " pod="kube-system/kube-proxy-wdffq"
May 14 23:42:07.140075 kubelet[2627]: I0514 23:42:07.139899 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-etc-cni-netd\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140075 kubelet[2627]: I0514 23:42:07.139926 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-net\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140075 kubelet[2627]: I0514 23:42:07.139951 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4812c3d3-5eab-4ce2-a743-4f8094db9082-kube-proxy\") pod \"kube-proxy-wdffq\" (UID: \"4812c3d3-5eab-4ce2-a743-4f8094db9082\") " pod="kube-system/kube-proxy-wdffq"
May 14 23:42:07.140075 kubelet[2627]: I0514 23:42:07.139970 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-config-path\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140075 kubelet[2627]: I0514 23:42:07.139988 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-lib-modules\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140220 kubelet[2627]: I0514 23:42:07.140007 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-kernel\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140354 kubelet[2627]: I0514 23:42:07.140050 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-bpf-maps\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140393 kubelet[2627]: I0514 23:42:07.140355 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45122a94-3b40-4c2e-8c24-172ce26e8ea7-clustermesh-secrets\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140393 kubelet[2627]: I0514 23:42:07.140371 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-run\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140393 kubelet[2627]: I0514 23:42:07.140387 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cni-path\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140499 kubelet[2627]: I0514 23:42:07.140464 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-xtables-lock\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140531 kubelet[2627]: I0514 23:42:07.140521 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hubble-tls\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140574 kubelet[2627]: I0514 23:42:07.140545 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-cgroup\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140574 kubelet[2627]: I0514 23:42:07.140563 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65zq\" (UniqueName: \"kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-kube-api-access-q65zq\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.140629 kubelet[2627]: I0514 23:42:07.140582 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hostproc\") pod \"cilium-cljhl\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " pod="kube-system/cilium-cljhl"
May 14 23:42:07.192468 kubelet[2627]: E0514 23:42:07.190385 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.202183 systemd[1]: Created slice kubepods-besteffort-poda83919b3_026a_4937_9e0a_677048345683.slice - libcontainer container kubepods-besteffort-poda83919b3_026a_4937_9e0a_677048345683.slice.
May 14 23:42:07.241774 kubelet[2627]: I0514 23:42:07.241713 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd98b\" (UniqueName: \"kubernetes.io/projected/a83919b3-026a-4937-9e0a-677048345683-kube-api-access-hd98b\") pod \"cilium-operator-6c4d7847fc-k5dkd\" (UID: \"a83919b3-026a-4937-9e0a-677048345683\") " pod="kube-system/cilium-operator-6c4d7847fc-k5dkd"
May 14 23:42:07.241930 kubelet[2627]: I0514 23:42:07.241894 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a83919b3-026a-4937-9e0a-677048345683-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k5dkd\" (UID: \"a83919b3-026a-4937-9e0a-677048345683\") " pod="kube-system/cilium-operator-6c4d7847fc-k5dkd"
May 14 23:42:07.410881 kubelet[2627]: E0514 23:42:07.410780 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.411303 containerd[1505]: time="2025-05-14T23:42:07.411250934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdffq,Uid:4812c3d3-5eab-4ce2-a743-4f8094db9082,Namespace:kube-system,Attempt:0,}"
May 14 23:42:07.416018 kubelet[2627]: E0514 23:42:07.415980 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.416436 containerd[1505]: time="2025-05-14T23:42:07.416395622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cljhl,Uid:45122a94-3b40-4c2e-8c24-172ce26e8ea7,Namespace:kube-system,Attempt:0,}"
May 14 23:42:07.506578 kubelet[2627]: E0514 23:42:07.506542 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.507082 containerd[1505]: time="2025-05-14T23:42:07.507047614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k5dkd,Uid:a83919b3-026a-4937-9e0a-677048345683,Namespace:kube-system,Attempt:0,}"
May 14 23:42:07.594595 containerd[1505]: time="2025-05-14T23:42:07.594275489Z" level=info msg="connecting to shim 7e7d562be0faf54513445343798e9e3e38f0e4b1d8100dbdacd9fe488629a389" address="unix:///run/containerd/s/ce7cab95caf836badac49bb061f93dc0d5f99f2284927917a9721093341bc6aa" namespace=k8s.io protocol=ttrpc version=3
May 14 23:42:07.600024 containerd[1505]: time="2025-05-14T23:42:07.599975453Z" level=info msg="connecting to shim 6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79" address="unix:///run/containerd/s/dfde5cfcaace0afad3728cdc10d0974e9e2ccfed978854ba857aa9caa830385a" namespace=k8s.io protocol=ttrpc version=3
May 14 23:42:07.604884 containerd[1505]: time="2025-05-14T23:42:07.604481976Z" level=info msg="connecting to shim 263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb" address="unix:///run/containerd/s/e4abf58c9e7a9c4cc7fda330165d52c02745b659d6867102c5fe8ee52ca5af4e" namespace=k8s.io protocol=ttrpc version=3
May 14 23:42:07.624311 systemd[1]: Started cri-containerd-7e7d562be0faf54513445343798e9e3e38f0e4b1d8100dbdacd9fe488629a389.scope - libcontainer container 7e7d562be0faf54513445343798e9e3e38f0e4b1d8100dbdacd9fe488629a389.
May 14 23:42:07.649074 systemd[1]: Started cri-containerd-263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb.scope - libcontainer container 263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb.
May 14 23:42:07.653473 systemd[1]: Started cri-containerd-6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79.scope - libcontainer container 6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79.
May 14 23:42:07.669735 containerd[1505]: time="2025-05-14T23:42:07.669494519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdffq,Uid:4812c3d3-5eab-4ce2-a743-4f8094db9082,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e7d562be0faf54513445343798e9e3e38f0e4b1d8100dbdacd9fe488629a389\""
May 14 23:42:07.670962 kubelet[2627]: E0514 23:42:07.670937 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.673257 containerd[1505]: time="2025-05-14T23:42:07.673020212Z" level=info msg="CreateContainer within sandbox \"7e7d562be0faf54513445343798e9e3e38f0e4b1d8100dbdacd9fe488629a389\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 23:42:07.690257 containerd[1505]: time="2025-05-14T23:42:07.690200726Z" level=info msg="Container 834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:07.693072 containerd[1505]: time="2025-05-14T23:42:07.692989866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cljhl,Uid:45122a94-3b40-4c2e-8c24-172ce26e8ea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\""
May 14 23:42:07.693749 kubelet[2627]: E0514 23:42:07.693725 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.694879 containerd[1505]: time="2025-05-14T23:42:07.694858400Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 14 23:42:07.703352 containerd[1505]: time="2025-05-14T23:42:07.703252030Z" level=info msg="CreateContainer within sandbox \"7e7d562be0faf54513445343798e9e3e38f0e4b1d8100dbdacd9fe488629a389\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e\""
May 14 23:42:07.704207 containerd[1505]: time="2025-05-14T23:42:07.704176694Z" level=info msg="StartContainer for \"834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e\""
May 14 23:42:07.705977 containerd[1505]: time="2025-05-14T23:42:07.705947000Z" level=info msg="connecting to shim 834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e" address="unix:///run/containerd/s/ce7cab95caf836badac49bb061f93dc0d5f99f2284927917a9721093341bc6aa" protocol=ttrpc version=3
May 14 23:42:07.713942 containerd[1505]: time="2025-05-14T23:42:07.713915986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k5dkd,Uid:a83919b3-026a-4937-9e0a-677048345683,Namespace:kube-system,Attempt:0,} returns sandbox id \"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\""
May 14 23:42:07.714597 kubelet[2627]: E0514 23:42:07.714574 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:07.731263 systemd[1]: Started cri-containerd-834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e.scope - libcontainer container 834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e.
May 14 23:42:07.773655 containerd[1505]: time="2025-05-14T23:42:07.773609405Z" level=info msg="StartContainer for \"834d9831a5727b99dec913a48315e92f6f5afb26f0f398edb27cda05c677913e\" returns successfully" May 14 23:42:07.946335 kubelet[2627]: E0514 23:42:07.946185 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:07.955033 kubelet[2627]: I0514 23:42:07.954970 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wdffq" podStartSLOduration=0.954952415 podStartE2EDuration="954.952415ms" podCreationTimestamp="2025-05-14 23:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:07.954726792 +0000 UTC m=+6.122882737" watchObservedRunningTime="2025-05-14 23:42:07.954952415 +0000 UTC m=+6.123108360" May 14 23:42:13.300816 kubelet[2627]: E0514 23:42:13.300780 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:15.026633 kubelet[2627]: E0514 23:42:15.026559 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:15.823231 update_engine[1495]: I20250514 23:42:15.823164 1495 update_attempter.cc:509] Updating boot flags... 
May 14 23:42:15.955506 kubelet[2627]: E0514 23:42:15.955468 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:16.080171 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3014)
May 14 23:42:16.142797 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3014)
May 14 23:42:16.195214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3014)
May 14 23:42:17.212273 kubelet[2627]: E0514 23:42:17.212231 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:17.675208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255116439.mount: Deactivated successfully.
May 14 23:42:19.853970 containerd[1505]: time="2025-05-14T23:42:19.853904491Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:19.854628 containerd[1505]: time="2025-05-14T23:42:19.854584550Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 14 23:42:19.855701 containerd[1505]: time="2025-05-14T23:42:19.855667423Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:19.857361 containerd[1505]: time="2025-05-14T23:42:19.857315306Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.162356663s"
May 14 23:42:19.857361 containerd[1505]: time="2025-05-14T23:42:19.857357175Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 14 23:42:19.858518 containerd[1505]: time="2025-05-14T23:42:19.858456147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 14 23:42:19.859524 containerd[1505]: time="2025-05-14T23:42:19.859496029Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 23:42:19.870287 containerd[1505]: time="2025-05-14T23:42:19.870217418Z" level=info msg="Container d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:19.880399 containerd[1505]: time="2025-05-14T23:42:19.880356855Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\""
May 14 23:42:19.880946 containerd[1505]: time="2025-05-14T23:42:19.880910434Z" level=info msg="StartContainer for \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\""
May 14 23:42:19.881922 containerd[1505]: time="2025-05-14T23:42:19.881885382Z" level=info msg="connecting to shim d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2" address="unix:///run/containerd/s/dfde5cfcaace0afad3728cdc10d0974e9e2ccfed978854ba857aa9caa830385a" protocol=ttrpc version=3
May 14 23:42:19.906262 systemd[1]: Started cri-containerd-d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2.scope - libcontainer container d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2.
May 14 23:42:19.934907 containerd[1505]: time="2025-05-14T23:42:19.934866836Z" level=info msg="StartContainer for \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" returns successfully"
May 14 23:42:19.947025 systemd[1]: cri-containerd-d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2.scope: Deactivated successfully.
May 14 23:42:19.948651 containerd[1505]: time="2025-05-14T23:42:19.948606396Z" level=info msg="received exit event container_id:\"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" id:\"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" pid:3063 exited_at:{seconds:1747266139 nanos:948214082}"
May 14 23:42:19.948779 containerd[1505]: time="2025-05-14T23:42:19.948721284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" id:\"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" pid:3063 exited_at:{seconds:1747266139 nanos:948214082}"
May 14 23:42:19.965354 kubelet[2627]: E0514 23:42:19.965322 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:19.974930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2-rootfs.mount: Deactivated successfully.
May 14 23:42:20.967586 kubelet[2627]: E0514 23:42:20.967551 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:20.969857 containerd[1505]: time="2025-05-14T23:42:20.969817332Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 23:42:20.985573 containerd[1505]: time="2025-05-14T23:42:20.985518000Z" level=info msg="Container 52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:20.994904 containerd[1505]: time="2025-05-14T23:42:20.994637282Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\""
May 14 23:42:20.995343 containerd[1505]: time="2025-05-14T23:42:20.995313874Z" level=info msg="StartContainer for \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\""
May 14 23:42:20.997480 containerd[1505]: time="2025-05-14T23:42:20.997442426Z" level=info msg="connecting to shim 52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688" address="unix:///run/containerd/s/dfde5cfcaace0afad3728cdc10d0974e9e2ccfed978854ba857aa9caa830385a" protocol=ttrpc version=3
May 14 23:42:21.025243 systemd[1]: Started cri-containerd-52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688.scope - libcontainer container 52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688.
May 14 23:42:21.058704 containerd[1505]: time="2025-05-14T23:42:21.058666510Z" level=info msg="StartContainer for \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" returns successfully"
May 14 23:42:21.070784 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:42:21.071294 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:42:21.071740 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 23:42:21.073478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:42:21.075593 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:42:21.076061 containerd[1505]: time="2025-05-14T23:42:21.075698710Z" level=info msg="received exit event container_id:\"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" id:\"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" pid:3108 exited_at:{seconds:1747266141 nanos:75411006}"
May 14 23:42:21.076061 containerd[1505]: time="2025-05-14T23:42:21.075924869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" id:\"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" pid:3108 exited_at:{seconds:1747266141 nanos:75411006}"
May 14 23:42:21.076037 systemd[1]: cri-containerd-52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688.scope: Deactivated successfully.
May 14 23:42:21.101842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:42:21.809461 containerd[1505]: time="2025-05-14T23:42:21.809393631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:21.810173 containerd[1505]: time="2025-05-14T23:42:21.810072546Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 14 23:42:21.811279 containerd[1505]: time="2025-05-14T23:42:21.811233744Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:42:21.812222 containerd[1505]: time="2025-05-14T23:42:21.812173984Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.953662582s"
May 14 23:42:21.812285 containerd[1505]: time="2025-05-14T23:42:21.812226223Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 14 23:42:21.814015 containerd[1505]: time="2025-05-14T23:42:21.813987767Z" level=info msg="CreateContainer within sandbox \"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 23:42:21.822437 containerd[1505]: time="2025-05-14T23:42:21.822386686Z" level=info msg="Container b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:21.829285 containerd[1505]: time="2025-05-14T23:42:21.829221293Z" level=info msg="CreateContainer within sandbox \"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\""
May 14 23:42:21.829970 containerd[1505]: time="2025-05-14T23:42:21.829933451Z" level=info msg="StartContainer for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\""
May 14 23:42:21.830976 containerd[1505]: time="2025-05-14T23:42:21.830946027Z" level=info msg="connecting to shim b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251" address="unix:///run/containerd/s/e4abf58c9e7a9c4cc7fda330165d52c02745b659d6867102c5fe8ee52ca5af4e" protocol=ttrpc version=3
May 14 23:42:21.859271 systemd[1]: Started cri-containerd-b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251.scope - libcontainer container b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251.
May 14 23:42:21.888832 containerd[1505]: time="2025-05-14T23:42:21.888779631Z" level=info msg="StartContainer for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" returns successfully"
May 14 23:42:21.974914 kubelet[2627]: E0514 23:42:21.974860 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:21.979060 kubelet[2627]: E0514 23:42:21.979023 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:21.980934 containerd[1505]: time="2025-05-14T23:42:21.980894014Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:42:21.991138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688-rootfs.mount: Deactivated successfully.
May 14 23:42:22.007022 containerd[1505]: time="2025-05-14T23:42:22.004569496Z" level=info msg="Container e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:22.026176 containerd[1505]: time="2025-05-14T23:42:22.026100950Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\""
May 14 23:42:22.028198 containerd[1505]: time="2025-05-14T23:42:22.028171336Z" level=info msg="StartContainer for \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\""
May 14 23:42:22.030777 containerd[1505]: time="2025-05-14T23:42:22.030715920Z" level=info msg="connecting to shim e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2" address="unix:///run/containerd/s/dfde5cfcaace0afad3728cdc10d0974e9e2ccfed978854ba857aa9caa830385a" protocol=ttrpc version=3
May 14 23:42:22.070272 systemd[1]: Started cri-containerd-e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2.scope - libcontainer container e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2.
May 14 23:42:22.129411 systemd[1]: cri-containerd-e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2.scope: Deactivated successfully.
May 14 23:42:22.130563 containerd[1505]: time="2025-05-14T23:42:22.130504934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" id:\"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" pid:3209 exited_at:{seconds:1747266142 nanos:130157486}"
May 14 23:42:22.195978 containerd[1505]: time="2025-05-14T23:42:22.195911962Z" level=info msg="received exit event container_id:\"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" id:\"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" pid:3209 exited_at:{seconds:1747266142 nanos:130157486}"
May 14 23:42:22.199623 containerd[1505]: time="2025-05-14T23:42:22.199370204Z" level=info msg="StartContainer for \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" returns successfully"
May 14 23:42:22.222295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2-rootfs.mount: Deactivated successfully.
May 14 23:42:22.983201 kubelet[2627]: E0514 23:42:22.983165 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:22.983820 kubelet[2627]: E0514 23:42:22.983302 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:22.984971 containerd[1505]: time="2025-05-14T23:42:22.984920598Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:42:22.995492 containerd[1505]: time="2025-05-14T23:42:22.995452312Z" level=info msg="Container ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:22.998067 kubelet[2627]: I0514 23:42:22.997999 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k5dkd" podStartSLOduration=1.90023608 podStartE2EDuration="15.997968402s" podCreationTimestamp="2025-05-14 23:42:07 +0000 UTC" firstStartedPulling="2025-05-14 23:42:07.715228011 +0000 UTC m=+5.883383956" lastFinishedPulling="2025-05-14 23:42:21.812960333 +0000 UTC m=+19.981116278" observedRunningTime="2025-05-14 23:42:22.013184403 +0000 UTC m=+20.181340348" watchObservedRunningTime="2025-05-14 23:42:22.997968402 +0000 UTC m=+21.166124347"
May 14 23:42:23.001306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72420226.mount: Deactivated successfully.
May 14 23:42:23.003484 containerd[1505]: time="2025-05-14T23:42:23.003437727Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\""
May 14 23:42:23.003953 containerd[1505]: time="2025-05-14T23:42:23.003904549Z" level=info msg="StartContainer for \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\""
May 14 23:42:23.004959 containerd[1505]: time="2025-05-14T23:42:23.004932383Z" level=info msg="connecting to shim ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f" address="unix:///run/containerd/s/dfde5cfcaace0afad3728cdc10d0974e9e2ccfed978854ba857aa9caa830385a" protocol=ttrpc version=3
May 14 23:42:23.025257 systemd[1]: Started cri-containerd-ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f.scope - libcontainer container ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f.
May 14 23:42:23.051774 systemd[1]: cri-containerd-ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f.scope: Deactivated successfully.
May 14 23:42:23.054313 containerd[1505]: time="2025-05-14T23:42:23.054162689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" id:\"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" pid:3247 exited_at:{seconds:1747266143 nanos:52089248}"
May 14 23:42:23.054593 containerd[1505]: time="2025-05-14T23:42:23.054404515Z" level=info msg="received exit event container_id:\"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" id:\"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" pid:3247 exited_at:{seconds:1747266143 nanos:52089248}"
May 14 23:42:23.056831 containerd[1505]: time="2025-05-14T23:42:23.056786229Z" level=info msg="StartContainer for \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" returns successfully"
May 14 23:42:23.075508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f-rootfs.mount: Deactivated successfully.
May 14 23:42:23.993115 kubelet[2627]: E0514 23:42:23.992497 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:23.997172 containerd[1505]: time="2025-05-14T23:42:23.995527749Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:42:24.014532 containerd[1505]: time="2025-05-14T23:42:24.014307015Z" level=info msg="Container 9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:24.027720 containerd[1505]: time="2025-05-14T23:42:24.027433181Z" level=info msg="CreateContainer within sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\""
May 14 23:42:24.029788 containerd[1505]: time="2025-05-14T23:42:24.029762724Z" level=info msg="StartContainer for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\""
May 14 23:42:24.031631 containerd[1505]: time="2025-05-14T23:42:24.031508323Z" level=info msg="connecting to shim 9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61" address="unix:///run/containerd/s/dfde5cfcaace0afad3728cdc10d0974e9e2ccfed978854ba857aa9caa830385a" protocol=ttrpc version=3
May 14 23:42:24.064287 systemd[1]: Started cri-containerd-9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61.scope - libcontainer container 9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61.
May 14 23:42:24.097829 containerd[1505]: time="2025-05-14T23:42:24.097779387Z" level=info msg="StartContainer for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" returns successfully"
May 14 23:42:24.164176 containerd[1505]: time="2025-05-14T23:42:24.164072051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" id:\"77ecb301fee9d91b5c069f8c1a4f0f65b4bff46ef1ba558e2a772dff7b02a260\" pid:3316 exited_at:{seconds:1747266144 nanos:162519217}"
May 14 23:42:24.227075 kubelet[2627]: I0514 23:42:24.227038 2627 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 14 23:42:24.254102 systemd[1]: Created slice kubepods-burstable-pod872fb38a_4c3d_447a_b64e_b8f614d12232.slice - libcontainer container kubepods-burstable-pod872fb38a_4c3d_447a_b64e_b8f614d12232.slice.
May 14 23:42:24.259880 kubelet[2627]: I0514 23:42:24.259858 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/872fb38a-4c3d-447a-b64e-b8f614d12232-config-volume\") pod \"coredns-668d6bf9bc-zskss\" (UID: \"872fb38a-4c3d-447a-b64e-b8f614d12232\") " pod="kube-system/coredns-668d6bf9bc-zskss"
May 14 23:42:24.259960 kubelet[2627]: I0514 23:42:24.259888 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpph\" (UniqueName: \"kubernetes.io/projected/872fb38a-4c3d-447a-b64e-b8f614d12232-kube-api-access-srpph\") pod \"coredns-668d6bf9bc-zskss\" (UID: \"872fb38a-4c3d-447a-b64e-b8f614d12232\") " pod="kube-system/coredns-668d6bf9bc-zskss"
May 14 23:42:24.263796 systemd[1]: Created slice kubepods-burstable-pod097dee9d_1df7_43d6_951b_07e2d49eaf30.slice - libcontainer container kubepods-burstable-pod097dee9d_1df7_43d6_951b_07e2d49eaf30.slice.
May 14 23:42:24.360855 kubelet[2627]: I0514 23:42:24.360817 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/097dee9d-1df7-43d6-951b-07e2d49eaf30-config-volume\") pod \"coredns-668d6bf9bc-k6q2z\" (UID: \"097dee9d-1df7-43d6-951b-07e2d49eaf30\") " pod="kube-system/coredns-668d6bf9bc-k6q2z"
May 14 23:42:24.360986 kubelet[2627]: I0514 23:42:24.360866 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfwxg\" (UniqueName: \"kubernetes.io/projected/097dee9d-1df7-43d6-951b-07e2d49eaf30-kube-api-access-zfwxg\") pod \"coredns-668d6bf9bc-k6q2z\" (UID: \"097dee9d-1df7-43d6-951b-07e2d49eaf30\") " pod="kube-system/coredns-668d6bf9bc-k6q2z"
May 14 23:42:24.474193 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:56448.service - OpenSSH per-connection server daemon (10.0.0.1:56448).
May 14 23:42:24.526464 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 56448 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I
May 14 23:42:24.527739 sshd-session[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:24.532796 systemd-logind[1493]: New session 8 of user core.
May 14 23:42:24.540292 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 23:42:24.558657 kubelet[2627]: E0514 23:42:24.558596 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:24.559232 containerd[1505]: time="2025-05-14T23:42:24.559182089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zskss,Uid:872fb38a-4c3d-447a-b64e-b8f614d12232,Namespace:kube-system,Attempt:0,}"
May 14 23:42:24.570085 kubelet[2627]: E0514 23:42:24.570057 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:24.571840 containerd[1505]: time="2025-05-14T23:42:24.571801017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6q2z,Uid:097dee9d-1df7-43d6-951b-07e2d49eaf30,Namespace:kube-system,Attempt:0,}"
May 14 23:42:24.679493 sshd[3374]: Connection closed by 10.0.0.1 port 56448
May 14 23:42:24.679817 sshd-session[3352]: pam_unix(sshd:session): session closed for user core
May 14 23:42:24.683654 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit.
May 14 23:42:24.684424 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:56448.service: Deactivated successfully.
May 14 23:42:24.686648 systemd[1]: session-8.scope: Deactivated successfully.
May 14 23:42:24.687506 systemd-logind[1493]: Removed session 8.
May 14 23:42:24.999556 kubelet[2627]: E0514 23:42:24.999523 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:25.015842 kubelet[2627]: I0514 23:42:25.015740 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cljhl" podStartSLOduration=5.851958729 podStartE2EDuration="18.015725907s" podCreationTimestamp="2025-05-14 23:42:07 +0000 UTC" firstStartedPulling="2025-05-14 23:42:07.694473642 +0000 UTC m=+5.862629587" lastFinishedPulling="2025-05-14 23:42:19.85824082 +0000 UTC m=+18.026396765" observedRunningTime="2025-05-14 23:42:25.015596863 +0000 UTC m=+23.183752808" watchObservedRunningTime="2025-05-14 23:42:25.015725907 +0000 UTC m=+23.183881852"
May 14 23:42:26.001735 kubelet[2627]: E0514 23:42:26.001693 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:26.260175 systemd-networkd[1434]: cilium_host: Link UP
May 14 23:42:26.260506 systemd-networkd[1434]: cilium_net: Link UP
May 14 23:42:26.260753 systemd-networkd[1434]: cilium_net: Gained carrier
May 14 23:42:26.260976 systemd-networkd[1434]: cilium_host: Gained carrier
May 14 23:42:26.377386 systemd-networkd[1434]: cilium_vxlan: Link UP
May 14 23:42:26.377398 systemd-networkd[1434]: cilium_vxlan: Gained carrier
May 14 23:42:26.492371 systemd-networkd[1434]: cilium_net: Gained IPv6LL
May 14 23:42:26.588163 kernel: NET: Registered PF_ALG protocol family
May 14 23:42:26.750477 systemd-networkd[1434]: cilium_host: Gained IPv6LL
May 14 23:42:27.003236 kubelet[2627]: E0514 23:42:27.003083 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:27.256249 systemd-networkd[1434]: lxc_health: Link UP
May 14 23:42:27.265284 systemd-networkd[1434]: lxc_health: Gained carrier
May 14 23:42:27.641656 systemd-networkd[1434]: lxc00e089d2401e: Link UP
May 14 23:42:27.651243 kernel: eth0: renamed from tmp07fcc
May 14 23:42:27.662784 systemd-networkd[1434]: lxca7a1138fcc97: Link UP
May 14 23:42:27.663796 kernel: eth0: renamed from tmp29397
May 14 23:42:27.663233 systemd-networkd[1434]: lxc00e089d2401e: Gained carrier
May 14 23:42:27.669176 systemd-networkd[1434]: lxca7a1138fcc97: Gained carrier
May 14 23:42:27.974814 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL
May 14 23:42:28.005255 kubelet[2627]: E0514 23:42:28.005210 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:28.612474 systemd-networkd[1434]: lxc_health: Gained IPv6LL
May 14 23:42:28.933253 systemd-networkd[1434]: lxc00e089d2401e: Gained IPv6LL
May 14 23:42:29.637002 systemd-networkd[1434]: lxca7a1138fcc97: Gained IPv6LL
May 14 23:42:29.696829 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:58124.service - OpenSSH per-connection server daemon (10.0.0.1:58124).
May 14 23:42:29.801040 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 58124 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I
May 14 23:42:29.802546 sshd-session[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:42:29.807337 systemd-logind[1493]: New session 9 of user core.
May 14 23:42:29.812295 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 23:42:29.950933 sshd[3811]: Connection closed by 10.0.0.1 port 58124
May 14 23:42:29.952776 sshd-session[3806]: pam_unix(sshd:session): session closed for user core
May 14 23:42:29.956537 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:58124.service: Deactivated successfully.
May 14 23:42:29.958823 systemd[1]: session-9.scope: Deactivated successfully.
May 14 23:42:29.959723 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit.
May 14 23:42:29.960595 systemd-logind[1493]: Removed session 9.
May 14 23:42:31.000091 containerd[1505]: time="2025-05-14T23:42:30.999997068Z" level=info msg="connecting to shim 07fcc3320d24b90d285ed4c9168266dca555899027f90e116c49d260b9cf4576" address="unix:///run/containerd/s/018eb263a4bb242e8c9ec209cc9d6c55cfaec7032da0609d80b46ec8ab16c873" namespace=k8s.io protocol=ttrpc version=3
May 14 23:42:31.015006 containerd[1505]: time="2025-05-14T23:42:31.014947907Z" level=info msg="connecting to shim 293974dab2e0e9e3aecb635167408d1342a269c678fa8fdd9173cea9479208bc" address="unix:///run/containerd/s/dc2bb1366526f8df8a2008517a7473ed58af868755a4410fd75bf587ee54232d" namespace=k8s.io protocol=ttrpc version=3
May 14 23:42:31.031289 systemd[1]: Started cri-containerd-07fcc3320d24b90d285ed4c9168266dca555899027f90e116c49d260b9cf4576.scope - libcontainer container 07fcc3320d24b90d285ed4c9168266dca555899027f90e116c49d260b9cf4576.
May 14 23:42:31.055244 systemd[1]: Started cri-containerd-293974dab2e0e9e3aecb635167408d1342a269c678fa8fdd9173cea9479208bc.scope - libcontainer container 293974dab2e0e9e3aecb635167408d1342a269c678fa8fdd9173cea9479208bc.
May 14 23:42:31.059856 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 23:42:31.067950 systemd-resolved[1346]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 23:42:31.090262 containerd[1505]: time="2025-05-14T23:42:31.090225434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zskss,Uid:872fb38a-4c3d-447a-b64e-b8f614d12232,Namespace:kube-system,Attempt:0,} returns sandbox id \"07fcc3320d24b90d285ed4c9168266dca555899027f90e116c49d260b9cf4576\""
May 14 23:42:31.090761 kubelet[2627]: E0514 23:42:31.090739 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:31.093060 containerd[1505]: time="2025-05-14T23:42:31.093015392Z" level=info msg="CreateContainer within sandbox \"07fcc3320d24b90d285ed4c9168266dca555899027f90e116c49d260b9cf4576\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:42:31.232849 containerd[1505]: time="2025-05-14T23:42:31.232791823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k6q2z,Uid:097dee9d-1df7-43d6-951b-07e2d49eaf30,Namespace:kube-system,Attempt:0,} returns sandbox id \"293974dab2e0e9e3aecb635167408d1342a269c678fa8fdd9173cea9479208bc\""
May 14 23:42:31.233752 kubelet[2627]: E0514 23:42:31.233716 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 23:42:31.235468 containerd[1505]: time="2025-05-14T23:42:31.235435355Z" level=info msg="CreateContainer within sandbox \"293974dab2e0e9e3aecb635167408d1342a269c678fa8fdd9173cea9479208bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:42:31.494155 containerd[1505]: time="2025-05-14T23:42:31.494075482Z" level=info msg="Container cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:31.612363 containerd[1505]: time="2025-05-14T23:42:31.612309229Z" level=info msg="Container b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270: CDI devices from CRI Config.CDIDevices: []"
May 14 23:42:31.771458 containerd[1505]: time="2025-05-14T23:42:31.771324971Z" level=info msg="CreateContainer within sandbox \"07fcc3320d24b90d285ed4c9168266dca555899027f90e116c49d260b9cf4576\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f\""
May 14 23:42:31.771951 containerd[1505]: time="2025-05-14T23:42:31.771915685Z" level=info msg="StartContainer for \"cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f\""
May 14 23:42:31.772820 containerd[1505]: time="2025-05-14T23:42:31.772772359Z" level=info msg="connecting to shim cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f" address="unix:///run/containerd/s/018eb263a4bb242e8c9ec209cc9d6c55cfaec7032da0609d80b46ec8ab16c873" protocol=ttrpc version=3
May 14 23:42:31.774504 containerd[1505]: time="2025-05-14T23:42:31.774369219Z" level=info msg="CreateContainer within sandbox \"293974dab2e0e9e3aecb635167408d1342a269c678fa8fdd9173cea9479208bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270\""
May 14 23:42:31.774930 containerd[1505]: time="2025-05-14T23:42:31.774877497Z" level=info msg="StartContainer for \"b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270\""
May 14 23:42:31.776934 containerd[1505]: time="2025-05-14T23:42:31.776785273Z" level=info msg="connecting to shim b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270" address="unix:///run/containerd/s/dc2bb1366526f8df8a2008517a7473ed58af868755a4410fd75bf587ee54232d" protocol=ttrpc version=3
May 14 23:42:31.791287 systemd[1]: Started cri-containerd-cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f.scope - libcontainer container cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f.
May 14 23:42:31.795280 systemd[1]: Started cri-containerd-b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270.scope - libcontainer container b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270.
May 14 23:42:31.824488 containerd[1505]: time="2025-05-14T23:42:31.824437514Z" level=info msg="StartContainer for \"cb4a31e5a29610e8436ce228d74e3838d16d000e1103a215005a2f4c11713a4f\" returns successfully"
May 14 23:42:31.827680 containerd[1505]: time="2025-05-14T23:42:31.827646091Z" level=info msg="StartContainer for \"b539abe302b19b9ea65e9f8d606b6fdcf19666f96dc29d417887ef879f9f7270\" returns successfully"
May 14 23:42:31.997104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878213497.mount: Deactivated successfully.
May 14 23:42:32.016105 kubelet[2627]: E0514 23:42:32.015434 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:32.018101 kubelet[2627]: E0514 23:42:32.017989 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:32.038716 kubelet[2627]: I0514 23:42:32.038470 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zskss" podStartSLOduration=25.038454827 podStartE2EDuration="25.038454827s" podCreationTimestamp="2025-05-14 23:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:32.037277679 +0000 UTC m=+30.205433624" watchObservedRunningTime="2025-05-14 23:42:32.038454827 +0000 UTC m=+30.206610772" May 14 23:42:32.183391 kubelet[2627]: I0514 23:42:32.181835 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k6q2z" podStartSLOduration=25.18179323 podStartE2EDuration="25.18179323s" podCreationTimestamp="2025-05-14 23:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:42:32.181001719 +0000 UTC m=+30.349157664" watchObservedRunningTime="2025-05-14 23:42:32.18179323 +0000 UTC m=+30.349949175" May 14 23:42:33.019982 kubelet[2627]: E0514 23:42:33.019935 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:33.020171 kubelet[2627]: E0514 23:42:33.020155 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:34.021826 kubelet[2627]: E0514 23:42:34.021793 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:34.022294 kubelet[2627]: E0514 23:42:34.022107 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:34.966481 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:58140.service - OpenSSH per-connection server daemon (10.0.0.1:58140). May 14 23:42:35.020519 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 58140 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:35.022165 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:35.026808 systemd-logind[1493]: New session 10 of user core. May 14 23:42:35.037258 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 23:42:35.161937 sshd[4001]: Connection closed by 10.0.0.1 port 58140 May 14 23:42:35.162296 sshd-session[3999]: pam_unix(sshd:session): session closed for user core May 14 23:42:35.166853 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:58140.service: Deactivated successfully. May 14 23:42:35.169038 systemd[1]: session-10.scope: Deactivated successfully. May 14 23:42:35.169870 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. May 14 23:42:35.170784 systemd-logind[1493]: Removed session 10. 
May 14 23:42:35.982984 kubelet[2627]: I0514 23:42:35.982816 2627 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 23:42:35.983601 kubelet[2627]: E0514 23:42:35.983347 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:36.025692 kubelet[2627]: E0514 23:42:36.025643 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:42:40.178838 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:35316.service - OpenSSH per-connection server daemon (10.0.0.1:35316). May 14 23:42:40.228804 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 35316 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:40.230307 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:40.234369 systemd-logind[1493]: New session 11 of user core. May 14 23:42:40.250305 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 23:42:40.356777 sshd[4020]: Connection closed by 10.0.0.1 port 35316 May 14 23:42:40.357231 sshd-session[4018]: pam_unix(sshd:session): session closed for user core May 14 23:42:40.368693 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:35316.service: Deactivated successfully. May 14 23:42:40.370689 systemd[1]: session-11.scope: Deactivated successfully. May 14 23:42:40.372342 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. May 14 23:42:40.373773 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:35332.service - OpenSSH per-connection server daemon (10.0.0.1:35332). May 14 23:42:40.375021 systemd-logind[1493]: Removed session 11. 
May 14 23:42:40.427722 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 35332 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:40.430467 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:40.435590 systemd-logind[1493]: New session 12 of user core. May 14 23:42:40.445257 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 23:42:40.611205 sshd[4037]: Connection closed by 10.0.0.1 port 35332 May 14 23:42:40.611711 sshd-session[4034]: pam_unix(sshd:session): session closed for user core May 14 23:42:40.621413 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:35332.service: Deactivated successfully. May 14 23:42:40.623395 systemd[1]: session-12.scope: Deactivated successfully. May 14 23:42:40.626310 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. May 14 23:42:40.628186 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:35346.service - OpenSSH per-connection server daemon (10.0.0.1:35346). May 14 23:42:40.630943 systemd-logind[1493]: Removed session 12. May 14 23:42:40.677770 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 35346 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:40.679786 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:40.685002 systemd-logind[1493]: New session 13 of user core. May 14 23:42:40.694292 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 23:42:40.821432 sshd[4050]: Connection closed by 10.0.0.1 port 35346 May 14 23:42:40.821793 sshd-session[4047]: pam_unix(sshd:session): session closed for user core May 14 23:42:40.827330 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:35346.service: Deactivated successfully. May 14 23:42:40.830367 systemd[1]: session-13.scope: Deactivated successfully. May 14 23:42:40.831206 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. 
May 14 23:42:40.832087 systemd-logind[1493]: Removed session 13. May 14 23:42:45.839798 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:35350.service - OpenSSH per-connection server daemon (10.0.0.1:35350). May 14 23:42:45.883109 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 35350 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:45.884588 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:45.888659 systemd-logind[1493]: New session 14 of user core. May 14 23:42:45.898297 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 23:42:46.007987 sshd[4066]: Connection closed by 10.0.0.1 port 35350 May 14 23:42:46.008304 sshd-session[4064]: pam_unix(sshd:session): session closed for user core May 14 23:42:46.012201 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:35350.service: Deactivated successfully. May 14 23:42:46.014225 systemd[1]: session-14.scope: Deactivated successfully. May 14 23:42:46.014922 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. May 14 23:42:46.015872 systemd-logind[1493]: Removed session 14. May 14 23:42:51.024780 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:59644.service - OpenSSH per-connection server daemon (10.0.0.1:59644). May 14 23:42:51.076326 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 59644 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:51.078172 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:51.082924 systemd-logind[1493]: New session 15 of user core. May 14 23:42:51.093381 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 14 23:42:51.208967 sshd[4081]: Connection closed by 10.0.0.1 port 59644 May 14 23:42:51.209279 sshd-session[4079]: pam_unix(sshd:session): session closed for user core May 14 23:42:51.213037 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:59644.service: Deactivated successfully. May 14 23:42:51.215060 systemd[1]: session-15.scope: Deactivated successfully. May 14 23:42:51.215851 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. May 14 23:42:51.216810 systemd-logind[1493]: Removed session 15. May 14 23:42:56.223382 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:59650.service - OpenSSH per-connection server daemon (10.0.0.1:59650). May 14 23:42:56.274364 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 59650 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:56.276004 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:56.280562 systemd-logind[1493]: New session 16 of user core. May 14 23:42:56.290330 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 23:42:56.399231 sshd[4097]: Connection closed by 10.0.0.1 port 59650 May 14 23:42:56.399606 sshd-session[4095]: pam_unix(sshd:session): session closed for user core May 14 23:42:56.412444 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:59650.service: Deactivated successfully. May 14 23:42:56.414552 systemd[1]: session-16.scope: Deactivated successfully. May 14 23:42:56.415972 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. May 14 23:42:56.417601 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). May 14 23:42:56.418569 systemd-logind[1493]: Removed session 16. 
May 14 23:42:56.468702 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:56.470625 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:56.475567 systemd-logind[1493]: New session 17 of user core. May 14 23:42:56.488316 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 23:42:57.282002 sshd[4112]: Connection closed by 10.0.0.1 port 59654 May 14 23:42:57.282565 sshd-session[4109]: pam_unix(sshd:session): session closed for user core May 14 23:42:57.295122 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:59654.service: Deactivated successfully. May 14 23:42:57.296991 systemd[1]: session-17.scope: Deactivated successfully. May 14 23:42:57.301261 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. May 14 23:42:57.301946 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:59662.service - OpenSSH per-connection server daemon (10.0.0.1:59662). May 14 23:42:57.303304 systemd-logind[1493]: Removed session 17. May 14 23:42:57.354852 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 59662 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:57.356493 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:57.361282 systemd-logind[1493]: New session 18 of user core. May 14 23:42:57.380289 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 23:42:58.702324 sshd[4126]: Connection closed by 10.0.0.1 port 59662 May 14 23:42:58.702808 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 14 23:42:58.715396 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:59662.service: Deactivated successfully. May 14 23:42:58.717734 systemd[1]: session-18.scope: Deactivated successfully. May 14 23:42:58.719557 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. 
May 14 23:42:58.720952 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:58784.service - OpenSSH per-connection server daemon (10.0.0.1:58784). May 14 23:42:58.722236 systemd-logind[1493]: Removed session 18. May 14 23:42:58.766212 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 58784 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:58.767985 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:58.772512 systemd-logind[1493]: New session 19 of user core. May 14 23:42:58.782250 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 23:42:59.407179 sshd[4160]: Connection closed by 10.0.0.1 port 58784 May 14 23:42:59.407576 sshd-session[4157]: pam_unix(sshd:session): session closed for user core May 14 23:42:59.416681 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:58784.service: Deactivated successfully. May 14 23:42:59.418846 systemd[1]: session-19.scope: Deactivated successfully. May 14 23:42:59.420622 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. May 14 23:42:59.422896 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:58800.service - OpenSSH per-connection server daemon (10.0.0.1:58800). May 14 23:42:59.423939 systemd-logind[1493]: Removed session 19. May 14 23:42:59.478581 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 58800 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:42:59.480023 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:42:59.484417 systemd-logind[1493]: New session 20 of user core. May 14 23:42:59.494273 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 14 23:42:59.605833 sshd[4174]: Connection closed by 10.0.0.1 port 58800 May 14 23:42:59.606203 sshd-session[4171]: pam_unix(sshd:session): session closed for user core May 14 23:42:59.610763 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:58800.service: Deactivated successfully. May 14 23:42:59.613049 systemd[1]: session-20.scope: Deactivated successfully. May 14 23:42:59.613828 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. May 14 23:42:59.614764 systemd-logind[1493]: Removed session 20. May 14 23:43:04.618220 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:58802.service - OpenSSH per-connection server daemon (10.0.0.1:58802). May 14 23:43:04.667172 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 58802 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:04.668899 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:04.673408 systemd-logind[1493]: New session 21 of user core. May 14 23:43:04.683351 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 23:43:04.795301 sshd[4191]: Connection closed by 10.0.0.1 port 58802 May 14 23:43:04.795674 sshd-session[4189]: pam_unix(sshd:session): session closed for user core May 14 23:43:04.799839 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:58802.service: Deactivated successfully. May 14 23:43:04.801900 systemd[1]: session-21.scope: Deactivated successfully. May 14 23:43:04.802568 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. May 14 23:43:04.803461 systemd-logind[1493]: Removed session 21. May 14 23:43:09.810114 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:38110.service - OpenSSH per-connection server daemon (10.0.0.1:38110). 
May 14 23:43:09.860063 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 38110 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:09.861890 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:09.866283 systemd-logind[1493]: New session 22 of user core. May 14 23:43:09.876256 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 23:43:09.996678 sshd[4208]: Connection closed by 10.0.0.1 port 38110 May 14 23:43:09.997069 sshd-session[4206]: pam_unix(sshd:session): session closed for user core May 14 23:43:10.001540 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:38110.service: Deactivated successfully. May 14 23:43:10.003926 systemd[1]: session-22.scope: Deactivated successfully. May 14 23:43:10.004859 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. May 14 23:43:10.005928 systemd-logind[1493]: Removed session 22. May 14 23:43:15.009375 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:38122.service - OpenSSH per-connection server daemon (10.0.0.1:38122). May 14 23:43:15.056929 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 38122 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:15.066331 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:15.070864 systemd-logind[1493]: New session 23 of user core. May 14 23:43:15.082265 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 23:43:15.192286 sshd[4226]: Connection closed by 10.0.0.1 port 38122 May 14 23:43:15.192650 sshd-session[4224]: pam_unix(sshd:session): session closed for user core May 14 23:43:15.197441 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:38122.service: Deactivated successfully. May 14 23:43:15.199556 systemd[1]: session-23.scope: Deactivated successfully. May 14 23:43:15.200401 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. 
May 14 23:43:15.201324 systemd-logind[1493]: Removed session 23. May 14 23:43:18.922402 kubelet[2627]: E0514 23:43:18.922360 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:18.922973 kubelet[2627]: E0514 23:43:18.922516 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:20.206296 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:54702.service - OpenSSH per-connection server daemon (10.0.0.1:54702). May 14 23:43:20.258099 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 54702 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:20.259701 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:20.264092 systemd-logind[1493]: New session 24 of user core. May 14 23:43:20.272260 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 23:43:20.393113 sshd[4242]: Connection closed by 10.0.0.1 port 54702 May 14 23:43:20.393506 sshd-session[4240]: pam_unix(sshd:session): session closed for user core May 14 23:43:20.398141 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:54702.service: Deactivated successfully. May 14 23:43:20.400102 systemd[1]: session-24.scope: Deactivated successfully. May 14 23:43:20.400881 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. May 14 23:43:20.401778 systemd-logind[1493]: Removed session 24. May 14 23:43:25.406266 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:54710.service - OpenSSH per-connection server daemon (10.0.0.1:54710). 
May 14 23:43:25.452501 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 54710 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:25.453893 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:25.458029 systemd-logind[1493]: New session 25 of user core. May 14 23:43:25.468265 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 23:43:25.579649 sshd[4257]: Connection closed by 10.0.0.1 port 54710 May 14 23:43:25.580196 sshd-session[4255]: pam_unix(sshd:session): session closed for user core May 14 23:43:25.590437 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:54710.service: Deactivated successfully. May 14 23:43:25.592648 systemd[1]: session-25.scope: Deactivated successfully. May 14 23:43:25.594543 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. May 14 23:43:25.596291 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:54726.service - OpenSSH per-connection server daemon (10.0.0.1:54726). May 14 23:43:25.597284 systemd-logind[1493]: Removed session 25. May 14 23:43:25.653923 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 54726 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:25.655525 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:25.660348 systemd-logind[1493]: New session 26 of user core. May 14 23:43:25.670282 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 14 23:43:27.207785 containerd[1505]: time="2025-05-14T23:43:27.207739221Z" level=info msg="StopContainer for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" with timeout 30 (s)" May 14 23:43:27.211492 containerd[1505]: time="2025-05-14T23:43:27.208176749Z" level=info msg="Stop container \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" with signal terminated" May 14 23:43:27.225483 systemd[1]: cri-containerd-b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251.scope: Deactivated successfully. May 14 23:43:27.227832 containerd[1505]: time="2025-05-14T23:43:27.227197914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" id:\"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" pid:3171 exited_at:{seconds:1747266207 nanos:226699679}" May 14 23:43:27.227832 containerd[1505]: time="2025-05-14T23:43:27.227300220Z" level=info msg="received exit event container_id:\"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" id:\"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" pid:3171 exited_at:{seconds:1747266207 nanos:226699679}" May 14 23:43:27.232748 containerd[1505]: time="2025-05-14T23:43:27.232714196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" id:\"cf90b8881273c41091bb8c9e7232b4911956ed586bf85bcb16d2ef2095aa4336\" pid:4292 exited_at:{seconds:1747266207 nanos:232496721}" May 14 23:43:27.234864 containerd[1505]: time="2025-05-14T23:43:27.234841871Z" level=info msg="StopContainer for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" with timeout 2 (s)" May 14 23:43:27.236390 containerd[1505]: time="2025-05-14T23:43:27.236372953Z" level=info msg="Stop container \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" with signal terminated" May 14 23:43:27.239175 
containerd[1505]: time="2025-05-14T23:43:27.238648951Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:43:27.245056 systemd-networkd[1434]: lxc_health: Link DOWN May 14 23:43:27.245064 systemd-networkd[1434]: lxc_health: Lost carrier May 14 23:43:27.260689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251-rootfs.mount: Deactivated successfully. May 14 23:43:27.266584 systemd[1]: cri-containerd-9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61.scope: Deactivated successfully. May 14 23:43:27.266944 systemd[1]: cri-containerd-9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61.scope: Consumed 6.770s CPU time, 126.3M memory peak, 212K read from disk, 13.3M written to disk. May 14 23:43:27.268265 containerd[1505]: time="2025-05-14T23:43:27.268222142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" id:\"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" pid:3284 exited_at:{seconds:1747266207 nanos:267831334}" May 14 23:43:27.268546 containerd[1505]: time="2025-05-14T23:43:27.268483872Z" level=info msg="received exit event container_id:\"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" id:\"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" pid:3284 exited_at:{seconds:1747266207 nanos:267831334}" May 14 23:43:27.288704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61-rootfs.mount: Deactivated successfully. 
May 14 23:43:27.515472 containerd[1505]: time="2025-05-14T23:43:27.515337849Z" level=info msg="StopContainer for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" returns successfully" May 14 23:43:27.516244 containerd[1505]: time="2025-05-14T23:43:27.516198597Z" level=info msg="StopPodSandbox for \"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\"" May 14 23:43:27.527480 containerd[1505]: time="2025-05-14T23:43:27.526918114Z" level=info msg="Container to stop \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:43:27.527480 containerd[1505]: time="2025-05-14T23:43:27.527441727Z" level=info msg="StopContainer for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" returns successfully" May 14 23:43:27.529362 containerd[1505]: time="2025-05-14T23:43:27.528799617Z" level=info msg="StopPodSandbox for \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\"" May 14 23:43:27.529362 containerd[1505]: time="2025-05-14T23:43:27.528889629Z" level=info msg="Container to stop \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:43:27.529362 containerd[1505]: time="2025-05-14T23:43:27.528902244Z" level=info msg="Container to stop \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:43:27.529362 containerd[1505]: time="2025-05-14T23:43:27.528918053Z" level=info msg="Container to stop \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:43:27.529362 containerd[1505]: time="2025-05-14T23:43:27.528927071Z" level=info msg="Container to stop \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" May 14 23:43:27.529362 containerd[1505]: time="2025-05-14T23:43:27.528936259Z" level=info msg="Container to stop \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:43:27.539117 systemd[1]: cri-containerd-6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79.scope: Deactivated successfully. May 14 23:43:27.539617 containerd[1505]: time="2025-05-14T23:43:27.539573588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" id:\"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" pid:2814 exit_status:137 exited_at:{seconds:1747266207 nanos:539024918}" May 14 23:43:27.542223 systemd[1]: cri-containerd-263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb.scope: Deactivated successfully. May 14 23:43:27.542581 systemd[1]: cri-containerd-263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb.scope: Consumed 32ms CPU time, 5.8M memory peak, 1.2M read from disk. May 14 23:43:27.569702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79-rootfs.mount: Deactivated successfully. May 14 23:43:27.572239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb-rootfs.mount: Deactivated successfully. 
May 14 23:43:27.645964 containerd[1505]: time="2025-05-14T23:43:27.645897029Z" level=info msg="shim disconnected" id=263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb namespace=k8s.io May 14 23:43:27.645964 containerd[1505]: time="2025-05-14T23:43:27.645935944Z" level=warning msg="cleaning up after shim disconnected" id=263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb namespace=k8s.io May 14 23:43:27.645964 containerd[1505]: time="2025-05-14T23:43:27.645944650Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:43:27.646647 containerd[1505]: time="2025-05-14T23:43:27.645906838Z" level=info msg="shim disconnected" id=6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79 namespace=k8s.io May 14 23:43:27.646647 containerd[1505]: time="2025-05-14T23:43:27.646405182Z" level=warning msg="cleaning up after shim disconnected" id=6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79 namespace=k8s.io May 14 23:43:27.646647 containerd[1505]: time="2025-05-14T23:43:27.646416063Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:43:27.664550 containerd[1505]: time="2025-05-14T23:43:27.664303878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" id:\"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" pid:2816 exit_status:137 exited_at:{seconds:1747266207 nanos:546900519}" May 14 23:43:27.666534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb-shm.mount: Deactivated successfully. May 14 23:43:27.666660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79-shm.mount: Deactivated successfully. 
May 14 23:43:27.673883 containerd[1505]: time="2025-05-14T23:43:27.673116443Z" level=info msg="received exit event sandbox_id:\"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" exit_status:137 exited_at:{seconds:1747266207 nanos:539024918}" May 14 23:43:27.673883 containerd[1505]: time="2025-05-14T23:43:27.673215542Z" level=info msg="received exit event sandbox_id:\"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" exit_status:137 exited_at:{seconds:1747266207 nanos:546900519}" May 14 23:43:27.677205 containerd[1505]: time="2025-05-14T23:43:27.677152912Z" level=info msg="TearDown network for sandbox \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" successfully" May 14 23:43:27.677205 containerd[1505]: time="2025-05-14T23:43:27.677185034Z" level=info msg="StopPodSandbox for \"6115252823a1c5d1ca1922b8404ab724754f08c93dde842d2c4f761e3689ca79\" returns successfully" May 14 23:43:27.678310 containerd[1505]: time="2025-05-14T23:43:27.678266725Z" level=info msg="TearDown network for sandbox \"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" successfully" May 14 23:43:27.678310 containerd[1505]: time="2025-05-14T23:43:27.678293266Z" level=info msg="StopPodSandbox for \"263ad7d88bbed77d3aa2ba20d7076a76606858a267df3ceceb1dac3c228bbfbb\" returns successfully" May 14 23:43:27.729864 kubelet[2627]: I0514 23:43:27.729788 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-etc-cni-netd\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.729864 kubelet[2627]: I0514 23:43:27.729844 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-kernel\") pod 
\"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.729864 kubelet[2627]: I0514 23:43:27.729873 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45122a94-3b40-4c2e-8c24-172ce26e8ea7-clustermesh-secrets\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730623 kubelet[2627]: I0514 23:43:27.729894 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cni-path\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730623 kubelet[2627]: I0514 23:43:27.729917 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a83919b3-026a-4937-9e0a-677048345683-cilium-config-path\") pod \"a83919b3-026a-4937-9e0a-677048345683\" (UID: \"a83919b3-026a-4937-9e0a-677048345683\") " May 14 23:43:27.730623 kubelet[2627]: I0514 23:43:27.729931 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-cgroup\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730623 kubelet[2627]: I0514 23:43:27.729922 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.730623 kubelet[2627]: I0514 23:43:27.729949 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q65zq\" (UniqueName: \"kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-kube-api-access-q65zq\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730623 kubelet[2627]: I0514 23:43:27.729966 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd98b\" (UniqueName: \"kubernetes.io/projected/a83919b3-026a-4937-9e0a-677048345683-kube-api-access-hd98b\") pod \"a83919b3-026a-4937-9e0a-677048345683\" (UID: \"a83919b3-026a-4937-9e0a-677048345683\") " May 14 23:43:27.730783 kubelet[2627]: I0514 23:43:27.729987 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-config-path\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730783 kubelet[2627]: I0514 23:43:27.730004 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-bpf-maps\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730783 kubelet[2627]: I0514 23:43:27.730017 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-run\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730783 kubelet[2627]: I0514 23:43:27.730031 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-xtables-lock\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730783 kubelet[2627]: I0514 23:43:27.730044 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hostproc\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730783 kubelet[2627]: I0514 23:43:27.730059 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hubble-tls\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730920 kubelet[2627]: I0514 23:43:27.730076 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-lib-modules\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730920 kubelet[2627]: I0514 23:43:27.730090 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-net\") pod \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\" (UID: \"45122a94-3b40-4c2e-8c24-172ce26e8ea7\") " May 14 23:43:27.730920 kubelet[2627]: I0514 23:43:27.730142 2627 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.730920 kubelet[2627]: I0514 23:43:27.730190 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.730920 kubelet[2627]: I0514 23:43:27.730219 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.731033 kubelet[2627]: I0514 23:43:27.730612 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.731033 kubelet[2627]: I0514 23:43:27.730829 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cni-path" (OuterVolumeSpecName: "cni-path") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.731033 kubelet[2627]: I0514 23:43:27.730865 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.733150 kubelet[2627]: I0514 23:43:27.733102 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.736451 kubelet[2627]: I0514 23:43:27.733180 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.736451 kubelet[2627]: I0514 23:43:27.733197 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hostproc" (OuterVolumeSpecName: "hostproc") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.736451 kubelet[2627]: I0514 23:43:27.734481 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 23:43:27.736451 kubelet[2627]: I0514 23:43:27.735511 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83919b3-026a-4937-9e0a-677048345683-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a83919b3-026a-4937-9e0a-677048345683" (UID: "a83919b3-026a-4937-9e0a-677048345683"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 14 23:43:27.736451 kubelet[2627]: I0514 23:43:27.736036 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45122a94-3b40-4c2e-8c24-172ce26e8ea7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 14 23:43:27.736695 kubelet[2627]: I0514 23:43:27.736071 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 14 23:43:27.736809 kubelet[2627]: I0514 23:43:27.736778 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-kube-api-access-q65zq" (OuterVolumeSpecName: "kube-api-access-q65zq") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "kube-api-access-q65zq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 23:43:27.737228 kubelet[2627]: I0514 23:43:27.737182 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45122a94-3b40-4c2e-8c24-172ce26e8ea7" (UID: "45122a94-3b40-4c2e-8c24-172ce26e8ea7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 23:43:27.738582 kubelet[2627]: I0514 23:43:27.738511 2627 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83919b3-026a-4937-9e0a-677048345683-kube-api-access-hd98b" (OuterVolumeSpecName: "kube-api-access-hd98b") pod "a83919b3-026a-4937-9e0a-677048345683" (UID: "a83919b3-026a-4937-9e0a-677048345683"). InnerVolumeSpecName "kube-api-access-hd98b". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 14 23:43:27.831241 kubelet[2627]: I0514 23:43:27.831205 2627 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831241 kubelet[2627]: I0514 23:43:27.831232 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a83919b3-026a-4937-9e0a-677048345683-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831241 kubelet[2627]: I0514 23:43:27.831241 2627 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831241 kubelet[2627]: I0514 23:43:27.831251 2627 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/45122a94-3b40-4c2e-8c24-172ce26e8ea7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831260 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831268 2627 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q65zq\" (UniqueName: \"kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-kube-api-access-q65zq\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831276 2627 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hd98b\" (UniqueName: \"kubernetes.io/projected/a83919b3-026a-4937-9e0a-677048345683-kube-api-access-hd98b\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831283 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831291 2627 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831298 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831306 2627 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hubble-tls\") on node 
\"localhost\" DevicePath \"\"" May 14 23:43:27.831494 kubelet[2627]: I0514 23:43:27.831313 2627 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831758 kubelet[2627]: I0514 23:43:27.831320 2627 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831758 kubelet[2627]: I0514 23:43:27.831328 2627 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.831758 kubelet[2627]: I0514 23:43:27.831335 2627 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45122a94-3b40-4c2e-8c24-172ce26e8ea7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 23:43:27.930248 systemd[1]: Removed slice kubepods-besteffort-poda83919b3_026a_4937_9e0a_677048345683.slice - libcontainer container kubepods-besteffort-poda83919b3_026a_4937_9e0a_677048345683.slice. May 14 23:43:27.930508 systemd[1]: kubepods-besteffort-poda83919b3_026a_4937_9e0a_677048345683.slice: Consumed 411ms CPU time, 30.7M memory peak, 1.2M read from disk, 4K written to disk. May 14 23:43:27.931845 systemd[1]: Removed slice kubepods-burstable-pod45122a94_3b40_4c2e_8c24_172ce26e8ea7.slice - libcontainer container kubepods-burstable-pod45122a94_3b40_4c2e_8c24_172ce26e8ea7.slice. May 14 23:43:27.931981 systemd[1]: kubepods-burstable-pod45122a94_3b40_4c2e_8c24_172ce26e8ea7.slice: Consumed 6.891s CPU time, 126.9M memory peak, 411K read from disk, 13.3M written to disk. 
May 14 23:43:28.151481 kubelet[2627]: I0514 23:43:28.151344 2627 scope.go:117] "RemoveContainer" containerID="b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251" May 14 23:43:28.154038 containerd[1505]: time="2025-05-14T23:43:28.153907340Z" level=info msg="RemoveContainer for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\"" May 14 23:43:28.207306 containerd[1505]: time="2025-05-14T23:43:28.207241888Z" level=info msg="RemoveContainer for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" returns successfully" May 14 23:43:28.207546 kubelet[2627]: I0514 23:43:28.207510 2627 scope.go:117] "RemoveContainer" containerID="b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251" May 14 23:43:28.207872 containerd[1505]: time="2025-05-14T23:43:28.207827139Z" level=error msg="ContainerStatus for \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\": not found" May 14 23:43:28.210151 kubelet[2627]: E0514 23:43:28.210106 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\": not found" containerID="b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251" May 14 23:43:28.210260 kubelet[2627]: I0514 23:43:28.210159 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251"} err="failed to get container status \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5c9e46b95d3588d50c81bb5b4ec1583b7d6371193b2606f887773ae9a483251\": not found" May 14 23:43:28.210313 
kubelet[2627]: I0514 23:43:28.210260 2627 scope.go:117] "RemoveContainer" containerID="9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61" May 14 23:43:28.211948 containerd[1505]: time="2025-05-14T23:43:28.211925143Z" level=info msg="RemoveContainer for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\"" May 14 23:43:28.260833 systemd[1]: var-lib-kubelet-pods-a83919b3\x2d026a\x2d4937\x2d9e0a\x2d677048345683-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhd98b.mount: Deactivated successfully. May 14 23:43:28.260972 systemd[1]: var-lib-kubelet-pods-45122a94\x2d3b40\x2d4c2e\x2d8c24\x2d172ce26e8ea7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 23:43:28.261057 systemd[1]: var-lib-kubelet-pods-45122a94\x2d3b40\x2d4c2e\x2d8c24\x2d172ce26e8ea7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 23:43:28.261161 systemd[1]: var-lib-kubelet-pods-45122a94\x2d3b40\x2d4c2e\x2d8c24\x2d172ce26e8ea7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq65zq.mount: Deactivated successfully. 
May 14 23:43:28.266188 containerd[1505]: time="2025-05-14T23:43:28.266140387Z" level=info msg="RemoveContainer for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" returns successfully" May 14 23:43:28.266507 kubelet[2627]: I0514 23:43:28.266471 2627 scope.go:117] "RemoveContainer" containerID="ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f" May 14 23:43:28.268054 containerd[1505]: time="2025-05-14T23:43:28.268021227Z" level=info msg="RemoveContainer for \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\"" May 14 23:43:28.295606 containerd[1505]: time="2025-05-14T23:43:28.295573429Z" level=info msg="RemoveContainer for \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" returns successfully" May 14 23:43:28.295767 kubelet[2627]: I0514 23:43:28.295737 2627 scope.go:117] "RemoveContainer" containerID="e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2" May 14 23:43:28.297931 containerd[1505]: time="2025-05-14T23:43:28.297885965Z" level=info msg="RemoveContainer for \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\"" May 14 23:43:28.350762 containerd[1505]: time="2025-05-14T23:43:28.350691200Z" level=info msg="RemoveContainer for \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" returns successfully" May 14 23:43:28.350921 kubelet[2627]: I0514 23:43:28.350854 2627 scope.go:117] "RemoveContainer" containerID="52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688" May 14 23:43:28.352401 containerd[1505]: time="2025-05-14T23:43:28.352290872Z" level=info msg="RemoveContainer for \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\"" May 14 23:43:28.404213 containerd[1505]: time="2025-05-14T23:43:28.404090042Z" level=info msg="RemoveContainer for \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" returns successfully" May 14 23:43:28.404316 kubelet[2627]: I0514 23:43:28.404296 2627 scope.go:117] 
"RemoveContainer" containerID="d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2" May 14 23:43:28.405822 containerd[1505]: time="2025-05-14T23:43:28.405788012Z" level=info msg="RemoveContainer for \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\"" May 14 23:43:28.439316 containerd[1505]: time="2025-05-14T23:43:28.439260535Z" level=info msg="RemoveContainer for \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" returns successfully" May 14 23:43:28.439467 kubelet[2627]: I0514 23:43:28.439436 2627 scope.go:117] "RemoveContainer" containerID="9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61" May 14 23:43:28.439657 containerd[1505]: time="2025-05-14T23:43:28.439618020Z" level=error msg="ContainerStatus for \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\": not found" May 14 23:43:28.439817 kubelet[2627]: E0514 23:43:28.439783 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\": not found" containerID="9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61" May 14 23:43:28.439866 kubelet[2627]: I0514 23:43:28.439822 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61"} err="failed to get container status \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\": rpc error: code = NotFound desc = an error occurred when try to find container \"9482d93707a5bc8c76ae88bceb753d082c6e0cda1b0e2c9642d1305cc6d0dc61\": not found" May 14 23:43:28.439866 kubelet[2627]: I0514 23:43:28.439852 2627 scope.go:117] "RemoveContainer" 
containerID="ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f" May 14 23:43:28.440057 containerd[1505]: time="2025-05-14T23:43:28.440024769Z" level=error msg="ContainerStatus for \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\": not found" May 14 23:43:28.440193 kubelet[2627]: E0514 23:43:28.440164 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\": not found" containerID="ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f" May 14 23:43:28.440235 kubelet[2627]: I0514 23:43:28.440194 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f"} err="failed to get container status \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec693cbb0c34ed8c7bc2a6ab130248a019e48cf74272d125bcf85247f6cd8b3f\": not found" May 14 23:43:28.440235 kubelet[2627]: I0514 23:43:28.440216 2627 scope.go:117] "RemoveContainer" containerID="e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2" May 14 23:43:28.440384 containerd[1505]: time="2025-05-14T23:43:28.440352536Z" level=error msg="ContainerStatus for \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\": not found" May 14 23:43:28.440477 kubelet[2627]: E0514 23:43:28.440454 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\": not found" containerID="e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2" May 14 23:43:28.440511 kubelet[2627]: I0514 23:43:28.440475 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2"} err="failed to get container status \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e51f36c60982fdcd110cd2718319aaeaef0c50bc8ac0f2a1a22bc985c12404b2\": not found" May 14 23:43:28.440511 kubelet[2627]: I0514 23:43:28.440490 2627 scope.go:117] "RemoveContainer" containerID="52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688" May 14 23:43:28.440644 containerd[1505]: time="2025-05-14T23:43:28.440610219Z" level=error msg="ContainerStatus for \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\": not found" May 14 23:43:28.440751 kubelet[2627]: E0514 23:43:28.440728 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\": not found" containerID="52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688" May 14 23:43:28.440833 kubelet[2627]: I0514 23:43:28.440752 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688"} err="failed to get container status \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"52cdc15264e1d3db990acdf22872dea7a5a8fad242ccbeb8bfc7ed96bea81688\": not found" May 14 23:43:28.440833 kubelet[2627]: I0514 23:43:28.440765 2627 scope.go:117] "RemoveContainer" containerID="d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2" May 14 23:43:28.440967 containerd[1505]: time="2025-05-14T23:43:28.440933919Z" level=error msg="ContainerStatus for \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\": not found" May 14 23:43:28.441072 kubelet[2627]: E0514 23:43:28.441052 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\": not found" containerID="d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2" May 14 23:43:28.441107 kubelet[2627]: I0514 23:43:28.441073 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2"} err="failed to get container status \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d60f9fe631cf2fa2d2455fa1ee1db67e7678fdfbc09fd55bac1efdff3be4aea2\": not found" May 14 23:43:29.052525 sshd[4272]: Connection closed by 10.0.0.1 port 54726 May 14 23:43:29.053023 sshd-session[4269]: pam_unix(sshd:session): session closed for user core May 14 23:43:29.062113 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:54726.service: Deactivated successfully. May 14 23:43:29.064270 systemd[1]: session-26.scope: Deactivated successfully. May 14 23:43:29.066023 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit. 
May 14 23:43:29.067474 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:57670.service - OpenSSH per-connection server daemon (10.0.0.1:57670). May 14 23:43:29.068529 systemd-logind[1493]: Removed session 26. May 14 23:43:29.122964 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 57670 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:29.124686 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:29.129310 systemd-logind[1493]: New session 27 of user core. May 14 23:43:29.139270 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 23:43:29.667178 sshd[4423]: Connection closed by 10.0.0.1 port 57670 May 14 23:43:29.667585 sshd-session[4420]: pam_unix(sshd:session): session closed for user core May 14 23:43:29.676136 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:57670.service: Deactivated successfully. May 14 23:43:29.678401 systemd[1]: session-27.scope: Deactivated successfully. May 14 23:43:29.679979 systemd-logind[1493]: Session 27 logged out. Waiting for processes to exit. May 14 23:43:29.681413 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:57678.service - OpenSSH per-connection server daemon (10.0.0.1:57678). May 14 23:43:29.682203 systemd-logind[1493]: Removed session 27. May 14 23:43:29.733962 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 57678 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:29.736422 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:29.741384 systemd-logind[1493]: New session 28 of user core. 
May 14 23:43:29.746074 kubelet[2627]: I0514 23:43:29.746039 2627 memory_manager.go:355] "RemoveStaleState removing state" podUID="a83919b3-026a-4937-9e0a-677048345683" containerName="cilium-operator" May 14 23:43:29.746074 kubelet[2627]: I0514 23:43:29.746073 2627 memory_manager.go:355] "RemoveStaleState removing state" podUID="45122a94-3b40-4c2e-8c24-172ce26e8ea7" containerName="cilium-agent" May 14 23:43:29.748293 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 23:43:29.756770 systemd[1]: Created slice kubepods-burstable-podc340a81d_8b04_4691_9d78_f3eb3737d216.slice - libcontainer container kubepods-burstable-podc340a81d_8b04_4691_9d78_f3eb3737d216.slice. May 14 23:43:29.805265 sshd[4437]: Connection closed by 10.0.0.1 port 57678 May 14 23:43:29.805628 sshd-session[4434]: pam_unix(sshd:session): session closed for user core May 14 23:43:29.820671 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:57678.service: Deactivated successfully. May 14 23:43:29.822711 systemd[1]: session-28.scope: Deactivated successfully. May 14 23:43:29.825068 systemd-logind[1493]: Session 28 logged out. Waiting for processes to exit. May 14 23:43:29.827876 systemd[1]: Started sshd@28-10.0.0.54:22-10.0.0.1:57682.service - OpenSSH per-connection server daemon (10.0.0.1:57682). May 14 23:43:29.829018 systemd-logind[1493]: Removed session 28. 
May 14 23:43:29.842461 kubelet[2627]: I0514 23:43:29.842366 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-host-proc-sys-net\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842461 kubelet[2627]: I0514 23:43:29.842430 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c340a81d-8b04-4691-9d78-f3eb3737d216-hubble-tls\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842461 kubelet[2627]: I0514 23:43:29.842447 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-host-proc-sys-kernel\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842461 kubelet[2627]: I0514 23:43:29.842462 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c340a81d-8b04-4691-9d78-f3eb3737d216-cilium-config-path\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842756 kubelet[2627]: I0514 23:43:29.842480 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-hostproc\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842756 kubelet[2627]: I0514 23:43:29.842533 2627 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-cilium-cgroup\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842756 kubelet[2627]: I0514 23:43:29.842558 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-cni-path\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842756 kubelet[2627]: I0514 23:43:29.842579 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-xtables-lock\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842756 kubelet[2627]: I0514 23:43:29.842663 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqgvj\" (UniqueName: \"kubernetes.io/projected/c340a81d-8b04-4691-9d78-f3eb3737d216-kube-api-access-hqgvj\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842875 kubelet[2627]: I0514 23:43:29.842759 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-bpf-maps\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842875 kubelet[2627]: I0514 23:43:29.842787 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-lib-modules\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842875 kubelet[2627]: I0514 23:43:29.842819 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c340a81d-8b04-4691-9d78-f3eb3737d216-cilium-ipsec-secrets\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842875 kubelet[2627]: I0514 23:43:29.842849 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-cilium-run\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842966 kubelet[2627]: I0514 23:43:29.842875 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c340a81d-8b04-4691-9d78-f3eb3737d216-etc-cni-netd\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.842966 kubelet[2627]: I0514 23:43:29.842899 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c340a81d-8b04-4691-9d78-f3eb3737d216-clustermesh-secrets\") pod \"cilium-5nzqm\" (UID: \"c340a81d-8b04-4691-9d78-f3eb3737d216\") " pod="kube-system/cilium-5nzqm" May 14 23:43:29.878722 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 57682 ssh2: RSA SHA256:lk2TkYBEL43KPVbrGyh3Ro8IB8NGN6uTNXzFyrYR01I May 14 23:43:29.880410 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:43:29.885526 systemd-logind[1493]: 
New session 29 of user core. May 14 23:43:29.894272 systemd[1]: Started session-29.scope - Session 29 of User core. May 14 23:43:29.929353 kubelet[2627]: I0514 23:43:29.928949 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45122a94-3b40-4c2e-8c24-172ce26e8ea7" path="/var/lib/kubelet/pods/45122a94-3b40-4c2e-8c24-172ce26e8ea7/volumes" May 14 23:43:29.930186 kubelet[2627]: I0514 23:43:29.929990 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83919b3-026a-4937-9e0a-677048345683" path="/var/lib/kubelet/pods/a83919b3-026a-4937-9e0a-677048345683/volumes" May 14 23:43:30.359869 kubelet[2627]: E0514 23:43:30.359837 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:30.360443 containerd[1505]: time="2025-05-14T23:43:30.360405165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nzqm,Uid:c340a81d-8b04-4691-9d78-f3eb3737d216,Namespace:kube-system,Attempt:0,}" May 14 23:43:30.567840 containerd[1505]: time="2025-05-14T23:43:30.567785754Z" level=info msg="connecting to shim ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a" address="unix:///run/containerd/s/2554cf2440c09a975e8b5f6629450322209f8ef7342093733ba4bf7242902252" namespace=k8s.io protocol=ttrpc version=3 May 14 23:43:30.590272 systemd[1]: Started cri-containerd-ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a.scope - libcontainer container ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a. 
May 14 23:43:30.630856 containerd[1505]: time="2025-05-14T23:43:30.630728544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nzqm,Uid:c340a81d-8b04-4691-9d78-f3eb3737d216,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\"" May 14 23:43:30.631409 kubelet[2627]: E0514 23:43:30.631369 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:30.632769 containerd[1505]: time="2025-05-14T23:43:30.632743837Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:43:30.790712 containerd[1505]: time="2025-05-14T23:43:30.790648066Z" level=info msg="Container 6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5: CDI devices from CRI Config.CDIDevices: []" May 14 23:43:30.992740 containerd[1505]: time="2025-05-14T23:43:30.992590628Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\"" May 14 23:43:30.994008 containerd[1505]: time="2025-05-14T23:43:30.993969103Z" level=info msg="StartContainer for \"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\"" May 14 23:43:30.994858 containerd[1505]: time="2025-05-14T23:43:30.994837234Z" level=info msg="connecting to shim 6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5" address="unix:///run/containerd/s/2554cf2440c09a975e8b5f6629450322209f8ef7342093733ba4bf7242902252" protocol=ttrpc version=3 May 14 23:43:31.019365 systemd[1]: Started cri-containerd-6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5.scope - libcontainer 
container 6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5. May 14 23:43:31.058064 systemd[1]: cri-containerd-6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5.scope: Deactivated successfully. May 14 23:43:31.059717 containerd[1505]: time="2025-05-14T23:43:31.059671446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\" id:\"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\" pid:4519 exited_at:{seconds:1747266211 nanos:59337688}" May 14 23:43:31.070258 containerd[1505]: time="2025-05-14T23:43:31.070210257Z" level=info msg="received exit event container_id:\"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\" id:\"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\" pid:4519 exited_at:{seconds:1747266211 nanos:59337688}" May 14 23:43:31.071083 containerd[1505]: time="2025-05-14T23:43:31.071062205Z" level=info msg="StartContainer for \"6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5\" returns successfully" May 14 23:43:31.090199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c5d51e11b11ed754ca04d0f507e9dbfc179a3b75dfd1eb65757df7d473f5bd5-rootfs.mount: Deactivated successfully. 
May 14 23:43:31.167314 kubelet[2627]: E0514 23:43:31.167265 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:31.970034 kubelet[2627]: E0514 23:43:31.969982 2627 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 23:43:32.170714 kubelet[2627]: E0514 23:43:32.170577 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:32.171978 containerd[1505]: time="2025-05-14T23:43:32.171940797Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:43:32.354178 containerd[1505]: time="2025-05-14T23:43:32.353175914Z" level=info msg="Container ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2: CDI devices from CRI Config.CDIDevices: []" May 14 23:43:32.490329 containerd[1505]: time="2025-05-14T23:43:32.490287934Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\"" May 14 23:43:32.491011 containerd[1505]: time="2025-05-14T23:43:32.490782068Z" level=info msg="StartContainer for \"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\"" May 14 23:43:32.491791 containerd[1505]: time="2025-05-14T23:43:32.491754365Z" level=info msg="connecting to shim ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2" 
address="unix:///run/containerd/s/2554cf2440c09a975e8b5f6629450322209f8ef7342093733ba4bf7242902252" protocol=ttrpc version=3 May 14 23:43:32.515294 systemd[1]: Started cri-containerd-ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2.scope - libcontainer container ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2. May 14 23:43:32.548676 systemd[1]: cri-containerd-ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2.scope: Deactivated successfully. May 14 23:43:32.549183 containerd[1505]: time="2025-05-14T23:43:32.549039716Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\" id:\"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\" pid:4563 exited_at:{seconds:1747266212 nanos:548816729}" May 14 23:43:32.626391 containerd[1505]: time="2025-05-14T23:43:32.626256210Z" level=info msg="received exit event container_id:\"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\" id:\"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\" pid:4563 exited_at:{seconds:1747266212 nanos:548816729}" May 14 23:43:32.627223 containerd[1505]: time="2025-05-14T23:43:32.627198569Z" level=info msg="StartContainer for \"ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2\" returns successfully" May 14 23:43:32.647054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba1f9094985bda3277d17623597b1824cd2be836970e96a28c3e15d4dfc4e0f2-rootfs.mount: Deactivated successfully. 
May 14 23:43:33.174339 kubelet[2627]: E0514 23:43:33.174303 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:33.176293 containerd[1505]: time="2025-05-14T23:43:33.176238373Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 23:43:33.288048 containerd[1505]: time="2025-05-14T23:43:33.287991306Z" level=info msg="Container b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848: CDI devices from CRI Config.CDIDevices: []" May 14 23:43:33.390728 containerd[1505]: time="2025-05-14T23:43:33.390332908Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\"" May 14 23:43:33.392730 containerd[1505]: time="2025-05-14T23:43:33.392664330Z" level=info msg="StartContainer for \"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\"" May 14 23:43:33.394138 containerd[1505]: time="2025-05-14T23:43:33.394088079Z" level=info msg="connecting to shim b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848" address="unix:///run/containerd/s/2554cf2440c09a975e8b5f6629450322209f8ef7342093733ba4bf7242902252" protocol=ttrpc version=3 May 14 23:43:33.416306 systemd[1]: Started cri-containerd-b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848.scope - libcontainer container b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848. May 14 23:43:33.455416 systemd[1]: cri-containerd-b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848.scope: Deactivated successfully. 
May 14 23:43:33.456309 containerd[1505]: time="2025-05-14T23:43:33.456269539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\" id:\"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\" pid:4608 exited_at:{seconds:1747266213 nanos:455965870}" May 14 23:43:33.543423 containerd[1505]: time="2025-05-14T23:43:33.543372746Z" level=info msg="received exit event container_id:\"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\" id:\"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\" pid:4608 exited_at:{seconds:1747266213 nanos:455965870}" May 14 23:43:33.552014 containerd[1505]: time="2025-05-14T23:43:33.551967526Z" level=info msg="StartContainer for \"b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848\" returns successfully" May 14 23:43:33.565043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b720e4935430f668d9459ace847095794b28f31e28b32f6ae6178e4590d34848-rootfs.mount: Deactivated successfully. 
May 14 23:43:33.576841 kubelet[2627]: I0514 23:43:33.576794 2627 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T23:43:33Z","lastTransitionTime":"2025-05-14T23:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 23:43:34.179509 kubelet[2627]: E0514 23:43:34.179461 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:34.181200 containerd[1505]: time="2025-05-14T23:43:34.181160384Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 23:43:34.335097 containerd[1505]: time="2025-05-14T23:43:34.335038516Z" level=info msg="Container 174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2: CDI devices from CRI Config.CDIDevices: []" May 14 23:43:34.446218 containerd[1505]: time="2025-05-14T23:43:34.446094377Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\"" May 14 23:43:34.446822 containerd[1505]: time="2025-05-14T23:43:34.446649135Z" level=info msg="StartContainer for \"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\"" May 14 23:43:34.447492 containerd[1505]: time="2025-05-14T23:43:34.447467457Z" level=info msg="connecting to shim 174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2" address="unix:///run/containerd/s/2554cf2440c09a975e8b5f6629450322209f8ef7342093733ba4bf7242902252" protocol=ttrpc version=3 May 14 
23:43:34.468260 systemd[1]: Started cri-containerd-174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2.scope - libcontainer container 174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2. May 14 23:43:34.495893 systemd[1]: cri-containerd-174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2.scope: Deactivated successfully. May 14 23:43:34.499721 containerd[1505]: time="2025-05-14T23:43:34.496371375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\" id:\"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\" pid:4647 exited_at:{seconds:1747266214 nanos:496001309}" May 14 23:43:34.524264 containerd[1505]: time="2025-05-14T23:43:34.524214296Z" level=info msg="received exit event container_id:\"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\" id:\"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\" pid:4647 exited_at:{seconds:1747266214 nanos:496001309}" May 14 23:43:34.532016 containerd[1505]: time="2025-05-14T23:43:34.531989236Z" level=info msg="StartContainer for \"174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2\" returns successfully" May 14 23:43:34.544318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174d274d52f7c0e473784f6b39573463742ee159f847559a4ceb8048a6db58a2-rootfs.mount: Deactivated successfully. 
May 14 23:43:35.184604 kubelet[2627]: E0514 23:43:35.184567 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:35.186657 containerd[1505]: time="2025-05-14T23:43:35.186471392Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 23:43:35.334467 containerd[1505]: time="2025-05-14T23:43:35.334419738Z" level=info msg="Container 02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75: CDI devices from CRI Config.CDIDevices: []" May 14 23:43:35.537208 containerd[1505]: time="2025-05-14T23:43:35.537070997Z" level=info msg="CreateContainer within sandbox \"ac53254b81667d033c59ce445033b9bab95c6a8dfc2e1db979d8919b3ca3874a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\"" May 14 23:43:35.537747 containerd[1505]: time="2025-05-14T23:43:35.537691121Z" level=info msg="StartContainer for \"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\"" May 14 23:43:35.538771 containerd[1505]: time="2025-05-14T23:43:35.538746274Z" level=info msg="connecting to shim 02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75" address="unix:///run/containerd/s/2554cf2440c09a975e8b5f6629450322209f8ef7342093733ba4bf7242902252" protocol=ttrpc version=3 May 14 23:43:35.562245 systemd[1]: Started cri-containerd-02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75.scope - libcontainer container 02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75. 
May 14 23:43:35.657952 containerd[1505]: time="2025-05-14T23:43:35.657887531Z" level=info msg="StartContainer for \"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" returns successfully" May 14 23:43:35.721977 containerd[1505]: time="2025-05-14T23:43:35.721918971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" id:\"c4f52572939b3082d5ea783d1af38ec3ab0f82cd5f6e936c8f7a7c5000ede5c1\" pid:4721 exited_at:{seconds:1747266215 nanos:721469935}" May 14 23:43:36.029189 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 14 23:43:36.191257 kubelet[2627]: E0514 23:43:36.191180 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:36.755051 containerd[1505]: time="2025-05-14T23:43:36.754987225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" id:\"9bd4d52b6b1c41451c17a3ca58bf8603ac649bc2c3014dc35668437819baa39d\" pid:4824 exit_status:1 exited_at:{seconds:1747266216 nanos:754540052}" May 14 23:43:36.922933 kubelet[2627]: E0514 23:43:36.922867 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:37.193000 kubelet[2627]: E0514 23:43:37.192960 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:38.872519 containerd[1505]: time="2025-05-14T23:43:38.872464378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" 
id:\"e78a146db65d344e819f64b123c6c70d7258204e1564ceb06692045115ad1f90\" pid:5173 exit_status:1 exited_at:{seconds:1747266218 nanos:872166902}" May 14 23:43:39.282932 systemd-networkd[1434]: lxc_health: Link UP May 14 23:43:39.286230 systemd-networkd[1434]: lxc_health: Gained carrier May 14 23:43:39.923376 kubelet[2627]: E0514 23:43:39.923330 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:40.362475 kubelet[2627]: E0514 23:43:40.362442 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:40.382784 kubelet[2627]: I0514 23:43:40.382458 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5nzqm" podStartSLOduration=11.382443675 podStartE2EDuration="11.382443675s" podCreationTimestamp="2025-05-14 23:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:43:36.290434015 +0000 UTC m=+94.458589960" watchObservedRunningTime="2025-05-14 23:43:40.382443675 +0000 UTC m=+98.550599620" May 14 23:43:40.423338 systemd-networkd[1434]: lxc_health: Gained IPv6LL May 14 23:43:40.963510 containerd[1505]: time="2025-05-14T23:43:40.963462738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" id:\"c1d3b18b40dece69703280111481e2c383ff20c740434f081664d66c0cc74bc3\" pid:5311 exited_at:{seconds:1747266220 nanos:962981051}" May 14 23:43:41.202148 kubelet[2627]: E0514 23:43:41.200055 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 23:43:43.050915 
containerd[1505]: time="2025-05-14T23:43:43.050842840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" id:\"66a7473d55989d6b0f64c1d758dd31e40a4b727c24a93e3908eff67eeb6bf1fa\" pid:5344 exited_at:{seconds:1747266223 nanos:50482996}" May 14 23:43:45.130026 containerd[1505]: time="2025-05-14T23:43:45.129970448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" id:\"bb92a8dc9004aa2c527a87037c46c38f86df712ec0bcef855dcd64cb5be6efba\" pid:5369 exited_at:{seconds:1747266225 nanos:129597559}" May 14 23:43:47.226903 containerd[1505]: time="2025-05-14T23:43:47.226828823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02d1f5f9b46cac15fd6d29e6945138915698f2e7db23fa7ea91dc88da177fd75\" id:\"9b533b88be25ca4d22535aa293628272adc106f6b2cc2fede7ef5e6612c47e90\" pid:5393 exited_at:{seconds:1747266227 nanos:226428944}" May 14 23:43:47.236356 sshd[4448]: Connection closed by 10.0.0.1 port 57682 May 14 23:43:47.236877 sshd-session[4443]: pam_unix(sshd:session): session closed for user core May 14 23:43:47.241517 systemd[1]: sshd@28-10.0.0.54:22-10.0.0.1:57682.service: Deactivated successfully. May 14 23:43:47.244072 systemd[1]: session-29.scope: Deactivated successfully. May 14 23:43:47.244869 systemd-logind[1493]: Session 29 logged out. Waiting for processes to exit. May 14 23:43:47.245723 systemd-logind[1493]: Removed session 29.