May 16 00:19:44.961731 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:19:35 -00 2025
May 16 00:19:44.961758 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 16 00:19:44.961774 kernel: BIOS-provided physical RAM map:
May 16 00:19:44.961782 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 00:19:44.961791 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 00:19:44.961799 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 00:19:44.961810 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 00:19:44.961819 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 00:19:44.961828 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 00:19:44.961837 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 00:19:44.961849 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
May 16 00:19:44.961858 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 00:19:44.961866 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 00:19:44.961876 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 00:19:44.961887 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 00:19:44.961897 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 00:19:44.961910 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 16 00:19:44.961919 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 16 00:19:44.961929 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 16 00:19:44.961938 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 16 00:19:44.961948 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 00:19:44.961957 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 00:19:44.961967 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 00:19:44.961976 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:19:44.961985 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 00:19:44.961996 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:19:44.962007 kernel: NX (Execute Disable) protection: active
May 16 00:19:44.962024 kernel: APIC: Static calls initialized
May 16 00:19:44.962034 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 16 00:19:44.962043 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
May 16 00:19:44.962052 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 16 00:19:44.962061 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
May 16 00:19:44.962070 kernel: extended physical RAM map:
May 16 00:19:44.962079 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 16 00:19:44.962088 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 16 00:19:44.962097 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 16 00:19:44.962106 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 16 00:19:44.962115 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 16 00:19:44.962127 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
May 16 00:19:44.962136 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
May 16 00:19:44.962149 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
May 16 00:19:44.962159 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
May 16 00:19:44.962168 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
May 16 00:19:44.962177 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
May 16 00:19:44.962187 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
May 16 00:19:44.962199 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
May 16 00:19:44.962209 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
May 16 00:19:44.962218 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
May 16 00:19:44.962228 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
May 16 00:19:44.962237 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 16 00:19:44.962247 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
May 16 00:19:44.962256 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
May 16 00:19:44.962291 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
May 16 00:19:44.962301 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
May 16 00:19:44.962315 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
May 16 00:19:44.962326 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 16 00:19:44.962336 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
May 16 00:19:44.962346 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 00:19:44.962357 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
May 16 00:19:44.962367 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 00:19:44.962377 kernel: efi: EFI v2.7 by EDK II
May 16 00:19:44.962388 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
May 16 00:19:44.962399 kernel: random: crng init done
May 16 00:19:44.962411 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
May 16 00:19:44.962421 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
May 16 00:19:44.962434 kernel: secureboot: Secure boot disabled
May 16 00:19:44.962444 kernel: SMBIOS 2.8 present.
May 16 00:19:44.962454 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
May 16 00:19:44.962464 kernel: Hypervisor detected: KVM
May 16 00:19:44.962474 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 00:19:44.962485 kernel: kvm-clock: using sched offset of 3033552627 cycles
May 16 00:19:44.962495 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 00:19:44.962506 kernel: tsc: Detected 2794.748 MHz processor
May 16 00:19:44.962517 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 00:19:44.962528 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 00:19:44.962538 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
May 16 00:19:44.962551 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 16 00:19:44.962562 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 00:19:44.962571 kernel: Using GB pages for direct mapping
May 16 00:19:44.962581 kernel: ACPI: Early table checksum verification disabled
May 16 00:19:44.962591 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 16 00:19:44.962602 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:19:44.962613 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:19:44.962623 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:19:44.962633 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 16 00:19:44.962647 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:19:44.962658 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:19:44.962668 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:19:44.962679 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:19:44.962689 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 16 00:19:44.962708 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 16 00:19:44.962719 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 16 00:19:44.962729 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 16 00:19:44.962743 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 16 00:19:44.962754 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 16 00:19:44.962764 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 16 00:19:44.962774 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 16 00:19:44.962785 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 16 00:19:44.962795 kernel: No NUMA configuration found
May 16 00:19:44.962805 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
May 16 00:19:44.962816 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
May 16 00:19:44.962826 kernel: Zone ranges:
May 16 00:19:44.962837 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 00:19:44.962850 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
May 16 00:19:44.962861 kernel: Normal empty
May 16 00:19:44.962871 kernel: Movable zone start for each node
May 16 00:19:44.962881 kernel: Early memory node ranges
May 16 00:19:44.962892 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 16 00:19:44.962903 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 16 00:19:44.962913 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 16 00:19:44.962923 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
May 16 00:19:44.962934 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
May 16 00:19:44.962947 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
May 16 00:19:44.962957 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
May 16 00:19:44.962967 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
May 16 00:19:44.962978 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
May 16 00:19:44.962989 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:19:44.962999 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 16 00:19:44.963017 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 16 00:19:44.963028 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 00:19:44.963037 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
May 16 00:19:44.963047 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
May 16 00:19:44.963056 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 16 00:19:44.963065 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
May 16 00:19:44.963077 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
May 16 00:19:44.963086 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 00:19:44.963095 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 00:19:44.963104 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 00:19:44.963113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 00:19:44.963125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 00:19:44.963134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 00:19:44.963143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 00:19:44.963152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 00:19:44.963161 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 00:19:44.963171 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 16 00:19:44.963180 kernel: TSC deadline timer available
May 16 00:19:44.963189 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 16 00:19:44.963198 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 00:19:44.963210 kernel: kvm-guest: KVM setup pv remote TLB flush
May 16 00:19:44.963219 kernel: kvm-guest: setup PV sched yield
May 16 00:19:44.963228 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
May 16 00:19:44.963238 kernel: Booting paravirtualized kernel on KVM
May 16 00:19:44.963247 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 00:19:44.963256 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 16 00:19:44.963291 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 16 00:19:44.963301 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 16 00:19:44.963310 kernel: pcpu-alloc: [0] 0 1 2 3
May 16 00:19:44.963321 kernel: kvm-guest: PV spinlocks enabled
May 16 00:19:44.963328 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 16 00:19:44.963337 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b
May 16 00:19:44.963345 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:19:44.963352 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:19:44.963359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:19:44.963367 kernel: Fallback order for Node 0: 0
May 16 00:19:44.963374 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
May 16 00:19:44.963381 kernel: Policy zone: DMA32
May 16 00:19:44.963391 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:19:44.963399 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2295K rwdata, 22752K rodata, 42988K init, 2204K bss, 175776K reserved, 0K cma-reserved)
May 16 00:19:44.963406 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:19:44.963413 kernel: ftrace: allocating 37950 entries in 149 pages
May 16 00:19:44.963421 kernel: ftrace: allocated 149 pages with 4 groups
May 16 00:19:44.963428 kernel: Dynamic Preempt: voluntary
May 16 00:19:44.963435 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:19:44.963444 kernel: rcu: RCU event tracing is enabled.
May 16 00:19:44.963451 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:19:44.963461 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:19:44.963469 kernel: Rude variant of Tasks RCU enabled.
May 16 00:19:44.963476 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:19:44.963484 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:19:44.963493 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:19:44.963501 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 16 00:19:44.963508 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 00:19:44.963517 kernel: Console: colour dummy device 80x25
May 16 00:19:44.963525 kernel: printk: console [ttyS0] enabled
May 16 00:19:44.963537 kernel: ACPI: Core revision 20230628
May 16 00:19:44.963545 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 16 00:19:44.963553 kernel: APIC: Switch to symmetric I/O mode setup
May 16 00:19:44.963560 kernel: x2apic enabled
May 16 00:19:44.963570 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 00:19:44.963581 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 16 00:19:44.963591 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 16 00:19:44.963602 kernel: kvm-guest: setup PV IPIs
May 16 00:19:44.963610 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 00:19:44.963620 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 00:19:44.963628 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 16 00:19:44.963635 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 16 00:19:44.963643 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 16 00:19:44.963650 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 16 00:19:44.963657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 00:19:44.963665 kernel: Spectre V2 : Mitigation: Retpolines
May 16 00:19:44.963672 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 00:19:44.963680 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 16 00:19:44.963690 kernel: RETBleed: Mitigation: untrained return thunk
May 16 00:19:44.963706 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 16 00:19:44.963714 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 16 00:19:44.963722 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 16 00:19:44.963731 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 16 00:19:44.963738 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 16 00:19:44.963746 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 16 00:19:44.963754 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 16 00:19:44.963764 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 16 00:19:44.963771 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 16 00:19:44.963778 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 16 00:19:44.963786 kernel: Freeing SMP alternatives memory: 32K
May 16 00:19:44.963793 kernel: pid_max: default: 32768 minimum: 301
May 16 00:19:44.963801 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 00:19:44.963808 kernel: landlock: Up and running.
May 16 00:19:44.963815 kernel: SELinux: Initializing.
May 16 00:19:44.963823 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:19:44.963833 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:19:44.963840 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 16 00:19:44.963848 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:19:44.963855 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:19:44.963863 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 00:19:44.963872 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 16 00:19:44.963881 kernel: ... version:                0
May 16 00:19:44.963892 kernel: ... bit width:              48
May 16 00:19:44.963900 kernel: ... generic registers:      6
May 16 00:19:44.963912 kernel: ... value mask:             0000ffffffffffff
May 16 00:19:44.963922 kernel: ... max period:             00007fffffffffff
May 16 00:19:44.963933 kernel: ... fixed-purpose events:   0
May 16 00:19:44.963942 kernel: ... event mask:             000000000000003f
May 16 00:19:44.963950 kernel: signal: max sigframe size: 1776
May 16 00:19:44.963957 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:19:44.963965 kernel: rcu: Max phase no-delay instances is 400.
May 16 00:19:44.963972 kernel: smp: Bringing up secondary CPUs ...
May 16 00:19:44.963979 kernel: smpboot: x86: Booting SMP configuration:
May 16 00:19:44.963989 kernel: .... node #0, CPUs: #1 #2 #3
May 16 00:19:44.963996 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:19:44.964004 kernel: smpboot: Max logical packages: 1
May 16 00:19:44.964011 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 16 00:19:44.964018 kernel: devtmpfs: initialized
May 16 00:19:44.964025 kernel: x86/mm: Memory block size: 128MB
May 16 00:19:44.964033 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 16 00:19:44.964040 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 16 00:19:44.964048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
May 16 00:19:44.964058 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 16 00:19:44.964065 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
May 16 00:19:44.964073 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 16 00:19:44.964080 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:19:44.964088 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:19:44.964095 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:19:44.964128 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:19:44.964136 kernel: audit: initializing netlink subsys (disabled)
May 16 00:19:44.964143 kernel: audit: type=2000 audit(1747354784.155:1): state=initialized audit_enabled=0 res=1
May 16 00:19:44.964153 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:19:44.964160 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 00:19:44.964168 kernel: cpuidle: using governor menu
May 16 00:19:44.964176 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:19:44.964183 kernel: dca service started, version 1.12.1
May 16 00:19:44.964191 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
May 16 00:19:44.964198 kernel: PCI: Using configuration type 1 for base access
May 16 00:19:44.964205 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 00:19:44.964213 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:19:44.964223 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 16 00:19:44.964230 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:19:44.964237 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 00:19:44.964245 kernel: ACPI: Added _OSI(Module Device)
May 16 00:19:44.964252 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:19:44.964284 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:19:44.964292 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:19:44.964299 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:19:44.964306 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 00:19:44.964317 kernel: ACPI: Interpreter enabled
May 16 00:19:44.964324 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 00:19:44.964331 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 00:19:44.964339 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 00:19:44.964346 kernel: PCI: Using E820 reservations for host bridge windows
May 16 00:19:44.964354 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 16 00:19:44.964361 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:19:44.964546 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:19:44.964687 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 16 00:19:44.964821 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 16 00:19:44.964831 kernel: PCI host bridge to bus 0000:00
May 16 00:19:44.964955 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 00:19:44.965069 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 00:19:44.965205 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 00:19:44.965335 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
May 16 00:19:44.965455 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
May 16 00:19:44.965564 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
May 16 00:19:44.965717 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:19:44.965859 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 16 00:19:44.966020 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 16 00:19:44.966149 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 16 00:19:44.966291 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 16 00:19:44.966445 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 16 00:19:44.966570 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 16 00:19:44.966750 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 00:19:44.966915 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:19:44.967065 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 16 00:19:44.967234 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 16 00:19:44.967409 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
May 16 00:19:44.967577 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 16 00:19:44.967744 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 16 00:19:44.967895 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 16 00:19:44.968045 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
May 16 00:19:44.968206 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 16 00:19:44.968378 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 16 00:19:44.968533 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 16 00:19:44.968682 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
May 16 00:19:44.968845 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 16 00:19:44.968994 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 16 00:19:44.969115 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 16 00:19:44.969242 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 16 00:19:44.969437 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 16 00:19:44.969555 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 16 00:19:44.969679 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 16 00:19:44.969810 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 16 00:19:44.969820 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 00:19:44.969828 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 00:19:44.969836 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 00:19:44.969843 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 00:19:44.969855 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 16 00:19:44.969862 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 16 00:19:44.969870 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 16 00:19:44.969877 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 16 00:19:44.969885 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 16 00:19:44.969892 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 16 00:19:44.969900 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 16 00:19:44.969907 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 16 00:19:44.969915 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 16 00:19:44.969925 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 16 00:19:44.969933 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 16 00:19:44.969940 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 16 00:19:44.969948 kernel: iommu: Default domain type: Translated
May 16 00:19:44.969955 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 00:19:44.969963 kernel: efivars: Registered efivars operations
May 16 00:19:44.969970 kernel: PCI: Using ACPI for IRQ routing
May 16 00:19:44.969978 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 00:19:44.969985 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 16 00:19:44.969996 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
May 16 00:19:44.970003 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
May 16 00:19:44.970010 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
May 16 00:19:44.970018 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
May 16 00:19:44.970026 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
May 16 00:19:44.970035 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
May 16 00:19:44.970051 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
May 16 00:19:44.970209 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 16 00:19:44.970376 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 16 00:19:44.970532 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 00:19:44.970547 kernel: vgaarb: loaded
May 16 00:19:44.970557 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 16 00:19:44.970567 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 16 00:19:44.970577 kernel: clocksource: Switched to clocksource kvm-clock
May 16 00:19:44.970587 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:19:44.970597 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:19:44.970607 kernel: pnp: PnP ACPI init
May 16 00:19:44.970786 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
May 16 00:19:44.970801 kernel: pnp: PnP ACPI: found 6 devices
May 16 00:19:44.970812 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 00:19:44.970822 kernel: NET: Registered PF_INET protocol family
May 16 00:19:44.970832 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:19:44.970865 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:19:44.970878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:19:44.970889 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:19:44.970902 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 00:19:44.970913 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:19:44.970923 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:19:44.970934 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:19:44.970944 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:19:44.970954 kernel: NET: Registered PF_XDP protocol family
May 16 00:19:44.971082 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 16 00:19:44.971201 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 16 00:19:44.971335 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 00:19:44.971444 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 00:19:44.971561 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 00:19:44.971686 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
May 16 00:19:44.971836 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
May 16 00:19:44.971970 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
May 16 00:19:44.971981 kernel: PCI: CLS 0 bytes, default 64
May 16 00:19:44.971989 kernel: Initialise system trusted keyrings
May 16 00:19:44.971997 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:19:44.972009 kernel: Key type asymmetric registered
May 16 00:19:44.972017 kernel: Asymmetric key parser 'x509' registered
May 16 00:19:44.972025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 00:19:44.972033 kernel: io scheduler mq-deadline registered
May 16 00:19:44.972040 kernel: io scheduler kyber registered
May 16 00:19:44.972048 kernel: io scheduler bfq registered
May 16 00:19:44.972056 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 00:19:44.972064 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 16 00:19:44.972072 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 16 00:19:44.972083 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 16 00:19:44.972093 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:19:44.972101 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 00:19:44.972109 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 00:19:44.972117 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 00:19:44.972125 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 00:19:44.972260 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 00:19:44.972293 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 00:19:44.972409 kernel: rtc_cmos 00:04: registered as rtc0
May 16 00:19:44.972523 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T00:19:44 UTC (1747354784)
May 16 00:19:44.972636 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 00:19:44.972646 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 00:19:44.972654 kernel: efifb: probing for efifb
May 16 00:19:44.972665 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 16 00:19:44.972673 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 16 00:19:44.972681 kernel: efifb: scrolling: redraw
May 16 00:19:44.972690 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 16 00:19:44.972712 kernel: Console: switching to colour frame buffer device 160x50
May 16 00:19:44.972723 kernel: fb0: EFI VGA frame buffer device
May 16 00:19:44.972734 kernel: pstore: Using crash dump compression: deflate
May 16 00:19:44.972744 kernel: pstore: Registered efi_pstore as persistent store backend
May 16 00:19:44.972755 kernel: NET: Registered PF_INET6 protocol family
May 16 00:19:44.972772 kernel: Segment Routing with IPv6
May 16 00:19:44.972788 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:19:44.972797 kernel: NET: Registered PF_PACKET protocol family
May 16 00:19:44.972805 kernel: Key type dns_resolver registered
May 16 00:19:44.972812 kernel: IPI shorthand broadcast: enabled
May 16 00:19:44.972821 kernel: sched_clock: Marking stable (691002986, 162395418)->(874821234, -21422830)
May 16 00:19:44.972832 kernel: registered taskstats version 1
May 16 00:19:44.972842 kernel: Loading compiled-in X.509 certificates
May 16 00:19:44.972853 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 563478d245b598189519397611f5bddee97f3fc1'
May 16 00:19:44.972863 kernel: Key type .fscrypt registered
May 16 00:19:44.972877 kernel: Key type fscrypt-provisioning registered
May 16 00:19:44.972893 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:19:44.972901 kernel: ima: Allocated hash algorithm: sha1 May 16 00:19:44.972909 kernel: ima: No architecture policies found May 16 00:19:44.972916 kernel: clk: Disabling unused clocks May 16 00:19:44.972924 kernel: Freeing unused kernel image (initmem) memory: 42988K May 16 00:19:44.972932 kernel: Write protecting the kernel read-only data: 36864k May 16 00:19:44.972940 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K May 16 00:19:44.972950 kernel: Run /init as init process May 16 00:19:44.972958 kernel: with arguments: May 16 00:19:44.972966 kernel: /init May 16 00:19:44.972973 kernel: with environment: May 16 00:19:44.972981 kernel: HOME=/ May 16 00:19:44.972988 kernel: TERM=linux May 16 00:19:44.972996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:19:44.973006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 16 00:19:44.973018 systemd[1]: Detected virtualization kvm. May 16 00:19:44.973027 systemd[1]: Detected architecture x86-64. May 16 00:19:44.973035 systemd[1]: Running in initrd. May 16 00:19:44.973043 systemd[1]: No hostname configured, using default hostname. May 16 00:19:44.973050 systemd[1]: Hostname set to . May 16 00:19:44.973059 systemd[1]: Initializing machine ID from VM UUID. May 16 00:19:44.973067 systemd[1]: Queued start job for default target initrd.target. May 16 00:19:44.973075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 00:19:44.973086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
May 16 00:19:44.973095 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 16 00:19:44.973103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 00:19:44.973112 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 16 00:19:44.973121 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 16 00:19:44.973130 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 16 00:19:44.973139 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 16 00:19:44.973150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 00:19:44.973158 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 00:19:44.973166 systemd[1]: Reached target paths.target - Path Units. May 16 00:19:44.973174 systemd[1]: Reached target slices.target - Slice Units. May 16 00:19:44.973183 systemd[1]: Reached target swap.target - Swaps. May 16 00:19:44.973199 systemd[1]: Reached target timers.target - Timer Units. May 16 00:19:44.973214 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 16 00:19:44.973225 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 00:19:44.973242 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 16 00:19:44.973261 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 16 00:19:44.973290 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 00:19:44.973298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 00:19:44.973307 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
May 16 00:19:44.973315 systemd[1]: Reached target sockets.target - Socket Units. May 16 00:19:44.973323 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 16 00:19:44.973331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 00:19:44.973340 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 16 00:19:44.973355 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:19:44.973366 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 00:19:44.973377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 00:19:44.973388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:44.973400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 16 00:19:44.973411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 00:19:44.973422 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:19:44.973438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 00:19:44.973474 systemd-journald[192]: Collecting audit messages is disabled. May 16 00:19:44.973504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 00:19:44.973516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 00:19:44.973527 systemd-journald[192]: Journal started May 16 00:19:44.973548 systemd-journald[192]: Runtime Journal (/run/log/journal/4076125a1b9b4908904cd81d8611fae6) is 6.0M, max 48.3M, 42.2M free. May 16 00:19:44.975301 systemd[1]: Started systemd-journald.service - Journal Service. May 16 00:19:44.976852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 00:19:44.981055 systemd-modules-load[195]: Inserted module 'overlay' May 16 00:19:44.984457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:44.988179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 00:19:44.988601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 00:19:45.003839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 00:19:45.005646 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:45.012486 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 16 00:19:45.027745 dracut-cmdline[224]: dracut-dracut-053 May 16 00:19:45.032118 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3eac1ac065bd62ee8513964addbc130593421d288f32dda9b1fb7c667f95e96b May 16 00:19:45.039008 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:19:45.042627 systemd-modules-load[195]: Inserted module 'br_netfilter' May 16 00:19:45.043872 kernel: Bridge firewalling registered May 16 00:19:45.045731 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 00:19:45.052514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:19:45.066636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:19:45.074434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 16 00:19:45.107978 systemd-resolved[271]: Positive Trust Anchors: May 16 00:19:45.108002 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:19:45.108033 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 00:19:45.110595 systemd-resolved[271]: Defaulting to hostname 'linux'. May 16 00:19:45.111685 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 00:19:45.120436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 00:19:45.137302 kernel: SCSI subsystem initialized May 16 00:19:45.146322 kernel: Loading iSCSI transport class v2.0-870. May 16 00:19:45.157329 kernel: iscsi: registered transport (tcp) May 16 00:19:45.184614 kernel: iscsi: registered transport (qla4xxx) May 16 00:19:45.184727 kernel: QLogic iSCSI HBA Driver May 16 00:19:45.250090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 16 00:19:45.261490 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 16 00:19:45.291349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 16 00:19:45.291431 kernel: device-mapper: uevent: version 1.0.3 May 16 00:19:45.292491 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 16 00:19:45.338322 kernel: raid6: avx2x4 gen() 28020 MB/s May 16 00:19:45.355340 kernel: raid6: avx2x2 gen() 23274 MB/s May 16 00:19:45.372446 kernel: raid6: avx2x1 gen() 22677 MB/s May 16 00:19:45.372528 kernel: raid6: using algorithm avx2x4 gen() 28020 MB/s May 16 00:19:45.390411 kernel: raid6: .... xor() 7415 MB/s, rmw enabled May 16 00:19:45.390512 kernel: raid6: using avx2x2 recovery algorithm May 16 00:19:45.411319 kernel: xor: automatically using best checksumming function avx May 16 00:19:45.573314 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 00:19:45.590698 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 00:19:45.604479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 00:19:45.622725 systemd-udevd[414]: Using default interface naming scheme 'v255'. May 16 00:19:45.629085 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 00:19:45.641609 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 00:19:45.659629 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation May 16 00:19:45.698725 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 00:19:45.711556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 00:19:45.788175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 00:19:45.804660 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 00:19:45.817581 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 00:19:45.822415 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
May 16 00:19:45.827496 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 16 00:19:45.826074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 00:19:45.829996 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 00:19:45.838424 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:19:45.840957 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:19:45.842443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 00:19:45.850148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:19:45.863459 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:19:45.863502 kernel: GPT:9289727 != 19775487 May 16 00:19:45.863521 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:19:45.863555 kernel: GPT:9289727 != 19775487 May 16 00:19:45.863572 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:19:45.863589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:45.850275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 00:19:45.850637 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:45.870316 kernel: libata version 3.00 loaded. May 16 00:19:45.851000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:45.851139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:45.851706 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:45.854991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:45.864756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 00:19:45.883540 kernel: AVX2 version of gcm_enc/dec engaged. 
May 16 00:19:45.883568 kernel: AES CTR mode by8 optimization enabled May 16 00:19:45.883579 kernel: ahci 0000:00:1f.2: version 3.0 May 16 00:19:45.884380 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 16 00:19:45.871505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:19:45.888570 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 16 00:19:45.888768 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 16 00:19:45.871646 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:45.890706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 00:19:45.895627 kernel: scsi host0: ahci May 16 00:19:45.897310 kernel: scsi host1: ahci May 16 00:19:45.903292 kernel: scsi host2: ahci May 16 00:19:45.905358 kernel: scsi host3: ahci May 16 00:19:45.907289 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476) May 16 00:19:45.909289 kernel: BTRFS: device fsid da1480a3-a7d8-4e12-bbe1-1257540eb9ae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472) May 16 00:19:45.910299 kernel: scsi host4: ahci May 16 00:19:45.914482 kernel: scsi host5: ahci May 16 00:19:45.914724 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 16 00:19:45.914742 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 16 00:19:45.914757 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 16 00:19:45.914771 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 16 00:19:45.914786 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 16 00:19:45.914807 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 16 00:19:45.918798 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
May 16 00:19:45.930324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 00:19:45.930696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 00:19:45.938417 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 16 00:19:45.938488 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 00:19:45.947065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 00:19:45.962442 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 00:19:45.964368 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 16 00:19:45.976276 disk-uuid[559]: Primary Header is updated. May 16 00:19:45.976276 disk-uuid[559]: Secondary Entries is updated. May 16 00:19:45.976276 disk-uuid[559]: Secondary Header is updated. May 16 00:19:45.981293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:45.985343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:45.987811 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 16 00:19:46.223295 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.223381 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.224281 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.225293 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 16 00:19:46.226295 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.226317 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 16 00:19:46.227365 kernel: ata3.00: applying bridge limits May 16 00:19:46.228314 kernel: ata3.00: configured for UDMA/100 May 16 00:19:46.228388 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 16 00:19:46.232297 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 16 00:19:46.274315 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 16 00:19:46.274673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 16 00:19:46.288341 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 16 00:19:47.015290 disk-uuid[562]: The operation has completed successfully. May 16 00:19:47.016796 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:19:47.046321 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:19:47.046480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 00:19:47.079444 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 00:19:47.083979 sh[595]: Success May 16 00:19:47.098296 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 16 00:19:47.131765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 00:19:47.139814 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 16 00:19:47.142709 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 16 00:19:47.163290 kernel: BTRFS info (device dm-0): first mount of filesystem da1480a3-a7d8-4e12-bbe1-1257540eb9ae May 16 00:19:47.163334 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:47.165187 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 16 00:19:47.165199 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 16 00:19:47.165945 kernel: BTRFS info (device dm-0): using free space tree May 16 00:19:47.171420 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 00:19:47.172209 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 00:19:47.186549 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 00:19:47.188840 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 00:19:47.196469 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:47.196496 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:47.196507 kernel: BTRFS info (device vda6): using free space tree May 16 00:19:47.199293 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:19:47.207916 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:19:47.212405 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:47.293704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 00:19:47.301426 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 16 00:19:47.322970 systemd-networkd[773]: lo: Link UP May 16 00:19:47.322981 systemd-networkd[773]: lo: Gained carrier May 16 00:19:47.324556 systemd-networkd[773]: Enumeration completed May 16 00:19:47.324646 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 00:19:47.324959 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:19:47.324963 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:19:47.342872 systemd-networkd[773]: eth0: Link UP May 16 00:19:47.342876 systemd-networkd[773]: eth0: Gained carrier May 16 00:19:47.342883 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 00:19:47.344456 systemd[1]: Reached target network.target - Network. May 16 00:19:47.369336 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:19:47.455871 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 00:19:47.464457 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 00:19:47.520032 ignition[778]: Ignition 2.20.0 May 16 00:19:47.520046 ignition[778]: Stage: fetch-offline May 16 00:19:47.520093 ignition[778]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:47.520103 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:47.520234 ignition[778]: parsed url from cmdline: "" May 16 00:19:47.520240 ignition[778]: no config URL provided May 16 00:19:47.520247 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:19:47.520279 ignition[778]: no config at "/usr/lib/ignition/user.ign" May 16 00:19:47.520318 ignition[778]: op(1): [started] loading QEMU firmware config module May 16 00:19:47.520325 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:19:47.528904 ignition[778]: op(1): [finished] loading QEMU firmware config module May 16 00:19:47.567360 ignition[778]: parsing config with SHA512: b5e7d3f316463a4ba8ca8e1301cf6ef422b4291fc2026f41357aebb9276d8637e71e4169881ae0c752bd023b8dcacf0fa2d02fca272f4f22aeef3e0bf31dbd39 May 16 00:19:47.572234 unknown[778]: fetched base config from "system" May 16 00:19:47.572246 unknown[778]: fetched user config from "qemu" May 16 00:19:47.572597 ignition[778]: fetch-offline: fetch-offline passed May 16 00:19:47.573567 systemd-resolved[271]: Detected conflict on linux IN A 10.0.0.14 May 16 00:19:47.572677 ignition[778]: Ignition finished successfully May 16 00:19:47.573577 systemd-resolved[271]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. May 16 00:19:47.575375 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 00:19:47.576824 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:19:47.601131 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 16 00:19:47.617113 ignition[788]: Ignition 2.20.0 May 16 00:19:47.617129 ignition[788]: Stage: kargs May 16 00:19:47.617368 ignition[788]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:47.617385 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:47.618538 ignition[788]: kargs: kargs passed May 16 00:19:47.618595 ignition[788]: Ignition finished successfully May 16 00:19:47.622471 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 00:19:47.636504 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 00:19:47.648428 ignition[797]: Ignition 2.20.0 May 16 00:19:47.648445 ignition[797]: Stage: disks May 16 00:19:47.652607 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 00:19:47.648671 ignition[797]: no configs at "/usr/lib/ignition/base.d" May 16 00:19:47.656114 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 00:19:47.648688 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:47.657573 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 00:19:47.649900 ignition[797]: disks: disks passed May 16 00:19:47.659774 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 00:19:47.649957 ignition[797]: Ignition finished successfully May 16 00:19:47.660857 systemd[1]: Reached target sysinit.target - System Initialization. May 16 00:19:47.662757 systemd[1]: Reached target basic.target - Basic System. May 16 00:19:47.674527 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 00:19:47.724772 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 16 00:19:48.040571 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 00:19:48.062485 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 16 00:19:48.177285 kernel: EXT4-fs (vda9): mounted filesystem 13a141f5-2ff0-46d9-bee3-974c86536128 r/w with ordered data mode. Quota mode: none. May 16 00:19:48.177797 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 00:19:48.179363 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 00:19:48.195377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 00:19:48.197584 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 00:19:48.198969 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 00:19:48.199010 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:19:48.212393 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) May 16 00:19:48.212420 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:48.212434 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 16 00:19:48.212447 kernel: BTRFS info (device vda6): using free space tree May 16 00:19:48.212460 kernel: BTRFS info (device vda6): auto enabling async discard May 16 00:19:48.199031 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 00:19:48.205619 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 00:19:48.213511 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 00:19:48.239751 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 16 00:19:48.276693 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:19:48.281808 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory May 16 00:19:48.286801 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:19:48.291311 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:19:48.378739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 00:19:48.416406 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 00:19:48.419689 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 00:19:48.425963 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 00:19:48.427308 kernel: BTRFS info (device vda6): last unmount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5 May 16 00:19:48.461956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 00:19:48.563526 ignition[934]: INFO : Ignition 2.20.0 May 16 00:19:48.563526 ignition[934]: INFO : Stage: mount May 16 00:19:48.565442 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:19:48.565442 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:19:48.568221 ignition[934]: INFO : mount: mount passed May 16 00:19:48.569027 ignition[934]: INFO : Ignition finished successfully May 16 00:19:48.572010 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 00:19:48.585384 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 00:19:49.189701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 00:19:49.199331 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
May 16 00:19:49.202373 kernel: BTRFS info (device vda6): first mount of filesystem 3387f2c6-46d4-43a5-af69-bf48427d85c5
May 16 00:19:49.202414 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 00:19:49.202429 kernel: BTRFS info (device vda6): using free space tree
May 16 00:19:49.206286 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 00:19:49.207794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 00:19:49.236777 ignition[961]: INFO : Ignition 2.20.0
May 16 00:19:49.236777 ignition[961]: INFO : Stage: files
May 16 00:19:49.238820 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:19:49.238820 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:19:49.241868 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
May 16 00:19:49.243407 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 00:19:49.243407 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 00:19:49.246855 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 00:19:49.248641 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 00:19:49.250629 unknown[961]: wrote ssh authorized keys file for user: core
May 16 00:19:49.252079 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 00:19:49.253728 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 16 00:19:49.253728 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 16 00:19:49.280471 systemd-networkd[773]: eth0: Gained IPv6LL
May 16 00:19:49.326658 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 00:19:49.454360 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 16 00:19:49.454360 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 00:19:49.458785 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 16 00:19:49.956820 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 00:19:50.033023 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 00:19:50.033023 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 00:19:50.042234 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 16 00:19:50.556005 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 00:19:51.316590 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 16 00:19:51.316590 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 00:19:51.543333 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 16 00:19:51.545876 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 16 00:19:51.583327 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:19:51.590499 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:19:51.592393 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 00:19:51.592393 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 16 00:19:51.592393 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 16 00:19:51.592393 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:19:51.592393 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:19:51.592393 ignition[961]: INFO : files: files passed
May 16 00:19:51.592393 ignition[961]: INFO : Ignition finished successfully
May 16 00:19:51.594288 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 00:19:51.616519 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 00:19:51.619152 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 00:19:51.621385 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 00:19:51.621542 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 00:19:51.630846 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 00:19:51.633711 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:19:51.635587 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:19:51.637201 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:19:51.636579 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 00:19:51.639233 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 00:19:51.656434 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 00:19:51.682046 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 00:19:51.682214 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 00:19:51.696625 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 00:19:51.696865 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 00:19:51.697571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 00:19:51.698592 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 00:19:51.722511 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 00:19:51.732591 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 00:19:51.743354 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 00:19:51.744815 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:19:51.747427 systemd[1]: Stopped target timers.target - Timer Units.
May 16 00:19:51.747616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 00:19:51.747773 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 00:19:51.748659 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 00:19:51.749071 systemd[1]: Stopped target basic.target - Basic System.
May 16 00:19:51.749535 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 00:19:51.749917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 00:19:51.750372 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 00:19:51.750772 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 00:19:51.751182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 00:19:51.751628 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 00:19:51.752018 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 00:19:51.752618 systemd[1]: Stopped target swap.target - Swaps.
May 16 00:19:51.752982 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 00:19:51.753149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 00:19:51.753778 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 00:19:51.853759 ignition[1016]: INFO : Ignition 2.20.0
May 16 00:19:51.853759 ignition[1016]: INFO : Stage: umount
May 16 00:19:51.853759 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:19:51.853759 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:19:51.853759 ignition[1016]: INFO : umount: umount passed
May 16 00:19:51.853759 ignition[1016]: INFO : Ignition finished successfully
May 16 00:19:51.754139 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:19:51.754697 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 00:19:51.754811 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:19:51.755078 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 00:19:51.755182 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 00:19:51.755838 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 00:19:51.755970 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 00:19:51.756748 systemd[1]: Stopped target paths.target - Path Units.
May 16 00:19:51.757014 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 00:19:51.758335 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:19:51.758879 systemd[1]: Stopped target slices.target - Slice Units.
May 16 00:19:51.759232 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 00:19:51.759641 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 00:19:51.759754 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 00:19:51.760235 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 00:19:51.760365 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 00:19:51.760864 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 00:19:51.761004 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 00:19:51.761630 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 00:19:51.761759 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 00:19:51.801487 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 00:19:51.809619 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 00:19:51.812724 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 00:19:51.815350 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:19:51.816876 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 00:19:51.817252 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 00:19:51.823714 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 00:19:51.823875 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 00:19:51.846944 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 00:19:51.847104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 00:19:51.852561 systemd[1]: Stopped target network.target - Network.
May 16 00:19:51.854011 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 00:19:51.854107 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 00:19:51.860202 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 00:19:51.860316 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 00:19:51.862609 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 00:19:51.862675 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 00:19:51.863964 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 00:19:51.864028 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 00:19:51.870801 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 00:19:51.876297 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 00:19:51.881787 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 00:19:51.882643 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 00:19:51.882781 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 00:19:51.887346 systemd-networkd[773]: eth0: DHCPv6 lease lost
May 16 00:19:51.888338 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 00:19:51.888408 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 00:19:51.900361 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 00:19:51.900540 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 00:19:51.904501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 00:19:51.904599 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:19:51.950388 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 00:19:51.951936 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 00:19:51.952023 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 00:19:51.954624 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:19:51.958847 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 00:19:51.958981 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 00:19:51.999557 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 00:19:51.999737 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:19:52.003601 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 00:19:52.003755 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 00:19:52.006315 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 00:19:52.006406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 00:19:52.008030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 00:19:52.008081 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:19:52.010226 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 00:19:52.010304 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 00:19:52.012580 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 00:19:52.012640 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 00:19:52.014799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:19:52.014861 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 00:19:52.025775 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 00:19:52.027594 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 00:19:52.027649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 00:19:52.029683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 00:19:52.029731 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 00:19:52.031836 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 00:19:52.031884 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:19:52.034108 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 16 00:19:52.034158 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 00:19:52.036372 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 00:19:52.036429 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:19:52.038576 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 00:19:52.038623 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:19:52.040899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:19:52.040946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:19:52.043349 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 00:19:52.043452 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 00:19:52.045661 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 00:19:52.057488 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 00:19:52.067149 systemd[1]: Switching root.
May 16 00:19:52.101403 systemd-journald[192]: Journal stopped
May 16 00:19:54.179754 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
May 16 00:19:54.179844 kernel: SELinux: policy capability network_peer_controls=1
May 16 00:19:54.179861 kernel: SELinux: policy capability open_perms=1
May 16 00:19:54.179877 kernel: SELinux: policy capability extended_socket_class=1
May 16 00:19:54.179896 kernel: SELinux: policy capability always_check_network=0
May 16 00:19:54.179914 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 00:19:54.179935 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 00:19:54.179950 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 00:19:54.179965 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 00:19:54.179986 kernel: audit: type=1403 audit(1747354793.270:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 00:19:54.180002 systemd[1]: Successfully loaded SELinux policy in 74.487ms.
May 16 00:19:54.180020 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.692ms.
May 16 00:19:54.180037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 16 00:19:54.180061 systemd[1]: Detected virtualization kvm.
May 16 00:19:54.180077 systemd[1]: Detected architecture x86-64.
May 16 00:19:54.180092 systemd[1]: Detected first boot.
May 16 00:19:54.180108 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:19:54.180123 zram_generator::config[1061]: No configuration found.
May 16 00:19:54.180146 systemd[1]: Populated /etc with preset unit settings.
May 16 00:19:54.180163 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 00:19:54.180178 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 00:19:54.180197 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 00:19:54.180214 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 00:19:54.180230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 00:19:54.180245 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 00:19:54.180275 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 00:19:54.180293 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 00:19:54.180310 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 00:19:54.180325 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 00:19:54.180341 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 00:19:54.180360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 00:19:54.180376 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 00:19:54.180411 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 00:19:54.180427 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 00:19:54.180444 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 00:19:54.180460 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 00:19:54.180476 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 00:19:54.180491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 00:19:54.180507 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 00:19:54.180525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 00:19:54.180541 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 00:19:54.180557 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 00:19:54.180573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 00:19:54.180589 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 00:19:54.180604 systemd[1]: Reached target slices.target - Slice Units.
May 16 00:19:54.180620 systemd[1]: Reached target swap.target - Swaps.
May 16 00:19:54.180636 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 00:19:54.180657 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 00:19:54.180672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 00:19:54.180688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 00:19:54.180704 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 00:19:54.180719 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 00:19:54.180735 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 00:19:54.180751 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 00:19:54.180766 systemd[1]: Mounting media.mount - External Media Directory...
May 16 00:19:54.180782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:54.180802 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 00:19:54.180818 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 00:19:54.180834 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 00:19:54.180850 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 00:19:54.180866 systemd[1]: Reached target machines.target - Containers.
May 16 00:19:54.180881 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 00:19:54.180897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:19:54.180913 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 00:19:54.180932 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 00:19:54.180951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:19:54.180966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:19:54.180982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:19:54.181000 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 00:19:54.181015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:19:54.181032 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 00:19:54.181047 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 00:19:54.181066 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 00:19:54.181082 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 00:19:54.181098 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 00:19:54.181113 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 00:19:54.181129 kernel: fuse: init (API version 7.39)
May 16 00:19:54.181144 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 00:19:54.181158 kernel: loop: module loaded
May 16 00:19:54.181194 systemd-journald[1124]: Collecting audit messages is disabled.
May 16 00:19:54.181224 systemd-journald[1124]: Journal started
May 16 00:19:54.181252 systemd-journald[1124]: Runtime Journal (/run/log/journal/4076125a1b9b4908904cd81d8611fae6) is 6.0M, max 48.3M, 42.2M free.
May 16 00:19:53.924196 systemd[1]: Queued start job for default target multi-user.target.
May 16 00:19:53.945060 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 00:19:53.945759 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 00:19:54.185335 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 00:19:54.194072 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 00:19:54.199899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 00:19:54.207362 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 00:19:54.207426 systemd[1]: Stopped verity-setup.service.
May 16 00:19:54.207450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:54.207471 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 00:19:54.210309 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 00:19:54.212053 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 00:19:54.213771 systemd[1]: Mounted media.mount - External Media Directory.
May 16 00:19:54.215307 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 00:19:54.216882 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 00:19:54.218519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 00:19:54.220331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 00:19:54.222788 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 00:19:54.223040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 00:19:54.225404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:19:54.225651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:19:54.227742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:19:54.228039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:19:54.230592 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 00:19:54.230793 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 00:19:54.232326 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:19:54.232660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:19:54.234935 kernel: ACPI: bus type drm_connector registered
May 16 00:19:54.235721 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:19:54.236024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:19:54.237664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 00:19:54.239353 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 00:19:54.241173 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 00:19:54.255445 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 00:19:54.257718 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 00:19:54.269401 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 00:19:54.272367 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 00:19:54.273645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 00:19:54.273731 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 00:19:54.275885 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 16 00:19:54.278315 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 00:19:54.283376 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 00:19:54.284759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:19:54.287467 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 00:19:54.289843 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 00:19:54.291085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:19:54.298487 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 00:19:54.300721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:19:54.308427 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 00:19:54.313512 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 00:19:54.322465 systemd-journald[1124]: Time spent on flushing to /var/log/journal/4076125a1b9b4908904cd81d8611fae6 is 29.708ms for 1047 entries.
May 16 00:19:54.322465 systemd-journald[1124]: System Journal (/var/log/journal/4076125a1b9b4908904cd81d8611fae6) is 8.0M, max 195.6M, 187.6M free.
May 16 00:19:54.399689 systemd-journald[1124]: Received client request to flush runtime journal.
May 16 00:19:54.316797 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 00:19:54.320427 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 00:19:54.346633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 00:19:54.348207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 00:19:54.350315 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 00:19:54.375744 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 00:19:54.402299 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 00:19:54.404506 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 00:19:54.407474 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 00:19:54.412306 kernel: loop0: detected capacity change from 0 to 140992
May 16 00:19:54.416598 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 16 00:19:54.418620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 00:19:54.422199 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
May 16 00:19:54.422226 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
May 16 00:19:54.423243 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 16 00:19:54.431833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 00:19:54.441859 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 00:19:54.450314 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:19:54.457677 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:19:54.458591 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 16 00:19:54.478232 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 00:19:54.482291 kernel: loop1: detected capacity change from 0 to 221472
May 16 00:19:54.487500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 00:19:54.539868 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
May 16 00:19:54.540329 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
May 16 00:19:54.559306 kernel: loop2: detected capacity change from 0 to 138184
May 16 00:19:54.584971 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 00:19:54.639309 kernel: loop3: detected capacity change from 0 to 140992
May 16 00:19:54.655083 kernel: loop4: detected capacity change from 0 to 221472
May 16 00:19:54.666292 kernel: loop5: detected capacity change from 0 to 138184
May 16 00:19:54.673674 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 16 00:19:54.674416 (sd-merge)[1203]: Merged extensions into '/usr'.
May 16 00:19:54.681442 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 00:19:54.681458 systemd[1]: Reloading...
May 16 00:19:54.773297 zram_generator::config[1231]: No configuration found.
May 16 00:19:54.962537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:19:54.964751 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:19:55.014420 systemd[1]: Reloading finished in 332 ms.
May 16 00:19:55.100710 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 00:19:55.102863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 00:19:55.128608 systemd[1]: Starting ensure-sysext.service...
May 16 00:19:55.131547 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 00:19:55.139165 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)...
May 16 00:19:55.139185 systemd[1]: Reloading...
May 16 00:19:55.210981 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:19:55.211498 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 00:19:55.212815 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:19:55.213224 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 16 00:19:55.219029 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
May 16 00:19:55.225904 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:19:55.227066 systemd-tmpfiles[1267]: Skipping /boot
May 16 00:19:55.234294 zram_generator::config[1294]: No configuration found.
May 16 00:19:55.244555 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
May 16 00:19:55.244572 systemd-tmpfiles[1267]: Skipping /boot
May 16 00:19:55.394050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:19:55.444170 systemd[1]: Reloading finished in 304 ms.
May 16 00:19:55.466917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 00:19:55.495651 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:19:55.500010 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 00:19:55.506607 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 00:19:55.511144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 00:19:55.517718 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 00:19:55.526466 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:55.526678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:19:55.530532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:19:55.556673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:19:55.560630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:19:55.563488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:19:55.569500 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 00:19:55.570817 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:55.572570 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 00:19:55.575252 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 00:19:55.577853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:19:55.578087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:19:55.579964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:19:55.580181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:19:55.582095 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:19:55.582305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:19:55.597257 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 00:19:55.598767 augenrules[1364]: No rules
May 16 00:19:55.599768 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:19:55.600039 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:19:55.605133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:55.605406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:19:55.611776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:19:55.619387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:19:55.622562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:19:55.624010 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:19:55.625858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 00:19:55.629926 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 00:19:55.631020 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:55.632014 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 00:19:55.633930 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 00:19:55.635941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:19:55.636249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:19:55.636856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:19:55.637023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:19:55.637971 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:19:55.638142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:19:55.647459 systemd[1]: Finished ensure-sysext.service.
May 16 00:19:55.653142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:55.670506 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 00:19:55.671795 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 00:19:55.673211 systemd-udevd[1379]: Using default interface naming scheme 'v255'.
May 16 00:19:55.673844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 00:19:55.677538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 00:19:55.681481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 00:19:55.691295 systemd-resolved[1335]: Positive Trust Anchors:
May 16 00:19:55.691315 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:19:55.691365 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 00:19:55.695918 systemd-resolved[1335]: Defaulting to hostname 'linux'.
May 16 00:19:55.697041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 00:19:55.698714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 00:19:55.700406 augenrules[1386]: /sbin/augenrules: No change
May 16 00:19:55.701869 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 00:19:55.703418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:19:55.703469 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 00:19:55.704178 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 00:19:55.706183 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 00:19:55.708371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:19:55.708641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 00:19:55.709555 augenrules[1408]: No rules
May 16 00:19:55.710474 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 00:19:55.710754 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 00:19:55.712509 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:19:55.712720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 00:19:55.714625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:19:55.714843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 00:19:55.716531 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 00:19:55.718566 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:19:55.718869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 00:19:55.737210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 00:19:55.767493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 00:19:55.769057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:19:55.769148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 00:19:55.787594 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 00:19:55.820304 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1425)
May 16 00:19:55.894360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 16 00:19:55.939317 kernel: ACPI: button: Power Button [PWRF]
May 16 00:19:55.942743 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 00:19:55.961343 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 16 00:19:55.963844 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 16 00:19:55.971684 systemd-networkd[1442]: lo: Link UP
May 16 00:19:55.971697 systemd-networkd[1442]: lo: Gained carrier
May 16 00:19:55.982723 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 16 00:19:55.983661 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 16 00:19:55.983825 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 00:19:55.987872 systemd-networkd[1442]: Enumeration completed
May 16 00:19:55.992552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 00:19:55.994204 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:19:55.994219 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:19:55.995564 systemd-networkd[1442]: eth0: Link UP
May 16 00:19:55.995577 systemd-networkd[1442]: eth0: Gained carrier
May 16 00:19:55.995599 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 00:19:56.010609 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 00:19:56.013411 systemd[1]: Reached target network.target - Network.
May 16 00:19:56.014713 systemd[1]: Reached target time-set.target - System Time Set.
May 16 00:19:56.016427 systemd-networkd[1442]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:19:56.019211 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection.
May 16 00:19:56.020571 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:19:56.020700 systemd-timesyncd[1402]: Initial clock synchronization to Fri 2025-05-16 00:19:56.400392 UTC.
May 16 00:19:56.059934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 00:19:56.063379 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 00:19:56.112418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:19:56.115295 kernel: mousedev: PS/2 mouse device common for all mice
May 16 00:19:56.133664 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 00:19:56.171727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:19:56.172480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:19:56.192601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 00:19:56.206899 kernel: kvm_amd: TSC scaling supported
May 16 00:19:56.207013 kernel: kvm_amd: Nested Virtualization enabled
May 16 00:19:56.207032 kernel: kvm_amd: Nested Paging enabled
May 16 00:19:56.207060 kernel: kvm_amd: LBR virtualization supported
May 16 00:19:56.207838 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 16 00:19:56.207941 kernel: kvm_amd: Virtual GIF supported
May 16 00:19:56.234303 kernel: EDAC MC: Ver: 3.0.0
May 16 00:19:56.271129 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 16 00:19:56.279914 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 16 00:19:56.281992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 00:19:56.292759 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:19:56.341530 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 16 00:19:56.343721 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 00:19:56.345227 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 00:19:56.346616 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 00:19:56.348294 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 00:19:56.350347 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 00:19:56.351945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 00:19:56.353624 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 00:19:56.355597 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 00:19:56.355656 systemd[1]: Reached target paths.target - Path Units.
May 16 00:19:56.358479 systemd[1]: Reached target timers.target - Timer Units.
May 16 00:19:56.362629 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 00:19:56.366140 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 00:19:56.381098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 00:19:56.384791 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 16 00:19:56.387749 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 00:19:56.389555 systemd[1]: Reached target sockets.target - Socket Units.
May 16 00:19:56.390893 systemd[1]: Reached target basic.target - Basic System.
May 16 00:19:56.391037 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 00:19:56.391074 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 00:19:56.392512 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 00:19:56.394876 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 00:19:56.399401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 00:19:56.401859 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:19:56.405512 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 00:19:56.406878 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 00:19:56.409821 jq[1475]: false
May 16 00:19:56.410551 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 00:19:56.414658 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 00:19:56.419439 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 00:19:56.425224 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 00:19:56.432180 extend-filesystems[1476]: Found loop3
May 16 00:19:56.432180 extend-filesystems[1476]: Found loop4
May 16 00:19:56.432180 extend-filesystems[1476]: Found loop5
May 16 00:19:56.432180 extend-filesystems[1476]: Found sr0
May 16 00:19:56.432180 extend-filesystems[1476]: Found vda
May 16 00:19:56.432180 extend-filesystems[1476]: Found vda1
May 16 00:19:56.432180 extend-filesystems[1476]: Found vda2
May 16 00:19:56.432180 extend-filesystems[1476]: Found vda3
May 16 00:19:56.432180 extend-filesystems[1476]: Found usr
May 16 00:19:56.482709 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 00:19:56.460969 dbus-daemon[1474]: [system] SELinux support is enabled
May 16 00:19:56.483014 extend-filesystems[1476]: Found vda4
May 16 00:19:56.483014 extend-filesystems[1476]: Found vda6
May 16 00:19:56.483014 extend-filesystems[1476]: Found vda7
May 16 00:19:56.483014 extend-filesystems[1476]: Found vda9
May 16 00:19:56.483014 extend-filesystems[1476]: Checking size of /dev/vda9
May 16 00:19:56.483014 extend-filesystems[1476]: Resized partition /dev/vda9
May 16 00:19:56.433599 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 00:19:56.506725 update_engine[1487]: I20250516 00:19:56.476821 1487 main.cc:92] Flatcar Update Engine starting
May 16 00:19:56.506725 update_engine[1487]: I20250516 00:19:56.494107 1487 update_check_scheduler.cc:74] Next update check in 7m50s
May 16 00:19:56.507106 extend-filesystems[1497]: resize2fs 1.47.1 (20-May-2024)
May 16 00:19:56.438027 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 00:19:56.438956 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 00:19:56.511809 jq[1496]: true
May 16 00:19:56.448153 systemd[1]: Starting update-engine.service - Update Engine...
May 16 00:19:56.451999 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 00:19:56.456415 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 16 00:19:56.513519 jq[1501]: true
May 16 00:19:56.552908 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 00:19:56.552975 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1418)
May 16 00:19:56.461569 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 00:19:56.553226 tar[1499]: linux-amd64/helm
May 16 00:19:56.461836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 00:19:56.461986 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 00:19:56.468931 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:19:56.469198 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 00:19:56.477850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 00:19:56.478603 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 00:19:56.523446 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 00:19:56.535945 systemd[1]: Started update-engine.service - Update Engine.
May 16 00:19:56.560184 extend-filesystems[1497]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 00:19:56.560184 extend-filesystems[1497]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 00:19:56.560184 extend-filesystems[1497]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 00:19:56.562166 extend-filesystems[1476]: Resized filesystem in /dev/vda9
May 16 00:19:56.560516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 00:19:56.560549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 00:19:56.562050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 00:19:56.562072 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 00:19:56.572684 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 00:19:56.574735 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 00:19:56.574990 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 00:19:56.583739 systemd-logind[1482]: Watching system buttons on /dev/input/event1 (Power Button)
May 16 00:19:56.584099 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 00:19:56.587201 systemd-logind[1482]: New seat seat0.
May 16 00:19:56.596396 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 00:19:56.668426 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 00:19:56.753978 bash[1530]: Updated "/home/core/.ssh/authorized_keys"
May 16 00:19:56.755979 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 00:19:56.759544 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 00:19:56.885534 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 00:19:56.951909 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 00:19:56.964042 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 00:19:57.011900 systemd[1]: issuegen.service: Deactivated successfully.
May 16 00:19:57.012228 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 00:19:57.048231 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 00:19:57.067905 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 00:19:57.097458 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 00:19:57.102475 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 00:19:57.118717 systemd[1]: Reached target getty.target - Login Prompts.
May 16 00:19:57.131338 containerd[1502]: time="2025-05-16T00:19:57.130366892Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 16 00:19:57.164171 containerd[1502]: time="2025-05-16T00:19:57.163991074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.166350 containerd[1502]: time="2025-05-16T00:19:57.166245481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 16 00:19:57.166350 containerd[1502]: time="2025-05-16T00:19:57.166323457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 16 00:19:57.166472 containerd[1502]: time="2025-05-16T00:19:57.166356704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 16 00:19:57.166638 containerd[1502]: time="2025-05-16T00:19:57.166598709Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 16 00:19:57.166673 containerd[1502]: time="2025-05-16T00:19:57.166635744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.166769 containerd[1502]: time="2025-05-16T00:19:57.166742653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:19:57.166802 containerd[1502]: time="2025-05-16T00:19:57.166768196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.167096 containerd[1502]: time="2025-05-16T00:19:57.167055812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:19:57.167096 containerd[1502]: time="2025-05-16T00:19:57.167087023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.167160 containerd[1502]: time="2025-05-16T00:19:57.167111569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:19:57.167160 containerd[1502]: time="2025-05-16T00:19:57.167123670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.167288 containerd[1502]: time="2025-05-16T00:19:57.167252197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.167636 containerd[1502]: time="2025-05-16T00:19:57.167599023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 16 00:19:57.167791 containerd[1502]: time="2025-05-16T00:19:57.167757691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:19:57.167791 containerd[1502]: time="2025-05-16T00:19:57.167779477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 16 00:19:57.167952 containerd[1502]: time="2025-05-16T00:19:57.167917094Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 16 00:19:57.168008 containerd[1502]: time="2025-05-16T00:19:57.167987334Z" level=info msg="metadata content store policy set" policy=shared
May 16 00:19:57.255441 tar[1499]: linux-amd64/LICENSE
May 16 00:19:57.255572 tar[1499]: linux-amd64/README.md
May 16 00:19:57.289650 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 00:19:57.475621 containerd[1502]: time="2025-05-16T00:19:57.475165681Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 16 00:19:57.475621 containerd[1502]: time="2025-05-16T00:19:57.475312699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 16 00:19:57.475621 containerd[1502]: time="2025-05-16T00:19:57.475383034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 16 00:19:57.475621 containerd[1502]: time="2025-05-16T00:19:57.475413227Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..."
type=io.containerd.streaming.v1 May 16 00:19:57.475621 containerd[1502]: time="2025-05-16T00:19:57.475437155Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:19:57.475853 containerd[1502]: time="2025-05-16T00:19:57.475709501Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:19:57.476244 containerd[1502]: time="2025-05-16T00:19:57.476153874Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:19:57.476589 containerd[1502]: time="2025-05-16T00:19:57.476544745Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 16 00:19:57.476589 containerd[1502]: time="2025-05-16T00:19:57.476578758Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 16 00:19:57.476666 containerd[1502]: time="2025-05-16T00:19:57.476602150Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 16 00:19:57.476666 containerd[1502]: time="2025-05-16T00:19:57.476624022Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476666 containerd[1502]: time="2025-05-16T00:19:57.476642344Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476666 containerd[1502]: time="2025-05-16T00:19:57.476660921Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476791 containerd[1502]: time="2025-05-16T00:19:57.476681301Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 16 00:19:57.476791 containerd[1502]: time="2025-05-16T00:19:57.476703549Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476791 containerd[1502]: time="2025-05-16T00:19:57.476723741Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476791 containerd[1502]: time="2025-05-16T00:19:57.476742243Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476791 containerd[1502]: time="2025-05-16T00:19:57.476758321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476806197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476828834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476847441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476865061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476894478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476914481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:19:57.476927 containerd[1502]: time="2025-05-16T00:19:57.476931251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.476952492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.476971224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477005237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477022553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477040311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477058592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477079361Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477108001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477124 containerd[1502]: time="2025-05-16T00:19:57.477127541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477144322Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477230494Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477257664Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477272073Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477288214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477301962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477340257Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477356797Z" level=info msg="NRI interface is disabled by configuration." May 16 00:19:57.477399 containerd[1502]: time="2025-05-16T00:19:57.477371804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:19:57.477960 containerd[1502]: time="2025-05-16T00:19:57.477806563Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:19:57.477960 containerd[1502]: time="2025-05-16T00:19:57.477960131Z" level=info msg="Connect containerd service" May 16 00:19:57.478182 containerd[1502]: time="2025-05-16T00:19:57.478003096Z" level=info msg="using legacy CRI server" May 16 00:19:57.478182 containerd[1502]: time="2025-05-16T00:19:57.478014451Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 00:19:57.478182 containerd[1502]: time="2025-05-16T00:19:57.478164189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:19:57.478999 containerd[1502]: time="2025-05-16T00:19:57.478943906Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:19:57.479165 containerd[1502]: time="2025-05-16T00:19:57.479117266Z" level=info msg="Start subscribing containerd event" May 16 00:19:57.479326 containerd[1502]: time="2025-05-16T00:19:57.479179027Z" level=info msg="Start recovering state" May 16 00:19:57.479326 containerd[1502]: time="2025-05-16T00:19:57.479256309Z" level=info msg="Start event monitor" May 16 00:19:57.480081 containerd[1502]: time="2025-05-16T00:19:57.479847186Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:19:57.480081 containerd[1502]: time="2025-05-16T00:19:57.479946717Z" level=info msg="Start snapshots syncer" May 16 00:19:57.480081 containerd[1502]: time="2025-05-16T00:19:57.479958786Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:19:57.480081 containerd[1502]: time="2025-05-16T00:19:57.479999767Z" level=info msg="Start cni network conf syncer for default" May 16 00:19:57.480192 containerd[1502]: time="2025-05-16T00:19:57.480082905Z" level=info msg="Start streaming server" May 16 00:19:57.480312 containerd[1502]: time="2025-05-16T00:19:57.480268376Z" level=info msg="containerd successfully booted in 0.351057s" May 16 00:19:57.480431 systemd[1]: Started containerd.service - containerd container runtime. May 16 00:19:57.602341 systemd-networkd[1442]: eth0: Gained IPv6LL May 16 00:19:57.606953 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 00:19:57.609267 systemd[1]: Reached target network-online.target - Network is Online. May 16 00:19:57.619789 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 00:19:57.642579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:19:57.645537 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 00:19:57.672808 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 00:19:57.676872 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:19:57.677112 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 00:19:57.680539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 00:19:57.819573 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 16 00:19:57.872675 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:59938.service - OpenSSH per-connection server daemon (10.0.0.1:59938). May 16 00:19:57.955328 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 59938 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:19:57.957188 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:19:57.966167 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 16 00:19:58.002789 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 00:19:58.008291 systemd-logind[1482]: New session 1 of user core. May 16 00:19:58.027530 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 00:19:58.086943 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 00:19:58.092703 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:19:58.295572 systemd[1586]: Queued start job for default target default.target. May 16 00:19:58.315896 systemd[1586]: Created slice app.slice - User Application Slice. May 16 00:19:58.315928 systemd[1586]: Reached target paths.target - Paths. May 16 00:19:58.315943 systemd[1586]: Reached target timers.target - Timers. May 16 00:19:58.317988 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket... May 16 00:19:58.334067 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 00:19:58.334235 systemd[1586]: Reached target sockets.target - Sockets. May 16 00:19:58.334259 systemd[1586]: Reached target basic.target - Basic System. May 16 00:19:58.334334 systemd[1586]: Reached target default.target - Main User Target. May 16 00:19:58.334375 systemd[1586]: Startup finished in 217ms. May 16 00:19:58.348157 systemd[1]: Started user@500.service - User Manager for UID 500. 
May 16 00:19:58.359488 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 00:19:58.426497 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:59940.service - OpenSSH per-connection server daemon (10.0.0.1:59940). May 16 00:19:58.493582 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 59940 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:19:58.495656 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:19:58.500738 systemd-logind[1482]: New session 2 of user core. May 16 00:19:58.510591 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 00:19:58.573402 sshd[1599]: Connection closed by 10.0.0.1 port 59940 May 16 00:19:58.574017 sshd-session[1597]: pam_unix(sshd:session): session closed for user core May 16 00:19:58.587766 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:59940.service: Deactivated successfully. May 16 00:19:58.589584 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:19:58.591348 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. May 16 00:19:58.592528 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:59950.service - OpenSSH per-connection server daemon (10.0.0.1:59950). May 16 00:19:58.601345 systemd-logind[1482]: Removed session 2. May 16 00:19:58.640548 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 59950 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:19:58.642160 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:19:58.646193 systemd-logind[1482]: New session 3 of user core. May 16 00:19:58.659515 systemd[1]: Started session-3.scope - Session 3 of User core. 
May 16 00:19:58.723093 sshd[1606]: Connection closed by 10.0.0.1 port 59950 May 16 00:19:58.723570 sshd-session[1604]: pam_unix(sshd:session): session closed for user core May 16 00:19:58.726782 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:59950.service: Deactivated successfully. May 16 00:19:58.728841 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:19:58.730929 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. May 16 00:19:58.731972 systemd-logind[1482]: Removed session 3. May 16 00:19:59.191897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:19:59.193972 systemd[1]: Reached target multi-user.target - Multi-User System. May 16 00:19:59.196407 systemd[1]: Startup finished in 859ms (kernel) + 8.508s (initrd) + 5.984s (userspace) = 15.352s. May 16 00:19:59.202830 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:19:59.676922 kubelet[1615]: E0516 00:19:59.676774 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:19:59.681191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:19:59.681481 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:19:59.681863 systemd[1]: kubelet.service: Consumed 1.666s CPU time. May 16 00:20:08.955598 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:47176.service - OpenSSH per-connection server daemon (10.0.0.1:47176). 
May 16 00:20:08.994517 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 47176 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:08.996319 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.000143 systemd-logind[1482]: New session 4 of user core. May 16 00:20:09.016499 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 00:20:09.071867 sshd[1630]: Connection closed by 10.0.0.1 port 47176 May 16 00:20:09.072314 sshd-session[1628]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.086421 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:47176.service: Deactivated successfully. May 16 00:20:09.088478 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:20:09.090390 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit. May 16 00:20:09.091783 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:47192.service - OpenSSH per-connection server daemon (10.0.0.1:47192). May 16 00:20:09.092735 systemd-logind[1482]: Removed session 4. May 16 00:20:09.141805 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 47192 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.143551 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.147679 systemd-logind[1482]: New session 5 of user core. May 16 00:20:09.158483 systemd[1]: Started session-5.scope - Session 5 of User core. May 16 00:20:09.209168 sshd[1637]: Connection closed by 10.0.0.1 port 47192 May 16 00:20:09.209848 sshd-session[1635]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.221265 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:47192.service: Deactivated successfully. May 16 00:20:09.223168 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:20:09.225379 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit. 
May 16 00:20:09.226995 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:47208.service - OpenSSH per-connection server daemon (10.0.0.1:47208). May 16 00:20:09.227883 systemd-logind[1482]: Removed session 5. May 16 00:20:09.270345 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 47208 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.272222 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.277484 systemd-logind[1482]: New session 6 of user core. May 16 00:20:09.291464 systemd[1]: Started session-6.scope - Session 6 of User core. May 16 00:20:09.348343 sshd[1644]: Connection closed by 10.0.0.1 port 47208 May 16 00:20:09.348762 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.356297 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:47208.service: Deactivated successfully. May 16 00:20:09.358319 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:20:09.359972 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. May 16 00:20:09.361322 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:47222.service - OpenSSH per-connection server daemon (10.0.0.1:47222). May 16 00:20:09.362119 systemd-logind[1482]: Removed session 6. May 16 00:20:09.399595 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 47222 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.401214 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.405510 systemd-logind[1482]: New session 7 of user core. May 16 00:20:09.412410 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 16 00:20:09.473218 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 00:20:09.473651 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.499446 sudo[1652]: pam_unix(sudo:session): session closed for user root May 16 00:20:09.501241 sshd[1651]: Connection closed by 10.0.0.1 port 47222 May 16 00:20:09.501745 sshd-session[1649]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.513619 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:47222.service: Deactivated successfully. May 16 00:20:09.515451 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:20:09.517239 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. May 16 00:20:09.518631 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:47234.service - OpenSSH per-connection server daemon (10.0.0.1:47234). May 16 00:20:09.519431 systemd-logind[1482]: Removed session 7. May 16 00:20:09.575703 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 47234 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.577629 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.582167 systemd-logind[1482]: New session 8 of user core. May 16 00:20:09.591430 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 16 00:20:09.646913 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 00:20:09.647361 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.651368 sudo[1661]: pam_unix(sudo:session): session closed for user root May 16 00:20:09.659056 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 00:20:09.659423 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.679818 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 00:20:09.715678 augenrules[1683]: No rules May 16 00:20:09.718037 systemd[1]: audit-rules.service: Deactivated successfully. May 16 00:20:09.718345 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 00:20:09.719704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:20:09.719727 sudo[1660]: pam_unix(sudo:session): session closed for user root May 16 00:20:09.721640 sshd[1659]: Connection closed by 10.0.0.1 port 47234 May 16 00:20:09.722068 sshd-session[1657]: pam_unix(sshd:session): session closed for user core May 16 00:20:09.726541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:20:09.727067 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:47234.service: Deactivated successfully. May 16 00:20:09.729558 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:20:09.730311 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. May 16 00:20:09.738631 systemd-logind[1482]: Removed session 8. May 16 00:20:09.740190 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:47248.service - OpenSSH per-connection server daemon (10.0.0.1:47248). 
May 16 00:20:09.778172 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 47248 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:20:09.779925 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:20:09.783957 systemd-logind[1482]: New session 9 of user core. May 16 00:20:09.793472 systemd[1]: Started session-9.scope - Session 9 of User core. May 16 00:20:09.849608 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:20:09.850070 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 00:20:09.966504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:20:09.972607 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 00:20:10.055988 kubelet[1712]: E0516 00:20:10.055760 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:20:10.064209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:20:10.064528 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:20:10.538648 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 16 00:20:10.538840 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 00:20:11.224460 dockerd[1732]: time="2025-05-16T00:20:11.224377466Z" level=info msg="Starting up" May 16 00:20:12.451539 dockerd[1732]: time="2025-05-16T00:20:12.451423707Z" level=info msg="Loading containers: start." May 16 00:20:12.882320 kernel: Initializing XFRM netlink socket May 16 00:20:12.993953 systemd-networkd[1442]: docker0: Link UP May 16 00:20:13.193022 dockerd[1732]: time="2025-05-16T00:20:13.192965057Z" level=info msg="Loading containers: done." May 16 00:20:13.227878 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2741078330-merged.mount: Deactivated successfully. May 16 00:20:13.237307 dockerd[1732]: time="2025-05-16T00:20:13.237222576Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:20:13.237454 dockerd[1732]: time="2025-05-16T00:20:13.237412641Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 16 00:20:13.237664 dockerd[1732]: time="2025-05-16T00:20:13.237624276Z" level=info msg="Daemon has completed initialization" May 16 00:20:13.505400 dockerd[1732]: time="2025-05-16T00:20:13.505320568Z" level=info msg="API listen on /run/docker.sock" May 16 00:20:13.505588 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 00:20:14.750284 containerd[1502]: time="2025-05-16T00:20:14.750224245Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 16 00:20:17.515176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788886074.mount: Deactivated successfully. 
May 16 00:20:20.160114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 16 00:20:20.169463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:20:20.345013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:20:20.352469 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:20:20.459514 kubelet[1954]: E0516 00:20:20.459260 1954 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:20:20.464390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:20:20.464614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:20:23.979139 containerd[1502]: time="2025-05-16T00:20:23.979065807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:24.041638 containerd[1502]: time="2025-05-16T00:20:24.041529746Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845"
May 16 00:20:24.129570 containerd[1502]: time="2025-05-16T00:20:24.129474320Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:24.164303 containerd[1502]: time="2025-05-16T00:20:24.164220570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:24.165642 containerd[1502]: time="2025-05-16T00:20:24.165599169Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 9.415333641s"
May 16 00:20:24.165642 containerd[1502]: time="2025-05-16T00:20:24.165636237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 16 00:20:24.166635 containerd[1502]: time="2025-05-16T00:20:24.166611621Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 16 00:20:28.481419 containerd[1502]: time="2025-05-16T00:20:28.481311879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:28.494478 containerd[1502]: time="2025-05-16T00:20:28.494385921Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522"
May 16 00:20:28.508652 containerd[1502]: time="2025-05-16T00:20:28.508583401Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:28.534711 containerd[1502]: time="2025-05-16T00:20:28.534636464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:28.536134 containerd[1502]: time="2025-05-16T00:20:28.536061862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 4.369414967s"
May 16 00:20:28.536339 containerd[1502]: time="2025-05-16T00:20:28.536178102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 16 00:20:28.536817 containerd[1502]: time="2025-05-16T00:20:28.536751940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 00:20:30.660166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 16 00:20:30.669490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:20:30.841681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:20:30.846147 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:20:32.116199 kubelet[2011]: E0516 00:20:32.116143 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:20:32.120533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:20:32.120856 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:20:32.957283 containerd[1502]: time="2025-05-16T00:20:32.957194804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:33.050013 containerd[1502]: time="2025-05-16T00:20:33.049912039Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311"
May 16 00:20:33.095950 containerd[1502]: time="2025-05-16T00:20:33.095827492Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:33.145928 containerd[1502]: time="2025-05-16T00:20:33.145844000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:33.147624 containerd[1502]: time="2025-05-16T00:20:33.147583358Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 4.610774282s"
May 16 00:20:33.147707 containerd[1502]: time="2025-05-16T00:20:33.147625151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 16 00:20:33.148195 containerd[1502]: time="2025-05-16T00:20:33.148171844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 16 00:20:35.206346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443606039.mount: Deactivated successfully.
May 16 00:20:38.168645 containerd[1502]: time="2025-05-16T00:20:38.168560918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:38.199777 containerd[1502]: time="2025-05-16T00:20:38.199677752Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623"
May 16 00:20:38.243764 containerd[1502]: time="2025-05-16T00:20:38.243705188Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:38.272117 containerd[1502]: time="2025-05-16T00:20:38.272055312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:38.272741 containerd[1502]: time="2025-05-16T00:20:38.272709936Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 5.124507962s"
May 16 00:20:38.272789 containerd[1502]: time="2025-05-16T00:20:38.272742559Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 16 00:20:38.273244 containerd[1502]: time="2025-05-16T00:20:38.273205820Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 00:20:39.809705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444693731.mount: Deactivated successfully.
May 16 00:20:41.876173 update_engine[1487]: I20250516 00:20:41.874534 1487 update_attempter.cc:509] Updating boot flags...
May 16 00:20:42.160310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 16 00:20:42.182508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:20:43.774321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2053)
May 16 00:20:44.949307 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2053)
May 16 00:20:44.986176 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2053)
May 16 00:20:45.183626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:20:45.188039 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:20:45.694058 kubelet[2067]: E0516 00:20:45.693989 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:20:45.698653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:20:45.698873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:20:48.139306 containerd[1502]: time="2025-05-16T00:20:48.139202133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:48.166130 containerd[1502]: time="2025-05-16T00:20:48.166038052Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
May 16 00:20:48.201723 containerd[1502]: time="2025-05-16T00:20:48.201637608Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:48.231935 containerd[1502]: time="2025-05-16T00:20:48.231837637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:48.233477 containerd[1502]: time="2025-05-16T00:20:48.233407537Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 9.960153661s"
May 16 00:20:48.233477 containerd[1502]: time="2025-05-16T00:20:48.233475381Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 16 00:20:48.234187 containerd[1502]: time="2025-05-16T00:20:48.234142859Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 00:20:49.498431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339511746.mount: Deactivated successfully.
May 16 00:20:49.562296 containerd[1502]: time="2025-05-16T00:20:49.562209336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:49.570912 containerd[1502]: time="2025-05-16T00:20:49.570829707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
May 16 00:20:49.584294 containerd[1502]: time="2025-05-16T00:20:49.584207402Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:49.680603 containerd[1502]: time="2025-05-16T00:20:49.680511207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:49.681457 containerd[1502]: time="2025-05-16T00:20:49.681373213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.447185623s"
May 16 00:20:49.681457 containerd[1502]: time="2025-05-16T00:20:49.681436235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 00:20:49.682380 containerd[1502]: time="2025-05-16T00:20:49.682096004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 16 00:20:52.917649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303595022.mount: Deactivated successfully.
May 16 00:20:55.910120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 16 00:20:55.927611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:20:56.105589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:20:56.162661 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 00:20:56.237012 kubelet[2167]: E0516 00:20:56.236953 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:20:56.241400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:20:56.241617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:20:59.544783 containerd[1502]: time="2025-05-16T00:20:59.544700839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:59.574788 containerd[1502]: time="2025-05-16T00:20:59.574692079Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
May 16 00:20:59.599963 containerd[1502]: time="2025-05-16T00:20:59.599877502Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:59.640409 containerd[1502]: time="2025-05-16T00:20:59.640320575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 00:20:59.642046 containerd[1502]: time="2025-05-16T00:20:59.642011099Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 9.959875908s"
May 16 00:20:59.642046 containerd[1502]: time="2025-05-16T00:20:59.642048489Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 16 00:21:02.071688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:21:02.086509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:21:02.112037 systemd[1]: Reloading requested from client PID 2220 ('systemctl') (unit session-9.scope)...
May 16 00:21:02.112055 systemd[1]: Reloading...
May 16 00:21:02.198359 zram_generator::config[2262]: No configuration found.
May 16 00:21:02.881580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:21:02.972927 systemd[1]: Reloading finished in 860 ms.
May 16 00:21:03.032107 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 16 00:21:03.032254 systemd[1]: kubelet.service: Failed with result 'signal'.
May 16 00:21:03.032608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:21:03.034709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 00:21:03.222490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 00:21:03.228106 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 00:21:03.306355 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:21:03.306355 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 00:21:03.306355 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:21:03.306867 kubelet[2307]: I0516 00:21:03.306399 2307 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 00:21:04.372914 kubelet[2307]: I0516 00:21:04.372797 2307 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 00:21:04.372914 kubelet[2307]: I0516 00:21:04.372840 2307 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 00:21:04.373553 kubelet[2307]: I0516 00:21:04.373139 2307 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 00:21:04.590800 kubelet[2307]: E0516 00:21:04.590718 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 16 00:21:04.591369 kubelet[2307]: I0516 00:21:04.591311 2307 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 00:21:04.597297 kubelet[2307]: E0516 00:21:04.597198 2307 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 16 00:21:04.597297 kubelet[2307]: I0516 00:21:04.597232 2307 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 16 00:21:04.604366 kubelet[2307]: I0516 00:21:04.604325 2307 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 00:21:04.607099 kubelet[2307]: I0516 00:21:04.607077 2307 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 00:21:04.607257 kubelet[2307]: I0516 00:21:04.607226 2307 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 00:21:04.607422 kubelet[2307]: I0516 00:21:04.607254 2307 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 00:21:04.607520 kubelet[2307]: I0516 00:21:04.607437 2307 topology_manager.go:138] "Creating topology manager with none policy"
May 16 00:21:04.607520 kubelet[2307]: I0516 00:21:04.607447 2307 container_manager_linux.go:300] "Creating device plugin manager"
May 16 00:21:04.607566 kubelet[2307]: I0516 00:21:04.607559 2307 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:21:04.617118 kubelet[2307]: I0516 00:21:04.617092 2307 kubelet.go:408] "Attempting to sync node with API server"
May 16 00:21:04.617118 kubelet[2307]: I0516 00:21:04.617116 2307 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 00:21:04.617180 kubelet[2307]: I0516 00:21:04.617165 2307 kubelet.go:314] "Adding apiserver pod source"
May 16 00:21:04.617334 kubelet[2307]: I0516 00:21:04.617209 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 00:21:04.623980 kubelet[2307]: W0516 00:21:04.623875 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 16 00:21:04.623980 kubelet[2307]: E0516 00:21:04.623950 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 16 00:21:04.625219 kubelet[2307]: I0516 00:21:04.625194 2307 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 16 00:21:04.625664 kubelet[2307]: I0516 00:21:04.625630 2307 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 00:21:04.634623 kubelet[2307]: W0516 00:21:04.634565 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 16 00:21:04.634623 kubelet[2307]: E0516 00:21:04.634614 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 16 00:21:04.638453 kubelet[2307]: W0516 00:21:04.638415 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 00:21:04.657487 kubelet[2307]: I0516 00:21:04.657450 2307 server.go:1274] "Started kubelet"
May 16 00:21:04.658159 kubelet[2307]: I0516 00:21:04.657669 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 00:21:04.658159 kubelet[2307]: I0516 00:21:04.657921 2307 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 16 00:21:04.658159 kubelet[2307]: I0516 00:21:04.658053 2307 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 00:21:04.658981 kubelet[2307]: I0516 00:21:04.658960 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 00:21:04.659065 kubelet[2307]: I0516 00:21:04.659045 2307 server.go:449] "Adding debug handlers to kubelet server"
May 16 00:21:04.660436 kubelet[2307]: I0516 00:21:04.660410 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 00:21:04.663718 kubelet[2307]: E0516 00:21:04.662862 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:04.663718 kubelet[2307]: I0516 00:21:04.662898 2307 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 16 00:21:04.663718 kubelet[2307]: I0516 00:21:04.663035 2307 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 00:21:04.663718 kubelet[2307]: I0516 00:21:04.663083 2307 reconciler.go:26] "Reconciler: start to sync state"
May 16 00:21:04.663718 kubelet[2307]: E0516 00:21:04.663329 2307 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 00:21:04.663718 kubelet[2307]: W0516 00:21:04.663421 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 16 00:21:04.663718 kubelet[2307]: E0516 00:21:04.663457 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 16 00:21:04.663718 kubelet[2307]: I0516 00:21:04.663577 2307 factory.go:221] Registration of the systemd container factory successfully
May 16 00:21:04.663718 kubelet[2307]: E0516 00:21:04.663605 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms"
May 16 00:21:04.663718 kubelet[2307]: I0516 00:21:04.663639 2307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 00:21:04.669175 kubelet[2307]: E0516 00:21:04.667167 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd9fdd474c694 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:21:04.657417876 +0000 UTC m=+1.415069068,LastTimestamp:2025-05-16 00:21:04.657417876 +0000 UTC m=+1.415069068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 16 00:21:04.671477 kubelet[2307]: I0516 00:21:04.671452 2307 factory.go:221] Registration of the containerd container factory successfully
May 16 00:21:04.681853 kubelet[2307]: I0516 00:21:04.681712 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 00:21:04.725048 kubelet[2307]: I0516 00:21:04.725007 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 00:21:04.725191 kubelet[2307]: I0516 00:21:04.725180 2307 status_manager.go:217] "Starting to sync pod status with apiserver"
May 16 00:21:04.725352 kubelet[2307]: I0516 00:21:04.725341 2307 kubelet.go:2321] "Starting kubelet main sync loop"
May 16 00:21:04.725481 kubelet[2307]: E0516 00:21:04.725460 2307 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 00:21:04.726175 kubelet[2307]: W0516 00:21:04.725729 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
May 16 00:21:04.726175 kubelet[2307]: E0516 00:21:04.725766 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
May 16 00:21:04.726774 kubelet[2307]: I0516 00:21:04.726744 2307 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 16 00:21:04.726774 kubelet[2307]: I0516 00:21:04.726771 2307 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 16 00:21:04.726852 kubelet[2307]: I0516 00:21:04.726789 2307 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:21:04.763127 kubelet[2307]: E0516 00:21:04.763097 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:04.826548 kubelet[2307]: E0516 00:21:04.826463 2307 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 00:21:04.863917 kubelet[2307]: E0516 00:21:04.863854 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:04.864367 kubelet[2307]: E0516 00:21:04.864314 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms"
May 16 00:21:04.965010 kubelet[2307]: E0516 00:21:04.964964 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:05.027306 kubelet[2307]: E0516 00:21:05.027229 2307 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 00:21:05.065743 kubelet[2307]: E0516 00:21:05.065691 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:05.166760 kubelet[2307]: E0516 00:21:05.166705 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:05.265720 kubelet[2307]: E0516 00:21:05.265558 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
May 16 00:21:05.267691 kubelet[2307]: E0516 00:21:05.267643 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:05.368411 kubelet[2307]: E0516 00:21:05.368330 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 16 00:21:05.427663 kubelet[2307]: E0516 00:21:05.427584 2307 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 16 00:21:05.469120 kubelet[2307]: E0516 00:21:05.469048
2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.507851 kubelet[2307]: W0516 00:21:05.507760 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:05.507851 kubelet[2307]: E0516 00:21:05.507838 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:05.569557 kubelet[2307]: E0516 00:21:05.569387 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.670366 kubelet[2307]: E0516 00:21:05.670301 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.670662 kubelet[2307]: W0516 00:21:05.670612 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:05.670694 kubelet[2307]: E0516 00:21:05.670665 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:05.738333 kubelet[2307]: I0516 00:21:05.738298 2307 policy_none.go:49] "None policy: Start" May 16 00:21:05.739115 kubelet[2307]: I0516 
00:21:05.739076 2307 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:21:05.739115 kubelet[2307]: I0516 00:21:05.739116 2307 state_mem.go:35] "Initializing new in-memory state store" May 16 00:21:05.762325 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 00:21:05.771378 kubelet[2307]: E0516 00:21:05.771322 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:21:05.778315 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 00:21:05.781951 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 00:21:05.791298 kubelet[2307]: I0516 00:21:05.791201 2307 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:21:05.791530 kubelet[2307]: I0516 00:21:05.791500 2307 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:21:05.791572 kubelet[2307]: I0516 00:21:05.791520 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:21:05.792066 kubelet[2307]: I0516 00:21:05.792008 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:21:05.793709 kubelet[2307]: E0516 00:21:05.793684 2307 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:21:05.893354 kubelet[2307]: I0516 00:21:05.893227 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:21:05.893949 kubelet[2307]: E0516 00:21:05.893711 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" May 16 00:21:06.012162 kubelet[2307]: W0516 
00:21:06.012066 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:06.012311 kubelet[2307]: E0516 00:21:06.012168 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:06.066574 kubelet[2307]: E0516 00:21:06.066513 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" May 16 00:21:06.095675 kubelet[2307]: I0516 00:21:06.095610 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:21:06.096018 kubelet[2307]: E0516 00:21:06.095966 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" May 16 00:21:06.212818 kubelet[2307]: W0516 00:21:06.212719 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:06.212818 kubelet[2307]: E0516 00:21:06.212812 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:06.237129 systemd[1]: Created slice kubepods-burstable-pod34d6f5cad1e65a9cad9c1a5d013edb36.slice - libcontainer container kubepods-burstable-pod34d6f5cad1e65a9cad9c1a5d013edb36.slice. May 16 00:21:06.251780 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 00:21:06.262920 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 16 00:21:06.273321 kubelet[2307]: I0516 00:21:06.273253 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.273513 kubelet[2307]: I0516 00:21:06.273370 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.273513 kubelet[2307]: I0516 00:21:06.273422 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34d6f5cad1e65a9cad9c1a5d013edb36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34d6f5cad1e65a9cad9c1a5d013edb36\") " pod="kube-system/kube-apiserver-localhost" May 16 
00:21:06.273513 kubelet[2307]: I0516 00:21:06.273448 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34d6f5cad1e65a9cad9c1a5d013edb36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34d6f5cad1e65a9cad9c1a5d013edb36\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:06.273513 kubelet[2307]: I0516 00:21:06.273469 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.273513 kubelet[2307]: I0516 00:21:06.273488 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:21:06.273677 kubelet[2307]: I0516 00:21:06.273507 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34d6f5cad1e65a9cad9c1a5d013edb36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"34d6f5cad1e65a9cad9c1a5d013edb36\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:06.273677 kubelet[2307]: I0516 00:21:06.273527 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.273677 
kubelet[2307]: I0516 00:21:06.273547 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:06.497873 kubelet[2307]: I0516 00:21:06.497737 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:21:06.498549 kubelet[2307]: E0516 00:21:06.498506 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" May 16 00:21:06.550818 kubelet[2307]: E0516 00:21:06.550782 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:06.551510 containerd[1502]: time="2025-05-16T00:21:06.551468470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34d6f5cad1e65a9cad9c1a5d013edb36,Namespace:kube-system,Attempt:0,}" May 16 00:21:06.560829 kubelet[2307]: E0516 00:21:06.560797 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:06.561135 containerd[1502]: time="2025-05-16T00:21:06.561108421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 00:21:06.565692 kubelet[2307]: E0516 00:21:06.565645 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:06.566068 
containerd[1502]: time="2025-05-16T00:21:06.566034538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 00:21:06.694172 kubelet[2307]: E0516 00:21:06.694099 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:07.300848 kubelet[2307]: I0516 00:21:07.300782 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:21:07.301276 kubelet[2307]: E0516 00:21:07.301224 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" May 16 00:21:07.540798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717857384.mount: Deactivated successfully. 
May 16 00:21:07.552900 containerd[1502]: time="2025-05-16T00:21:07.552785564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.556018 containerd[1502]: time="2025-05-16T00:21:07.555965898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 16 00:21:07.557145 containerd[1502]: time="2025-05-16T00:21:07.557113692Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.560834 containerd[1502]: time="2025-05-16T00:21:07.560799506Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.562131 containerd[1502]: time="2025-05-16T00:21:07.562061477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:21:07.563949 containerd[1502]: time="2025-05-16T00:21:07.563887808Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.564865 containerd[1502]: time="2025-05-16T00:21:07.564834445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 16 00:21:07.566223 containerd[1502]: time="2025-05-16T00:21:07.566180570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 00:21:07.567585 
containerd[1502]: time="2025-05-16T00:21:07.567527607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.006352627s" May 16 00:21:07.572882 containerd[1502]: time="2025-05-16T00:21:07.572849218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.021289718s" May 16 00:21:07.574025 containerd[1502]: time="2025-05-16T00:21:07.573982531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.007873107s" May 16 00:21:07.667536 kubelet[2307]: E0516 00:21:07.667469 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="3.2s" May 16 00:21:08.025297 containerd[1502]: time="2025-05-16T00:21:08.024811603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:08.025297 containerd[1502]: time="2025-05-16T00:21:08.024972436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:08.025297 containerd[1502]: time="2025-05-16T00:21:08.025026638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.025297 containerd[1502]: time="2025-05-16T00:21:08.025170236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.099310 containerd[1502]: time="2025-05-16T00:21:08.093337002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:08.099310 containerd[1502]: time="2025-05-16T00:21:08.094753476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:08.099310 containerd[1502]: time="2025-05-16T00:21:08.094777145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.099310 containerd[1502]: time="2025-05-16T00:21:08.095574165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.116645 containerd[1502]: time="2025-05-16T00:21:08.116513200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:08.116645 containerd[1502]: time="2025-05-16T00:21:08.116591121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:08.116645 containerd[1502]: time="2025-05-16T00:21:08.116609750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.117026 containerd[1502]: time="2025-05-16T00:21:08.116951669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:08.141235 kubelet[2307]: W0516 00:21:08.141179 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:08.141483 kubelet[2307]: E0516 00:21:08.141443 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:08.157211 kubelet[2307]: W0516 00:21:08.157127 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:08.157211 kubelet[2307]: E0516 00:21:08.157183 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:08.258703 systemd[1]: Started cri-containerd-d7740f93d61737556363d3c54cdec25247f37d4e488d5cc9460bab2a5777b1e0.scope - libcontainer container d7740f93d61737556363d3c54cdec25247f37d4e488d5cc9460bab2a5777b1e0. 
May 16 00:21:08.263712 systemd[1]: Started cri-containerd-b627087dc4a04399e4b40349464bf0b4bf589ef2681a94d1844b224f3422de86.scope - libcontainer container b627087dc4a04399e4b40349464bf0b4bf589ef2681a94d1844b224f3422de86. May 16 00:21:08.269513 systemd[1]: Started cri-containerd-06da9a972805a1b113e0e95060a7f3b7a7168c20c38ecd6f319192d3c7c0230c.scope - libcontainer container 06da9a972805a1b113e0e95060a7f3b7a7168c20c38ecd6f319192d3c7c0230c. May 16 00:21:08.406107 containerd[1502]: time="2025-05-16T00:21:08.405897341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7740f93d61737556363d3c54cdec25247f37d4e488d5cc9460bab2a5777b1e0\"" May 16 00:21:08.410473 kubelet[2307]: E0516 00:21:08.410434 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:08.411476 containerd[1502]: time="2025-05-16T00:21:08.411441897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b627087dc4a04399e4b40349464bf0b4bf589ef2681a94d1844b224f3422de86\"" May 16 00:21:08.414386 kubelet[2307]: E0516 00:21:08.414351 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:08.415181 containerd[1502]: time="2025-05-16T00:21:08.415000709Z" level=info msg="CreateContainer within sandbox \"d7740f93d61737556363d3c54cdec25247f37d4e488d5cc9460bab2a5777b1e0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:21:08.416490 containerd[1502]: time="2025-05-16T00:21:08.416462135Z" level=info msg="CreateContainer within sandbox 
\"b627087dc4a04399e4b40349464bf0b4bf589ef2681a94d1844b224f3422de86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:21:08.433589 containerd[1502]: time="2025-05-16T00:21:08.433532596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:34d6f5cad1e65a9cad9c1a5d013edb36,Namespace:kube-system,Attempt:0,} returns sandbox id \"06da9a972805a1b113e0e95060a7f3b7a7168c20c38ecd6f319192d3c7c0230c\"" May 16 00:21:08.434541 kubelet[2307]: E0516 00:21:08.434488 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:08.437847 containerd[1502]: time="2025-05-16T00:21:08.437772450Z" level=info msg="CreateContainer within sandbox \"06da9a972805a1b113e0e95060a7f3b7a7168c20c38ecd6f319192d3c7c0230c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:21:08.443389 containerd[1502]: time="2025-05-16T00:21:08.443327987Z" level=info msg="CreateContainer within sandbox \"b627087dc4a04399e4b40349464bf0b4bf589ef2681a94d1844b224f3422de86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09a09e1669f49a6f79a27299611170f9c8a4f88d4e8018ef738dacfe575c25e5\"" May 16 00:21:08.445513 containerd[1502]: time="2025-05-16T00:21:08.445469774Z" level=info msg="CreateContainer within sandbox \"d7740f93d61737556363d3c54cdec25247f37d4e488d5cc9460bab2a5777b1e0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"77916ddd2f6650fc4ca67319333a3eca28efbd4255ae4a586959c3f164686579\"" May 16 00:21:08.445804 containerd[1502]: time="2025-05-16T00:21:08.445769916Z" level=info msg="StartContainer for \"09a09e1669f49a6f79a27299611170f9c8a4f88d4e8018ef738dacfe575c25e5\"" May 16 00:21:08.456750 containerd[1502]: time="2025-05-16T00:21:08.456689926Z" level=info msg="StartContainer for 
\"77916ddd2f6650fc4ca67319333a3eca28efbd4255ae4a586959c3f164686579\"" May 16 00:21:08.464497 containerd[1502]: time="2025-05-16T00:21:08.464450212Z" level=info msg="CreateContainer within sandbox \"06da9a972805a1b113e0e95060a7f3b7a7168c20c38ecd6f319192d3c7c0230c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"176a92c4d0254a44890b389fc1086ad0a20493f8c7fb97095a4e1c02b8080fbf\"" May 16 00:21:08.466303 containerd[1502]: time="2025-05-16T00:21:08.465880494Z" level=info msg="StartContainer for \"176a92c4d0254a44890b389fc1086ad0a20493f8c7fb97095a4e1c02b8080fbf\"" May 16 00:21:08.604493 systemd[1]: Started cri-containerd-09a09e1669f49a6f79a27299611170f9c8a4f88d4e8018ef738dacfe575c25e5.scope - libcontainer container 09a09e1669f49a6f79a27299611170f9c8a4f88d4e8018ef738dacfe575c25e5. May 16 00:21:08.614408 systemd[1]: run-containerd-runc-k8s.io-77916ddd2f6650fc4ca67319333a3eca28efbd4255ae4a586959c3f164686579-runc.4NR31x.mount: Deactivated successfully. May 16 00:21:08.632577 systemd[1]: Started cri-containerd-176a92c4d0254a44890b389fc1086ad0a20493f8c7fb97095a4e1c02b8080fbf.scope - libcontainer container 176a92c4d0254a44890b389fc1086ad0a20493f8c7fb97095a4e1c02b8080fbf. May 16 00:21:08.634843 systemd[1]: Started cri-containerd-77916ddd2f6650fc4ca67319333a3eca28efbd4255ae4a586959c3f164686579.scope - libcontainer container 77916ddd2f6650fc4ca67319333a3eca28efbd4255ae4a586959c3f164686579. 
May 16 00:21:08.744468 kubelet[2307]: W0516 00:21:08.744426 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused May 16 00:21:08.745079 kubelet[2307]: E0516 00:21:08.744481 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" May 16 00:21:08.910224 kubelet[2307]: I0516 00:21:08.910190 2307 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:21:08.914556 containerd[1502]: time="2025-05-16T00:21:08.914501309Z" level=info msg="StartContainer for \"176a92c4d0254a44890b389fc1086ad0a20493f8c7fb97095a4e1c02b8080fbf\" returns successfully" May 16 00:21:08.914952 containerd[1502]: time="2025-05-16T00:21:08.914730765Z" level=info msg="StartContainer for \"77916ddd2f6650fc4ca67319333a3eca28efbd4255ae4a586959c3f164686579\" returns successfully" May 16 00:21:08.915111 containerd[1502]: time="2025-05-16T00:21:08.915048573Z" level=info msg="StartContainer for \"09a09e1669f49a6f79a27299611170f9c8a4f88d4e8018ef738dacfe575c25e5\" returns successfully" May 16 00:21:08.925591 kubelet[2307]: E0516 00:21:08.925568 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:09.933302 kubelet[2307]: E0516 00:21:09.928018 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:09.933302 kubelet[2307]: E0516 00:21:09.928620 2307 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:10.767795 kubelet[2307]: E0516 00:21:10.767641 2307 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183fd9fdd474c694 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:21:04.657417876 +0000 UTC m=+1.415069068,LastTimestamp:2025-05-16 00:21:04.657417876 +0000 UTC m=+1.415069068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:21:10.901811 kubelet[2307]: E0516 00:21:10.901687 2307 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183fd9fdd4ced5c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:21:04.663320005 +0000 UTC m=+1.420971197,LastTimestamp:2025-05-16 00:21:04.663320005 +0000 UTC m=+1.420971197,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:21:10.903838 kubelet[2307]: I0516 00:21:10.903788 2307 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 00:21:10.903838 kubelet[2307]: E0516 00:21:10.903831 2307 kubelet_node_status.go:535] "Error 
updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:21:10.956529 kubelet[2307]: E0516 00:21:10.956455 2307 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 00:21:10.957083 kubelet[2307]: E0516 00:21:10.956683 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:11.238674 kubelet[2307]: E0516 00:21:11.238577 2307 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 00:21:11.238854 kubelet[2307]: E0516 00:21:11.238817 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:11.621388 kubelet[2307]: I0516 00:21:11.621230 2307 apiserver.go:52] "Watching apiserver" May 16 00:21:11.664317 kubelet[2307]: I0516 00:21:11.664195 2307 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:21:12.045468 kubelet[2307]: E0516 00:21:12.044388 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:12.932399 kubelet[2307]: E0516 00:21:12.932358 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:17.273959 kubelet[2307]: I0516 00:21:17.273853 2307 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.273825013 podStartE2EDuration="6.273825013s" podCreationTimestamp="2025-05-16 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:14.939709445 +0000 UTC m=+11.697360657" watchObservedRunningTime="2025-05-16 00:21:17.273825013 +0000 UTC m=+14.031476205" May 16 00:21:17.274781 kubelet[2307]: E0516 00:21:17.274736 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:17.942557 kubelet[2307]: E0516 00:21:17.942515 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:19.705535 systemd[1]: Reloading requested from client PID 2594 ('systemctl') (unit session-9.scope)... May 16 00:21:19.705552 systemd[1]: Reloading... May 16 00:21:19.833299 zram_generator::config[2633]: No configuration found. May 16 00:21:19.911961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:21:20.005477 systemd[1]: Reloading finished in 299 ms. May 16 00:21:20.053774 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 00:21:20.102299 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:21:20.102648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:20.102727 systemd[1]: kubelet.service: Consumed 1.577s CPU time, 136.5M memory peak, 0B memory swap peak. May 16 00:21:20.116869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 16 00:21:20.311632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 00:21:20.317536 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 00:21:20.358793 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:20.359573 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:21:20.359573 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:21:20.359573 kubelet[2678]: I0516 00:21:20.359395 2678 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:21:20.366411 kubelet[2678]: I0516 00:21:20.366371 2678 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:21:20.366411 kubelet[2678]: I0516 00:21:20.366397 2678 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:21:20.366641 kubelet[2678]: I0516 00:21:20.366619 2678 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:21:20.367847 kubelet[2678]: I0516 00:21:20.367817 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 16 00:21:20.369981 kubelet[2678]: I0516 00:21:20.369953 2678 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:21:20.375497 kubelet[2678]: E0516 00:21:20.375446 2678 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:21:20.375497 kubelet[2678]: I0516 00:21:20.375490 2678 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:21:20.381083 kubelet[2678]: I0516 00:21:20.381040 2678 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:21:20.381299 kubelet[2678]: I0516 00:21:20.381249 2678 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:21:20.381423 kubelet[2678]: I0516 00:21:20.381390 2678 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:21:20.381596 kubelet[2678]: I0516 00:21:20.381419 2678 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:21:20.381761 kubelet[2678]: I0516 00:21:20.381606 2678 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:21:20.381761 kubelet[2678]: I0516 00:21:20.381616 2678 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:21:20.381761 kubelet[2678]: I0516 00:21:20.381646 2678 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:20.381851 kubelet[2678]: I0516 00:21:20.381776 2678 kubelet.go:408] "Attempting 
to sync node with API server" May 16 00:21:20.381851 kubelet[2678]: I0516 00:21:20.381794 2678 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:21:20.381851 kubelet[2678]: I0516 00:21:20.381820 2678 kubelet.go:314] "Adding apiserver pod source" May 16 00:21:20.382788 kubelet[2678]: I0516 00:21:20.382716 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:21:20.385435 kubelet[2678]: I0516 00:21:20.385386 2678 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 00:21:20.385885 kubelet[2678]: I0516 00:21:20.385856 2678 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:21:20.388443 kubelet[2678]: I0516 00:21:20.388419 2678 server.go:1274] "Started kubelet" May 16 00:21:20.389426 kubelet[2678]: I0516 00:21:20.389375 2678 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:21:20.390174 kubelet[2678]: I0516 00:21:20.390132 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:21:20.391868 kubelet[2678]: I0516 00:21:20.390897 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:21:20.391868 kubelet[2678]: I0516 00:21:20.391113 2678 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:21:20.391868 kubelet[2678]: I0516 00:21:20.391635 2678 server.go:449] "Adding debug handlers to kubelet server" May 16 00:21:20.394117 kubelet[2678]: I0516 00:21:20.393136 2678 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:21:20.394117 kubelet[2678]: I0516 00:21:20.393529 2678 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:21:20.394117 kubelet[2678]: I0516 00:21:20.393640 2678 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:21:20.394990 kubelet[2678]: I0516 00:21:20.394950 2678 reconciler.go:26] "Reconciler: start to sync state" May 16 00:21:20.396074 kubelet[2678]: I0516 00:21:20.396050 2678 factory.go:221] Registration of the systemd container factory successfully May 16 00:21:20.396232 kubelet[2678]: I0516 00:21:20.396197 2678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:21:20.400256 kubelet[2678]: I0516 00:21:20.400211 2678 factory.go:221] Registration of the containerd container factory successfully May 16 00:21:20.412069 kubelet[2678]: I0516 00:21:20.411670 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:21:20.413940 kubelet[2678]: I0516 00:21:20.413901 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:21:20.414066 kubelet[2678]: I0516 00:21:20.414042 2678 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:21:20.414139 kubelet[2678]: I0516 00:21:20.414072 2678 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:21:20.415560 kubelet[2678]: E0516 00:21:20.414222 2678 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:21:20.458285 kubelet[2678]: I0516 00:21:20.458241 2678 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:21:20.458285 kubelet[2678]: I0516 00:21:20.458290 2678 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:21:20.458510 kubelet[2678]: I0516 00:21:20.458322 2678 state_mem.go:36] "Initialized new in-memory state store" May 16 00:21:20.458556 kubelet[2678]: I0516 00:21:20.458505 2678 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:21:20.458556 
kubelet[2678]: I0516 00:21:20.458521 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:21:20.458556 kubelet[2678]: I0516 00:21:20.458541 2678 policy_none.go:49] "None policy: Start" May 16 00:21:20.459977 kubelet[2678]: I0516 00:21:20.459827 2678 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:21:20.461300 kubelet[2678]: I0516 00:21:20.460634 2678 state_mem.go:35] "Initializing new in-memory state store" May 16 00:21:20.461597 kubelet[2678]: I0516 00:21:20.461562 2678 state_mem.go:75] "Updated machine memory state" May 16 00:21:20.468400 kubelet[2678]: I0516 00:21:20.467531 2678 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:21:20.468400 kubelet[2678]: I0516 00:21:20.467794 2678 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:21:20.468400 kubelet[2678]: I0516 00:21:20.467809 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:21:20.468400 kubelet[2678]: I0516 00:21:20.468121 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:21:20.499599 sudo[2713]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 00:21:20.500020 sudo[2713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 16 00:21:20.536594 kubelet[2678]: E0516 00:21:20.536506 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:21:20.536763 kubelet[2678]: E0516 00:21:20.536678 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 00:21:20.579227 kubelet[2678]: I0516 00:21:20.579101 2678 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 
16 00:21:20.595397 kubelet[2678]: I0516 00:21:20.595352 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:20.595543 kubelet[2678]: I0516 00:21:20.595420 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34d6f5cad1e65a9cad9c1a5d013edb36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"34d6f5cad1e65a9cad9c1a5d013edb36\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:20.595543 kubelet[2678]: I0516 00:21:20.595445 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34d6f5cad1e65a9cad9c1a5d013edb36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"34d6f5cad1e65a9cad9c1a5d013edb36\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:20.595543 kubelet[2678]: I0516 00:21:20.595470 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:20.595543 kubelet[2678]: I0516 00:21:20.595491 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 
16 00:21:20.595543 kubelet[2678]: I0516 00:21:20.595510 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34d6f5cad1e65a9cad9c1a5d013edb36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"34d6f5cad1e65a9cad9c1a5d013edb36\") " pod="kube-system/kube-apiserver-localhost" May 16 00:21:20.595717 kubelet[2678]: I0516 00:21:20.595550 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:20.595717 kubelet[2678]: I0516 00:21:20.595595 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:21:20.595717 kubelet[2678]: I0516 00:21:20.595621 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:21:20.645793 kubelet[2678]: I0516 00:21:20.645754 2678 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 00:21:20.645945 kubelet[2678]: I0516 00:21:20.645845 2678 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 00:21:20.837231 kubelet[2678]: E0516 00:21:20.837010 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:20.837231 kubelet[2678]: E0516 00:21:20.837073 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:20.837231 kubelet[2678]: E0516 00:21:20.837010 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:21.098149 sudo[2713]: pam_unix(sudo:session): session closed for user root May 16 00:21:21.383888 kubelet[2678]: I0516 00:21:21.383702 2678 apiserver.go:52] "Watching apiserver" May 16 00:21:21.393788 kubelet[2678]: I0516 00:21:21.393747 2678 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:21:21.429282 kubelet[2678]: E0516 00:21:21.429187 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:21.429505 kubelet[2678]: E0516 00:21:21.429480 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:21.494170 kubelet[2678]: E0516 00:21:21.494119 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 00:21:21.494356 kubelet[2678]: E0516 00:21:21.494329 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:21.575701 kubelet[2678]: I0516 00:21:21.575621 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5756015300000001 podStartE2EDuration="1.57560153s" podCreationTimestamp="2025-05-16 00:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:21.575300069 +0000 UTC m=+1.253338113" watchObservedRunningTime="2025-05-16 00:21:21.57560153 +0000 UTC m=+1.253639575" May 16 00:21:22.431672 kubelet[2678]: E0516 00:21:22.431616 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:22.983449 sudo[1697]: pam_unix(sudo:session): session closed for user root May 16 00:21:22.985363 sshd[1696]: Connection closed by 10.0.0.1 port 47248 May 16 00:21:22.991104 sshd-session[1694]: pam_unix(sshd:session): session closed for user core May 16 00:21:23.003137 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:47248.service: Deactivated successfully. May 16 00:21:23.006118 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:21:23.006404 systemd[1]: session-9.scope: Consumed 5.351s CPU time, 151.9M memory peak, 0B memory swap peak. May 16 00:21:23.007048 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. May 16 00:21:23.009178 systemd-logind[1482]: Removed session 9. May 16 00:21:23.434112 kubelet[2678]: E0516 00:21:23.434052 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:24.172455 kubelet[2678]: I0516 00:21:24.172418 2678 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:21:24.172831 containerd[1502]: time="2025-05-16T00:21:24.172753055Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 00:21:24.173241 kubelet[2678]: I0516 00:21:24.173065 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:21:24.806300 kubelet[2678]: I0516 00:21:24.803255 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.803233592 podStartE2EDuration="8.803233592s" podCreationTimestamp="2025-05-16 00:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:21.601420939 +0000 UTC m=+1.279458983" watchObservedRunningTime="2025-05-16 00:21:24.803233592 +0000 UTC m=+4.481271636" May 16 00:21:24.817502 systemd[1]: Created slice kubepods-burstable-pod4297356c_9b1c_4b33_a55f_ab4a3bdb244e.slice - libcontainer container kubepods-burstable-pod4297356c_9b1c_4b33_a55f_ab4a3bdb244e.slice. May 16 00:21:24.830203 systemd[1]: Created slice kubepods-besteffort-pod92006dd8_a2c2_4939_bd1c_569bc5147edd.slice - libcontainer container kubepods-besteffort-pod92006dd8_a2c2_4939_bd1c_569bc5147edd.slice. May 16 00:21:24.868940 systemd[1]: Created slice kubepods-besteffort-pod9d7e2df9_2deb_4729_8b96_e49cf44d4178.slice - libcontainer container kubepods-besteffort-pod9d7e2df9_2deb_4729_8b96_e49cf44d4178.slice. 
May 16 00:21:24.923292 kubelet[2678]: I0516 00:21:24.923201 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cni-path\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923292 kubelet[2678]: I0516 00:21:24.923294 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-cgroup\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923495 kubelet[2678]: I0516 00:21:24.923324 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-lib-modules\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923495 kubelet[2678]: I0516 00:21:24.923338 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-xtables-lock\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923495 kubelet[2678]: I0516 00:21:24.923352 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-bpf-maps\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923495 kubelet[2678]: I0516 00:21:24.923367 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-config-path\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923495 kubelet[2678]: I0516 00:21:24.923381 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-net\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923495 kubelet[2678]: I0516 00:21:24.923395 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzjnq\" (UniqueName: \"kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-kube-api-access-gzjnq\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923707 kubelet[2678]: I0516 00:21:24.923450 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-clustermesh-secrets\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923707 kubelet[2678]: I0516 00:21:24.923525 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-kernel\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923707 kubelet[2678]: I0516 00:21:24.923560 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/92006dd8-a2c2-4939-bd1c-569bc5147edd-xtables-lock\") pod \"kube-proxy-t4rwr\" (UID: \"92006dd8-a2c2-4939-bd1c-569bc5147edd\") " pod="kube-system/kube-proxy-t4rwr" May 16 00:21:24.923707 kubelet[2678]: I0516 00:21:24.923584 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hostproc\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923707 kubelet[2678]: I0516 00:21:24.923613 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hubble-tls\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923707 kubelet[2678]: I0516 00:21:24.923645 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92006dd8-a2c2-4939-bd1c-569bc5147edd-lib-modules\") pod \"kube-proxy-t4rwr\" (UID: \"92006dd8-a2c2-4939-bd1c-569bc5147edd\") " pod="kube-system/kube-proxy-t4rwr" May 16 00:21:24.923902 kubelet[2678]: I0516 00:21:24.923674 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndftd\" (UniqueName: \"kubernetes.io/projected/92006dd8-a2c2-4939-bd1c-569bc5147edd-kube-api-access-ndftd\") pod \"kube-proxy-t4rwr\" (UID: \"92006dd8-a2c2-4939-bd1c-569bc5147edd\") " pod="kube-system/kube-proxy-t4rwr" May 16 00:21:24.923902 kubelet[2678]: I0516 00:21:24.923697 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-run\") pod \"cilium-v5hps\" (UID: 
\"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923902 kubelet[2678]: I0516 00:21:24.923738 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-etc-cni-netd\") pod \"cilium-v5hps\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " pod="kube-system/cilium-v5hps" May 16 00:21:24.923902 kubelet[2678]: I0516 00:21:24.923757 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92006dd8-a2c2-4939-bd1c-569bc5147edd-kube-proxy\") pod \"kube-proxy-t4rwr\" (UID: \"92006dd8-a2c2-4939-bd1c-569bc5147edd\") " pod="kube-system/kube-proxy-t4rwr" May 16 00:21:25.024591 kubelet[2678]: I0516 00:21:25.024511 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvz65\" (UniqueName: \"kubernetes.io/projected/9d7e2df9-2deb-4729-8b96-e49cf44d4178-kube-api-access-tvz65\") pod \"cilium-operator-5d85765b45-xpbzh\" (UID: \"9d7e2df9-2deb-4729-8b96-e49cf44d4178\") " pod="kube-system/cilium-operator-5d85765b45-xpbzh" May 16 00:21:25.024757 kubelet[2678]: I0516 00:21:25.024662 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d7e2df9-2deb-4729-8b96-e49cf44d4178-cilium-config-path\") pod \"cilium-operator-5d85765b45-xpbzh\" (UID: \"9d7e2df9-2deb-4729-8b96-e49cf44d4178\") " pod="kube-system/cilium-operator-5d85765b45-xpbzh" May 16 00:21:25.424130 kubelet[2678]: E0516 00:21:25.424078 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.424859 containerd[1502]: time="2025-05-16T00:21:25.424820682Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5hps,Uid:4297356c-9b1c-4b33-a55f-ab4a3bdb244e,Namespace:kube-system,Attempt:0,}" May 16 00:21:25.438165 kubelet[2678]: E0516 00:21:25.438135 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.438571 containerd[1502]: time="2025-05-16T00:21:25.438532685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4rwr,Uid:92006dd8-a2c2-4939-bd1c-569bc5147edd,Namespace:kube-system,Attempt:0,}" May 16 00:21:25.472787 kubelet[2678]: E0516 00:21:25.472745 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.473124 containerd[1502]: time="2025-05-16T00:21:25.473088739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xpbzh,Uid:9d7e2df9-2deb-4729-8b96-e49cf44d4178,Namespace:kube-system,Attempt:0,}" May 16 00:21:25.875848 containerd[1502]: time="2025-05-16T00:21:25.875628065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:25.875848 containerd[1502]: time="2025-05-16T00:21:25.875718408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:25.875848 containerd[1502]: time="2025-05-16T00:21:25.875730321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:25.877395 containerd[1502]: time="2025-05-16T00:21:25.877305003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:25.880821 containerd[1502]: time="2025-05-16T00:21:25.880217848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:25.880821 containerd[1502]: time="2025-05-16T00:21:25.880722127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:25.880936 containerd[1502]: time="2025-05-16T00:21:25.880863743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:25.883767 containerd[1502]: time="2025-05-16T00:21:25.881950719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:25.885889 containerd[1502]: time="2025-05-16T00:21:25.885571354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:21:25.885889 containerd[1502]: time="2025-05-16T00:21:25.885635593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:21:25.885889 containerd[1502]: time="2025-05-16T00:21:25.885649582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:25.885889 containerd[1502]: time="2025-05-16T00:21:25.885742199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:21:25.901516 systemd[1]: Started cri-containerd-7cac17f6e5cf33a9b7e6e9f37a2bcb6484e6c8b845c3ae53e5c93a29dee7b37b.scope - libcontainer container 7cac17f6e5cf33a9b7e6e9f37a2bcb6484e6c8b845c3ae53e5c93a29dee7b37b. 
May 16 00:21:25.922621 systemd[1]: Started cri-containerd-2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960.scope - libcontainer container 2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960. May 16 00:21:25.924948 systemd[1]: Started cri-containerd-7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085.scope - libcontainer container 7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085. May 16 00:21:25.950664 containerd[1502]: time="2025-05-16T00:21:25.950616244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4rwr,Uid:92006dd8-a2c2-4939-bd1c-569bc5147edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cac17f6e5cf33a9b7e6e9f37a2bcb6484e6c8b845c3ae53e5c93a29dee7b37b\"" May 16 00:21:25.951677 kubelet[2678]: E0516 00:21:25.951654 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.956880 containerd[1502]: time="2025-05-16T00:21:25.956767082Z" level=info msg="CreateContainer within sandbox \"7cac17f6e5cf33a9b7e6e9f37a2bcb6484e6c8b845c3ae53e5c93a29dee7b37b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:21:25.963879 containerd[1502]: time="2025-05-16T00:21:25.963843879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5hps,Uid:4297356c-9b1c-4b33-a55f-ab4a3bdb244e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\"" May 16 00:21:25.965214 kubelet[2678]: E0516 00:21:25.964441 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.967212 containerd[1502]: time="2025-05-16T00:21:25.967186452Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:21:25.981064 containerd[1502]: time="2025-05-16T00:21:25.981007695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xpbzh,Uid:9d7e2df9-2deb-4729-8b96-e49cf44d4178,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085\"" May 16 00:21:25.982150 kubelet[2678]: E0516 00:21:25.981898 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:25.985413 containerd[1502]: time="2025-05-16T00:21:25.985341601Z" level=info msg="CreateContainer within sandbox \"7cac17f6e5cf33a9b7e6e9f37a2bcb6484e6c8b845c3ae53e5c93a29dee7b37b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"937d4b85c615517b6b8e1ccfc4c9cb34d975be124f52f63f9f677170dfeba8f5\"" May 16 00:21:25.987343 containerd[1502]: time="2025-05-16T00:21:25.986008689Z" level=info msg="StartContainer for \"937d4b85c615517b6b8e1ccfc4c9cb34d975be124f52f63f9f677170dfeba8f5\"" May 16 00:21:26.018582 systemd[1]: Started cri-containerd-937d4b85c615517b6b8e1ccfc4c9cb34d975be124f52f63f9f677170dfeba8f5.scope - libcontainer container 937d4b85c615517b6b8e1ccfc4c9cb34d975be124f52f63f9f677170dfeba8f5. 
May 16 00:21:26.057512 containerd[1502]: time="2025-05-16T00:21:26.057438470Z" level=info msg="StartContainer for \"937d4b85c615517b6b8e1ccfc4c9cb34d975be124f52f63f9f677170dfeba8f5\" returns successfully" May 16 00:21:26.440962 kubelet[2678]: E0516 00:21:26.440934 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:26.454382 kubelet[2678]: I0516 00:21:26.454307 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4rwr" podStartSLOduration=2.45428594 podStartE2EDuration="2.45428594s" podCreationTimestamp="2025-05-16 00:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:21:26.453920181 +0000 UTC m=+6.131958225" watchObservedRunningTime="2025-05-16 00:21:26.45428594 +0000 UTC m=+6.132323984" May 16 00:21:27.328982 kubelet[2678]: E0516 00:21:27.328575 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:27.447503 kubelet[2678]: E0516 00:21:27.447456 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:27.605657 kubelet[2678]: E0516 00:21:27.605033 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:28.449381 kubelet[2678]: E0516 00:21:28.449328 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:28.990317 kubelet[2678]: E0516 00:21:28.990213 
2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:29.451744 kubelet[2678]: E0516 00:21:29.451697 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:30.452950 kubelet[2678]: E0516 00:21:30.452902 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:37.349233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331454940.mount: Deactivated successfully. May 16 00:21:46.569405 containerd[1502]: time="2025-05-16T00:21:46.569325527Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:46.632624 containerd[1502]: time="2025-05-16T00:21:46.632534712Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 00:21:46.694805 containerd[1502]: time="2025-05-16T00:21:46.694720855Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:21:46.696950 containerd[1502]: time="2025-05-16T00:21:46.696913399Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size 
\"166719855\" in 20.72940608s" May 16 00:21:46.696950 containerd[1502]: time="2025-05-16T00:21:46.696946520Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 00:21:46.715738 containerd[1502]: time="2025-05-16T00:21:46.715565280Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:21:46.748630 containerd[1502]: time="2025-05-16T00:21:46.748581668Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:21:47.952730 containerd[1502]: time="2025-05-16T00:21:47.952665083Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\"" May 16 00:21:47.953374 containerd[1502]: time="2025-05-16T00:21:47.953253793Z" level=info msg="StartContainer for \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\"" May 16 00:21:47.987459 systemd[1]: Started cri-containerd-ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f.scope - libcontainer container ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f. May 16 00:21:48.028455 systemd[1]: cri-containerd-ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f.scope: Deactivated successfully. 
May 16 00:21:48.147539 containerd[1502]: time="2025-05-16T00:21:48.147472889Z" level=info msg="StartContainer for \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\" returns successfully" May 16 00:21:48.167458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f-rootfs.mount: Deactivated successfully. May 16 00:21:48.693036 kubelet[2678]: E0516 00:21:48.693001 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:49.694246 kubelet[2678]: E0516 00:21:49.694198 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:50.406416 containerd[1502]: time="2025-05-16T00:21:50.406110318Z" level=info msg="shim disconnected" id=ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f namespace=k8s.io May 16 00:21:50.406416 containerd[1502]: time="2025-05-16T00:21:50.406190848Z" level=warning msg="cleaning up after shim disconnected" id=ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f namespace=k8s.io May 16 00:21:50.406416 containerd[1502]: time="2025-05-16T00:21:50.406202410Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:21:50.701173 kubelet[2678]: E0516 00:21:50.698594 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:50.703968 containerd[1502]: time="2025-05-16T00:21:50.703933045Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:21:51.243813 containerd[1502]: 
time="2025-05-16T00:21:51.243719327Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\"" May 16 00:21:51.244471 containerd[1502]: time="2025-05-16T00:21:51.244432357Z" level=info msg="StartContainer for \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\"" May 16 00:21:51.274413 systemd[1]: Started cri-containerd-2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24.scope - libcontainer container 2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24. May 16 00:21:51.313496 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:21:51.314025 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 00:21:51.314108 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 00:21:51.322566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 00:21:51.322817 systemd[1]: cri-containerd-2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24.scope: Deactivated successfully. May 16 00:21:51.357477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 00:21:51.461595 containerd[1502]: time="2025-05-16T00:21:51.461514561Z" level=info msg="StartContainer for \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\" returns successfully" May 16 00:21:51.480720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24-rootfs.mount: Deactivated successfully. 
May 16 00:21:51.687705 containerd[1502]: time="2025-05-16T00:21:51.687637761Z" level=info msg="shim disconnected" id=2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24 namespace=k8s.io May 16 00:21:51.687705 containerd[1502]: time="2025-05-16T00:21:51.687699777Z" level=warning msg="cleaning up after shim disconnected" id=2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24 namespace=k8s.io May 16 00:21:51.687705 containerd[1502]: time="2025-05-16T00:21:51.687712931Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:21:51.702712 kubelet[2678]: E0516 00:21:51.702667 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:52.705801 kubelet[2678]: E0516 00:21:52.705773 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:52.709745 containerd[1502]: time="2025-05-16T00:21:52.708616248Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:21:54.242711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162367917.mount: Deactivated successfully. May 16 00:21:54.321929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731900409.mount: Deactivated successfully. 
May 16 00:21:55.637690 containerd[1502]: time="2025-05-16T00:21:55.637631808Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\"" May 16 00:21:55.638383 containerd[1502]: time="2025-05-16T00:21:55.638333586Z" level=info msg="StartContainer for \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\"" May 16 00:21:55.671435 systemd[1]: Started cri-containerd-ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90.scope - libcontainer container ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90. May 16 00:21:55.718931 systemd[1]: cri-containerd-ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90.scope: Deactivated successfully. May 16 00:21:55.974068 containerd[1502]: time="2025-05-16T00:21:55.974014627Z" level=info msg="StartContainer for \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\" returns successfully" May 16 00:21:55.994087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90-rootfs.mount: Deactivated successfully. 
May 16 00:21:56.716011 kubelet[2678]: E0516 00:21:56.715980 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:56.830569 containerd[1502]: time="2025-05-16T00:21:56.830510096Z" level=info msg="shim disconnected" id=ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90 namespace=k8s.io May 16 00:21:56.830569 containerd[1502]: time="2025-05-16T00:21:56.830567655Z" level=warning msg="cleaning up after shim disconnected" id=ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90 namespace=k8s.io May 16 00:21:56.831015 containerd[1502]: time="2025-05-16T00:21:56.830582894Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:21:57.720538 kubelet[2678]: E0516 00:21:57.720489 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:21:57.726308 containerd[1502]: time="2025-05-16T00:21:57.723866419Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:21:58.445291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167371903.mount: Deactivated successfully. 
May 16 00:21:59.066814 containerd[1502]: time="2025-05-16T00:21:59.066762839Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\"" May 16 00:21:59.067451 containerd[1502]: time="2025-05-16T00:21:59.067392058Z" level=info msg="StartContainer for \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\"" May 16 00:21:59.094750 systemd[1]: run-containerd-runc-k8s.io-322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7-runc.Fd5lb7.mount: Deactivated successfully. May 16 00:21:59.108509 systemd[1]: Started cri-containerd-322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7.scope - libcontainer container 322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7. May 16 00:21:59.433305 systemd[1]: cri-containerd-322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7.scope: Deactivated successfully. May 16 00:21:59.981838 containerd[1502]: time="2025-05-16T00:21:59.981733962Z" level=info msg="StartContainer for \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\" returns successfully" May 16 00:22:00.002192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7-rootfs.mount: Deactivated successfully. 
May 16 00:22:00.852682 containerd[1502]: time="2025-05-16T00:22:00.852610241Z" level=info msg="shim disconnected" id=322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7 namespace=k8s.io May 16 00:22:00.852682 containerd[1502]: time="2025-05-16T00:22:00.852676477Z" level=warning msg="cleaning up after shim disconnected" id=322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7 namespace=k8s.io May 16 00:22:00.852682 containerd[1502]: time="2025-05-16T00:22:00.852685765Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:22:00.862976 containerd[1502]: time="2025-05-16T00:22:00.862922942Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:00.910907 containerd[1502]: time="2025-05-16T00:22:00.910794170Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 16 00:22:00.939704 containerd[1502]: time="2025-05-16T00:22:00.939626403Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 00:22:00.941386 containerd[1502]: time="2025-05-16T00:22:00.941322243Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 14.225710126s" May 16 00:22:00.941386 containerd[1502]: time="2025-05-16T00:22:00.941378660Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 16 00:22:00.943741 containerd[1502]: time="2025-05-16T00:22:00.943701206Z" level=info msg="CreateContainer within sandbox \"7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:22:00.989954 kubelet[2678]: E0516 00:22:00.989900 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:22:00.991798 containerd[1502]: time="2025-05-16T00:22:00.991759539Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:22:01.300446 containerd[1502]: time="2025-05-16T00:22:01.300159726Z" level=info msg="CreateContainer within sandbox \"7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\"" May 16 00:22:01.301161 containerd[1502]: time="2025-05-16T00:22:01.300942551Z" level=info msg="StartContainer for \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\"" May 16 00:22:01.323591 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846). May 16 00:22:01.329643 systemd[1]: Started cri-containerd-0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4.scope - libcontainer container 0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4. 
May 16 00:22:01.449488 containerd[1502]: time="2025-05-16T00:22:01.449434747Z" level=info msg="StartContainer for \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\" returns successfully" May 16 00:22:01.449648 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:22:01.453200 sshd-session[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:22:01.463509 systemd-logind[1482]: New session 10 of user core. May 16 00:22:01.467735 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 00:22:01.528053 containerd[1502]: time="2025-05-16T00:22:01.528000567Z" level=info msg="CreateContainer within sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\"" May 16 00:22:01.532705 containerd[1502]: time="2025-05-16T00:22:01.531246509Z" level=info msg="StartContainer for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\"" May 16 00:22:01.578543 systemd[1]: Started cri-containerd-e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8.scope - libcontainer container e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8. 
May 16 00:22:01.777304 containerd[1502]: time="2025-05-16T00:22:01.776115501Z" level=info msg="StartContainer for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" returns successfully" May 16 00:22:01.932029 kubelet[2678]: I0516 00:22:01.931961 2678 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 00:22:02.029538 kubelet[2678]: E0516 00:22:02.029501 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:22:02.030179 kubelet[2678]: E0516 00:22:02.029501 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:22:02.188951 sshd[3370]: Connection closed by 10.0.0.1 port 36846 May 16 00:22:02.191657 sshd-session[3342]: pam_unix(sshd:session): session closed for user core May 16 00:22:02.196031 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:36846.service: Deactivated successfully. May 16 00:22:02.198610 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:22:02.200662 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. May 16 00:22:02.202143 systemd-logind[1482]: Removed session 10. 
May 16 00:22:02.715298 kubelet[2678]: I0516 00:22:02.714939 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xpbzh" podStartSLOduration=3.755348904 podStartE2EDuration="38.714914813s" podCreationTimestamp="2025-05-16 00:21:24 +0000 UTC" firstStartedPulling="2025-05-16 00:21:25.98274246 +0000 UTC m=+5.660780504" lastFinishedPulling="2025-05-16 00:22:00.942308369 +0000 UTC m=+40.620346413" observedRunningTime="2025-05-16 00:22:02.46352386 +0000 UTC m=+42.141561904" watchObservedRunningTime="2025-05-16 00:22:02.714914813 +0000 UTC m=+42.392952857" May 16 00:22:02.728640 systemd[1]: Created slice kubepods-burstable-pod5decce39_3f9f_45f4_a3e9_a47732116978.slice - libcontainer container kubepods-burstable-pod5decce39_3f9f_45f4_a3e9_a47732116978.slice. May 16 00:22:02.733501 systemd[1]: Created slice kubepods-burstable-pod5320a2d4_ec7b_4bcc_8716_4ac77c68463f.slice - libcontainer container kubepods-burstable-pod5320a2d4_ec7b_4bcc_8716_4ac77c68463f.slice. 
May 16 00:22:02.741329 kubelet[2678]: I0516 00:22:02.740729 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v5hps" podStartSLOduration=17.991203839 podStartE2EDuration="38.740708249s" podCreationTimestamp="2025-05-16 00:21:24 +0000 UTC" firstStartedPulling="2025-05-16 00:21:25.965900995 +0000 UTC m=+5.643939049" lastFinishedPulling="2025-05-16 00:21:46.715405395 +0000 UTC m=+26.393443459" observedRunningTime="2025-05-16 00:22:02.740511425 +0000 UTC m=+42.418549469" watchObservedRunningTime="2025-05-16 00:22:02.740708249 +0000 UTC m=+42.418746293"
May 16 00:22:02.864821 kubelet[2678]: I0516 00:22:02.864745 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5320a2d4-ec7b-4bcc-8716-4ac77c68463f-config-volume\") pod \"coredns-7c65d6cfc9-4l4b6\" (UID: \"5320a2d4-ec7b-4bcc-8716-4ac77c68463f\") " pod="kube-system/coredns-7c65d6cfc9-4l4b6"
May 16 00:22:02.864821 kubelet[2678]: I0516 00:22:02.864803 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tktn\" (UniqueName: \"kubernetes.io/projected/5decce39-3f9f-45f4-a3e9-a47732116978-kube-api-access-6tktn\") pod \"coredns-7c65d6cfc9-dmcr2\" (UID: \"5decce39-3f9f-45f4-a3e9-a47732116978\") " pod="kube-system/coredns-7c65d6cfc9-dmcr2"
May 16 00:22:02.864821 kubelet[2678]: I0516 00:22:02.864834 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5decce39-3f9f-45f4-a3e9-a47732116978-config-volume\") pod \"coredns-7c65d6cfc9-dmcr2\" (UID: \"5decce39-3f9f-45f4-a3e9-a47732116978\") " pod="kube-system/coredns-7c65d6cfc9-dmcr2"
May 16 00:22:02.865021 kubelet[2678]: I0516 00:22:02.864855 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5672k\" (UniqueName: \"kubernetes.io/projected/5320a2d4-ec7b-4bcc-8716-4ac77c68463f-kube-api-access-5672k\") pod \"coredns-7c65d6cfc9-4l4b6\" (UID: \"5320a2d4-ec7b-4bcc-8716-4ac77c68463f\") " pod="kube-system/coredns-7c65d6cfc9-4l4b6"
May 16 00:22:03.019352 kubelet[2678]: E0516 00:22:03.019033 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:03.019352 kubelet[2678]: E0516 00:22:03.019126 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:03.332883 kubelet[2678]: E0516 00:22:03.332710 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:03.336314 kubelet[2678]: E0516 00:22:03.336081 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:03.339045 containerd[1502]: time="2025-05-16T00:22:03.339001889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4l4b6,Uid:5320a2d4-ec7b-4bcc-8716-4ac77c68463f,Namespace:kube-system,Attempt:0,}"
May 16 00:22:03.340801 containerd[1502]: time="2025-05-16T00:22:03.340751494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmcr2,Uid:5decce39-3f9f-45f4-a3e9-a47732116978,Namespace:kube-system,Attempt:0,}"
May 16 00:22:04.021104 kubelet[2678]: E0516 00:22:04.021072 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:05.022433 kubelet[2678]: E0516 00:22:05.022392 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:06.321612 systemd-networkd[1442]: cilium_host: Link UP
May 16 00:22:06.321776 systemd-networkd[1442]: cilium_net: Link UP
May 16 00:22:06.321961 systemd-networkd[1442]: cilium_net: Gained carrier
May 16 00:22:06.322131 systemd-networkd[1442]: cilium_host: Gained carrier
May 16 00:22:06.434054 systemd-networkd[1442]: cilium_vxlan: Link UP
May 16 00:22:06.434066 systemd-networkd[1442]: cilium_vxlan: Gained carrier
May 16 00:22:06.697290 kernel: NET: Registered PF_ALG protocol family
May 16 00:22:06.800503 systemd-networkd[1442]: cilium_net: Gained IPv6LL
May 16 00:22:07.202419 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:36858.service - OpenSSH per-connection server daemon (10.0.0.1:36858).
May 16 00:22:07.257654 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 36858 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:07.259626 sshd-session[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:07.266107 systemd-logind[1482]: New session 11 of user core.
May 16 00:22:07.271621 systemd[1]: Started session-11.scope - Session 11 of User core.
May 16 00:22:07.329583 systemd-networkd[1442]: cilium_host: Gained IPv6LL
May 16 00:22:07.408582 sshd[3795]: Connection closed by 10.0.0.1 port 36858
May 16 00:22:07.408995 sshd-session[3756]: pam_unix(sshd:session): session closed for user core
May 16 00:22:07.414172 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:36858.service: Deactivated successfully.
May 16 00:22:07.416918 systemd[1]: session-11.scope: Deactivated successfully.
May 16 00:22:07.418086 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit.
May 16 00:22:07.419573 systemd-logind[1482]: Removed session 11.
May 16 00:22:07.464032 systemd-networkd[1442]: lxc_health: Link UP
May 16 00:22:07.471200 systemd-networkd[1442]: lxc_health: Gained carrier
May 16 00:22:07.842301 systemd-networkd[1442]: lxcc951bf784546: Link UP
May 16 00:22:07.850298 kernel: eth0: renamed from tmp88df9
May 16 00:22:07.857740 systemd-networkd[1442]: lxc41e781401d67: Link UP
May 16 00:22:07.882343 kernel: eth0: renamed from tmp263f8
May 16 00:22:07.886590 systemd-networkd[1442]: lxcc951bf784546: Gained carrier
May 16 00:22:07.886836 systemd-networkd[1442]: lxc41e781401d67: Gained carrier
May 16 00:22:08.480558 systemd-networkd[1442]: cilium_vxlan: Gained IPv6LL
May 16 00:22:09.120450 systemd-networkd[1442]: lxc_health: Gained IPv6LL
May 16 00:22:09.248628 systemd-networkd[1442]: lxc41e781401d67: Gained IPv6LL
May 16 00:22:09.426301 kubelet[2678]: E0516 00:22:09.426237 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:09.568439 systemd-networkd[1442]: lxcc951bf784546: Gained IPv6LL
May 16 00:22:10.033048 kubelet[2678]: E0516 00:22:10.032501 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:11.974628 containerd[1502]: time="2025-05-16T00:22:11.974478318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:22:11.974628 containerd[1502]: time="2025-05-16T00:22:11.974539396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:22:11.974628 containerd[1502]: time="2025-05-16T00:22:11.974570755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:22:11.975125 containerd[1502]: time="2025-05-16T00:22:11.974746813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:22:11.975125 containerd[1502]: time="2025-05-16T00:22:11.974844360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:22:11.975125 containerd[1502]: time="2025-05-16T00:22:11.974862585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:22:11.975125 containerd[1502]: time="2025-05-16T00:22:11.974992283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:22:11.975417 containerd[1502]: time="2025-05-16T00:22:11.975363655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:22:11.994239 systemd[1]: run-containerd-runc-k8s.io-88df97ab443392f35fd7ac4ea5420823ab74ae9b9be0a81a0c1e2225a31a1ddf-runc.HxCz4t.mount: Deactivated successfully.
May 16 00:22:12.005054 systemd[1]: Started cri-containerd-88df97ab443392f35fd7ac4ea5420823ab74ae9b9be0a81a0c1e2225a31a1ddf.scope - libcontainer container 88df97ab443392f35fd7ac4ea5420823ab74ae9b9be0a81a0c1e2225a31a1ddf.
May 16 00:22:12.012660 systemd[1]: Started cri-containerd-263f8ecb9c42d8863040d0b315d9c00d0a5531c6335a590b8e1cd5114b4fd29b.scope - libcontainer container 263f8ecb9c42d8863040d0b315d9c00d0a5531c6335a590b8e1cd5114b4fd29b.
May 16 00:22:12.023223 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:22:12.031665 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:22:12.063811 containerd[1502]: time="2025-05-16T00:22:12.063751242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4l4b6,Uid:5320a2d4-ec7b-4bcc-8716-4ac77c68463f,Namespace:kube-system,Attempt:0,} returns sandbox id \"88df97ab443392f35fd7ac4ea5420823ab74ae9b9be0a81a0c1e2225a31a1ddf\""
May 16 00:22:12.066149 kubelet[2678]: E0516 00:22:12.066043 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:12.069043 containerd[1502]: time="2025-05-16T00:22:12.068997763Z" level=info msg="CreateContainer within sandbox \"88df97ab443392f35fd7ac4ea5420823ab74ae9b9be0a81a0c1e2225a31a1ddf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 00:22:12.069953 containerd[1502]: time="2025-05-16T00:22:12.069922746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmcr2,Uid:5decce39-3f9f-45f4-a3e9-a47732116978,Namespace:kube-system,Attempt:0,} returns sandbox id \"263f8ecb9c42d8863040d0b315d9c00d0a5531c6335a590b8e1cd5114b4fd29b\""
May 16 00:22:12.070767 kubelet[2678]: E0516 00:22:12.070728 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:12.074027 containerd[1502]: time="2025-05-16T00:22:12.073974226Z" level=info msg="CreateContainer within sandbox \"263f8ecb9c42d8863040d0b315d9c00d0a5531c6335a590b8e1cd5114b4fd29b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 16 00:22:12.100641 containerd[1502]: time="2025-05-16T00:22:12.100554369Z" level=info msg="CreateContainer within sandbox \"88df97ab443392f35fd7ac4ea5420823ab74ae9b9be0a81a0c1e2225a31a1ddf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53713eb6f012275a7b6d81888ab8c7cc53a596002787834c893ea5c3a8cf7fca\""
May 16 00:22:12.102812 containerd[1502]: time="2025-05-16T00:22:12.102756632Z" level=info msg="StartContainer for \"53713eb6f012275a7b6d81888ab8c7cc53a596002787834c893ea5c3a8cf7fca\""
May 16 00:22:12.114288 containerd[1502]: time="2025-05-16T00:22:12.114138029Z" level=info msg="CreateContainer within sandbox \"263f8ecb9c42d8863040d0b315d9c00d0a5531c6335a590b8e1cd5114b4fd29b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"410c47eedc264e9b71be1bde7a233c1161249e9cf23f223ce0359abd58251e4d\""
May 16 00:22:12.114750 containerd[1502]: time="2025-05-16T00:22:12.114719584Z" level=info msg="StartContainer for \"410c47eedc264e9b71be1bde7a233c1161249e9cf23f223ce0359abd58251e4d\""
May 16 00:22:12.133571 systemd[1]: Started cri-containerd-53713eb6f012275a7b6d81888ab8c7cc53a596002787834c893ea5c3a8cf7fca.scope - libcontainer container 53713eb6f012275a7b6d81888ab8c7cc53a596002787834c893ea5c3a8cf7fca.
May 16 00:22:12.146432 systemd[1]: Started cri-containerd-410c47eedc264e9b71be1bde7a233c1161249e9cf23f223ce0359abd58251e4d.scope - libcontainer container 410c47eedc264e9b71be1bde7a233c1161249e9cf23f223ce0359abd58251e4d.
May 16 00:22:12.176843 containerd[1502]: time="2025-05-16T00:22:12.176729430Z" level=info msg="StartContainer for \"53713eb6f012275a7b6d81888ab8c7cc53a596002787834c893ea5c3a8cf7fca\" returns successfully"
May 16 00:22:12.186090 containerd[1502]: time="2025-05-16T00:22:12.186029656Z" level=info msg="StartContainer for \"410c47eedc264e9b71be1bde7a233c1161249e9cf23f223ce0359abd58251e4d\" returns successfully"
May 16 00:22:12.424734 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:33510.service - OpenSSH per-connection server daemon (10.0.0.1:33510).
May 16 00:22:12.470612 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 33510 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:12.472595 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:12.477513 systemd-logind[1482]: New session 12 of user core.
May 16 00:22:12.485459 systemd[1]: Started session-12.scope - Session 12 of User core.
May 16 00:22:12.629548 sshd[4082]: Connection closed by 10.0.0.1 port 33510
May 16 00:22:12.629994 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
May 16 00:22:12.635025 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:33510.service: Deactivated successfully.
May 16 00:22:12.637873 systemd[1]: session-12.scope: Deactivated successfully.
May 16 00:22:12.638915 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit.
May 16 00:22:12.640066 systemd-logind[1482]: Removed session 12.
May 16 00:22:12.981063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900478949.mount: Deactivated successfully.
May 16 00:22:13.048931 kubelet[2678]: E0516 00:22:13.048621 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:13.050539 kubelet[2678]: E0516 00:22:13.050495 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:13.512498 kubelet[2678]: I0516 00:22:13.512427 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dmcr2" podStartSLOduration=49.512405971 podStartE2EDuration="49.512405971s" podCreationTimestamp="2025-05-16 00:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:22:13.324597572 +0000 UTC m=+53.002635626" watchObservedRunningTime="2025-05-16 00:22:13.512405971 +0000 UTC m=+53.190444015"
May 16 00:22:14.053135 kubelet[2678]: E0516 00:22:14.052979 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:14.053829 kubelet[2678]: E0516 00:22:14.053765 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:14.238237 kubelet[2678]: I0516 00:22:14.238089 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4l4b6" podStartSLOduration=50.238050246 podStartE2EDuration="50.238050246s" podCreationTimestamp="2025-05-16 00:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:22:13.512929436 +0000 UTC m=+53.190967480" watchObservedRunningTime="2025-05-16 00:22:14.238050246 +0000 UTC m=+53.916088300"
May 16 00:22:15.054883 kubelet[2678]: E0516 00:22:15.054840 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:15.055538 kubelet[2678]: E0516 00:22:15.055126 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:17.644139 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:33518.service - OpenSSH per-connection server daemon (10.0.0.1:33518).
May 16 00:22:17.694011 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 33518 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:17.696280 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:17.702562 systemd-logind[1482]: New session 13 of user core.
May 16 00:22:17.709706 systemd[1]: Started session-13.scope - Session 13 of User core.
May 16 00:22:17.832059 sshd[4116]: Connection closed by 10.0.0.1 port 33518
May 16 00:22:17.832477 sshd-session[4114]: pam_unix(sshd:session): session closed for user core
May 16 00:22:17.836811 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:33518.service: Deactivated successfully.
May 16 00:22:17.838968 systemd[1]: session-13.scope: Deactivated successfully.
May 16 00:22:17.839775 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit.
May 16 00:22:17.841028 systemd-logind[1482]: Removed session 13.
May 16 00:22:22.851526 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080).
May 16 00:22:22.893255 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:22.895302 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:22.900181 systemd-logind[1482]: New session 14 of user core.
May 16 00:22:22.911453 systemd[1]: Started session-14.scope - Session 14 of User core.
May 16 00:22:23.044063 sshd[4134]: Connection closed by 10.0.0.1 port 45080
May 16 00:22:23.044516 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
May 16 00:22:23.049127 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:45080.service: Deactivated successfully.
May 16 00:22:23.051252 systemd[1]: session-14.scope: Deactivated successfully.
May 16 00:22:23.052076 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit.
May 16 00:22:23.053130 systemd-logind[1482]: Removed session 14.
May 16 00:22:28.057373 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:51454.service - OpenSSH per-connection server daemon (10.0.0.1:51454).
May 16 00:22:28.279762 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 51454 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:28.281435 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:28.285311 systemd-logind[1482]: New session 15 of user core.
May 16 00:22:28.299434 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 00:22:28.434552 sshd[4151]: Connection closed by 10.0.0.1 port 51454
May 16 00:22:28.434928 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
May 16 00:22:28.438321 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:51454.service: Deactivated successfully.
May 16 00:22:28.440040 systemd[1]: session-15.scope: Deactivated successfully.
May 16 00:22:28.440614 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit.
May 16 00:22:28.441392 systemd-logind[1482]: Removed session 15.
May 16 00:22:31.415139 kubelet[2678]: E0516 00:22:31.415093 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:33.446618 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:51464.service - OpenSSH per-connection server daemon (10.0.0.1:51464).
May 16 00:22:33.487047 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:33.488662 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:33.493320 systemd-logind[1482]: New session 16 of user core.
May 16 00:22:33.498438 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 00:22:33.620759 sshd[4166]: Connection closed by 10.0.0.1 port 51464
May 16 00:22:33.621419 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
May 16 00:22:33.625207 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:51464.service: Deactivated successfully.
May 16 00:22:33.627472 systemd[1]: session-16.scope: Deactivated successfully.
May 16 00:22:33.628092 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit.
May 16 00:22:33.629141 systemd-logind[1482]: Removed session 16.
May 16 00:22:38.415856 kubelet[2678]: E0516 00:22:38.415809 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:38.640489 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:39750.service - OpenSSH per-connection server daemon (10.0.0.1:39750).
May 16 00:22:38.700720 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 39750 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:38.702330 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:38.706989 systemd-logind[1482]: New session 17 of user core.
May 16 00:22:38.719483 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 00:22:38.848940 sshd[4181]: Connection closed by 10.0.0.1 port 39750
May 16 00:22:38.849472 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
May 16 00:22:38.866018 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:39750.service: Deactivated successfully.
May 16 00:22:38.868300 systemd[1]: session-17.scope: Deactivated successfully.
May 16 00:22:38.870721 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit.
May 16 00:22:38.872421 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:39766.service - OpenSSH per-connection server daemon (10.0.0.1:39766).
May 16 00:22:38.873302 systemd-logind[1482]: Removed session 17.
May 16 00:22:38.919910 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 39766 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:38.921959 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:38.926558 systemd-logind[1482]: New session 18 of user core.
May 16 00:22:38.940470 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 00:22:39.414808 sshd[4196]: Connection closed by 10.0.0.1 port 39766
May 16 00:22:39.415211 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
May 16 00:22:39.427415 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:39766.service: Deactivated successfully.
May 16 00:22:39.429095 systemd[1]: session-18.scope: Deactivated successfully.
May 16 00:22:39.430855 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit.
May 16 00:22:39.439769 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:39780.service - OpenSSH per-connection server daemon (10.0.0.1:39780).
May 16 00:22:39.441431 systemd-logind[1482]: Removed session 18.
May 16 00:22:39.480039 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 39780 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:39.481785 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:39.485960 systemd-logind[1482]: New session 19 of user core.
May 16 00:22:39.495411 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 00:22:39.625312 sshd[4208]: Connection closed by 10.0.0.1 port 39780
May 16 00:22:39.625742 sshd-session[4206]: pam_unix(sshd:session): session closed for user core
May 16 00:22:39.628884 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:39780.service: Deactivated successfully.
May 16 00:22:39.632249 systemd[1]: session-19.scope: Deactivated successfully.
May 16 00:22:39.635087 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit.
May 16 00:22:39.636938 systemd-logind[1482]: Removed session 19.
May 16 00:22:44.638178 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:39794.service - OpenSSH per-connection server daemon (10.0.0.1:39794).
May 16 00:22:44.678087 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 39794 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:44.680125 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:44.684804 systemd-logind[1482]: New session 20 of user core.
May 16 00:22:44.693470 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 00:22:44.819574 sshd[4224]: Connection closed by 10.0.0.1 port 39794
May 16 00:22:44.819992 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
May 16 00:22:44.824843 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:39794.service: Deactivated successfully.
May 16 00:22:44.827158 systemd[1]: session-20.scope: Deactivated successfully.
May 16 00:22:44.828026 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit.
May 16 00:22:44.829038 systemd-logind[1482]: Removed session 20.
May 16 00:22:46.415313 kubelet[2678]: E0516 00:22:46.415186 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:49.415218 kubelet[2678]: E0516 00:22:49.415141 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:22:49.832860 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:45984.service - OpenSSH per-connection server daemon (10.0.0.1:45984).
May 16 00:22:49.871422 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 45984 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:49.873331 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:49.877668 systemd-logind[1482]: New session 21 of user core.
May 16 00:22:49.890487 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 00:22:50.093107 sshd[4238]: Connection closed by 10.0.0.1 port 45984
May 16 00:22:50.093498 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
May 16 00:22:50.098298 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:45984.service: Deactivated successfully.
May 16 00:22:50.100157 systemd[1]: session-21.scope: Deactivated successfully.
May 16 00:22:50.100834 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit.
May 16 00:22:50.102142 systemd-logind[1482]: Removed session 21.
May 16 00:22:55.105899 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:45994.service - OpenSSH per-connection server daemon (10.0.0.1:45994).
May 16 00:22:55.143137 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 45994 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:55.144863 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:55.148904 systemd-logind[1482]: New session 22 of user core.
May 16 00:22:55.159558 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 00:22:55.270191 sshd[4253]: Connection closed by 10.0.0.1 port 45994
May 16 00:22:55.270573 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
May 16 00:22:55.280256 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:45994.service: Deactivated successfully.
May 16 00:22:55.282176 systemd[1]: session-22.scope: Deactivated successfully.
May 16 00:22:55.283933 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit.
May 16 00:22:55.291515 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:46000.service - OpenSSH per-connection server daemon (10.0.0.1:46000).
May 16 00:22:55.292342 systemd-logind[1482]: Removed session 22.
May 16 00:22:55.324123 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 46000 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:55.325563 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:55.329963 systemd-logind[1482]: New session 23 of user core.
May 16 00:22:55.338389 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 00:22:55.905487 sshd[4268]: Connection closed by 10.0.0.1 port 46000
May 16 00:22:55.906093 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
May 16 00:22:55.916966 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:46000.service: Deactivated successfully.
May 16 00:22:55.919105 systemd[1]: session-23.scope: Deactivated successfully.
May 16 00:22:55.921126 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit.
May 16 00:22:55.922722 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016).
May 16 00:22:55.923822 systemd-logind[1482]: Removed session 23.
May 16 00:22:55.980778 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:55.982974 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:55.988191 systemd-logind[1482]: New session 24 of user core.
May 16 00:22:55.998546 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 00:22:57.759091 sshd[4281]: Connection closed by 10.0.0.1 port 46016
May 16 00:22:57.759596 sshd-session[4279]: pam_unix(sshd:session): session closed for user core
May 16 00:22:57.770028 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:46016.service: Deactivated successfully.
May 16 00:22:57.772417 systemd[1]: session-24.scope: Deactivated successfully.
May 16 00:22:57.775851 systemd-logind[1482]: Session 24 logged out. Waiting for processes to exit.
May 16 00:22:57.783821 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:46024.service - OpenSSH per-connection server daemon (10.0.0.1:46024).
May 16 00:22:57.786188 systemd-logind[1482]: Removed session 24.
May 16 00:22:57.821025 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 46024 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:57.823375 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:57.828833 systemd-logind[1482]: New session 25 of user core.
May 16 00:22:57.838598 systemd[1]: Started session-25.scope - Session 25 of User core.
May 16 00:22:58.519950 sshd[4313]: Connection closed by 10.0.0.1 port 46024
May 16 00:22:58.520406 sshd-session[4311]: pam_unix(sshd:session): session closed for user core
May 16 00:22:58.530109 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:46024.service: Deactivated successfully.
May 16 00:22:58.531748 systemd[1]: session-25.scope: Deactivated successfully.
May 16 00:22:58.533319 systemd-logind[1482]: Session 25 logged out. Waiting for processes to exit.
May 16 00:22:58.534568 systemd[1]: Started sshd@25-10.0.0.14:22-10.0.0.1:51390.service - OpenSSH per-connection server daemon (10.0.0.1:51390).
May 16 00:22:58.535445 systemd-logind[1482]: Removed session 25.
May 16 00:22:58.572216 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 51390 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:22:58.574159 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:22:58.579173 systemd-logind[1482]: New session 26 of user core.
May 16 00:22:58.585606 systemd[1]: Started session-26.scope - Session 26 of User core.
May 16 00:22:58.858482 sshd[4326]: Connection closed by 10.0.0.1 port 51390
May 16 00:22:58.861734 sshd-session[4324]: pam_unix(sshd:session): session closed for user core
May 16 00:22:58.868211 systemd[1]: sshd@25-10.0.0.14:22-10.0.0.1:51390.service: Deactivated successfully.
May 16 00:22:58.871423 systemd[1]: session-26.scope: Deactivated successfully.
May 16 00:22:58.872638 systemd-logind[1482]: Session 26 logged out. Waiting for processes to exit.
May 16 00:22:58.875075 systemd-logind[1482]: Removed session 26.
May 16 00:23:03.869761 systemd[1]: Started sshd@26-10.0.0.14:22-10.0.0.1:51402.service - OpenSSH per-connection server daemon (10.0.0.1:51402).
May 16 00:23:03.909846 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 51402 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:23:03.911651 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:03.916020 systemd-logind[1482]: New session 27 of user core.
May 16 00:23:03.921412 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 00:23:04.096673 sshd[4340]: Connection closed by 10.0.0.1 port 51402
May 16 00:23:04.097059 sshd-session[4338]: pam_unix(sshd:session): session closed for user core
May 16 00:23:04.100561 systemd[1]: sshd@26-10.0.0.14:22-10.0.0.1:51402.service: Deactivated successfully.
May 16 00:23:04.102609 systemd[1]: session-27.scope: Deactivated successfully.
May 16 00:23:04.103319 systemd-logind[1482]: Session 27 logged out. Waiting for processes to exit.
May 16 00:23:04.104299 systemd-logind[1482]: Removed session 27.
May 16 00:23:09.110140 systemd[1]: Started sshd@27-10.0.0.14:22-10.0.0.1:36320.service - OpenSSH per-connection server daemon (10.0.0.1:36320).
May 16 00:23:09.152674 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 36320 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:23:09.154888 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:09.160001 systemd-logind[1482]: New session 28 of user core.
May 16 00:23:09.167498 systemd[1]: Started session-28.scope - Session 28 of User core.
May 16 00:23:09.288922 sshd[4359]: Connection closed by 10.0.0.1 port 36320
May 16 00:23:09.290094 sshd-session[4356]: pam_unix(sshd:session): session closed for user core
May 16 00:23:09.295696 systemd[1]: sshd@27-10.0.0.14:22-10.0.0.1:36320.service: Deactivated successfully.
May 16 00:23:09.297901 systemd[1]: session-28.scope: Deactivated successfully.
May 16 00:23:09.298789 systemd-logind[1482]: Session 28 logged out. Waiting for processes to exit.
May 16 00:23:09.300361 systemd-logind[1482]: Removed session 28.
May 16 00:23:14.309723 systemd[1]: Started sshd@28-10.0.0.14:22-10.0.0.1:36332.service - OpenSSH per-connection server daemon (10.0.0.1:36332).
May 16 00:23:14.346981 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 36332 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:23:14.348585 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:14.352695 systemd-logind[1482]: New session 29 of user core.
May 16 00:23:14.367428 systemd[1]: Started session-29.scope - Session 29 of User core.
May 16 00:23:14.479502 sshd[4375]: Connection closed by 10.0.0.1 port 36332
May 16 00:23:14.479898 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
May 16 00:23:14.484105 systemd[1]: sshd@28-10.0.0.14:22-10.0.0.1:36332.service: Deactivated successfully.
May 16 00:23:14.486450 systemd[1]: session-29.scope: Deactivated successfully.
May 16 00:23:14.487320 systemd-logind[1482]: Session 29 logged out. Waiting for processes to exit.
May 16 00:23:14.488389 systemd-logind[1482]: Removed session 29.
May 16 00:23:19.492387 systemd[1]: Started sshd@29-10.0.0.14:22-10.0.0.1:59402.service - OpenSSH per-connection server daemon (10.0.0.1:59402).
May 16 00:23:19.529516 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 59402 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:23:19.530962 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:19.534927 systemd-logind[1482]: New session 30 of user core. May 16 00:23:19.544387 systemd[1]: Started session-30.scope - Session 30 of User core. May 16 00:23:19.649420 sshd[4389]: Connection closed by 10.0.0.1 port 59402 May 16 00:23:19.649805 sshd-session[4387]: pam_unix(sshd:session): session closed for user core May 16 00:23:19.660149 systemd[1]: sshd@29-10.0.0.14:22-10.0.0.1:59402.service: Deactivated successfully. May 16 00:23:19.662048 systemd[1]: session-30.scope: Deactivated successfully. May 16 00:23:19.663680 systemd-logind[1482]: Session 30 logged out. Waiting for processes to exit. May 16 00:23:19.667721 systemd[1]: Started sshd@30-10.0.0.14:22-10.0.0.1:59416.service - OpenSSH per-connection server daemon (10.0.0.1:59416). May 16 00:23:19.668564 systemd-logind[1482]: Removed session 30. May 16 00:23:19.700998 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 59416 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo May 16 00:23:19.702346 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 00:23:19.706089 systemd-logind[1482]: New session 31 of user core. May 16 00:23:19.716388 systemd[1]: Started session-31.scope - Session 31 of User core. 
May 16 00:23:21.062839 containerd[1502]: time="2025-05-16T00:23:21.062795500Z" level=info msg="StopContainer for \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\" with timeout 30 (s)" May 16 00:23:21.068571 containerd[1502]: time="2025-05-16T00:23:21.066218228Z" level=info msg="Stop container \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\" with signal terminated" May 16 00:23:21.078461 systemd[1]: cri-containerd-0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4.scope: Deactivated successfully. May 16 00:23:21.081896 containerd[1502]: time="2025-05-16T00:23:21.081845553Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:23:21.101031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4-rootfs.mount: Deactivated successfully. 
May 16 00:23:21.113317 containerd[1502]: time="2025-05-16T00:23:21.113232423Z" level=info msg="shim disconnected" id=0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4 namespace=k8s.io May 16 00:23:21.113317 containerd[1502]: time="2025-05-16T00:23:21.113312981Z" level=warning msg="cleaning up after shim disconnected" id=0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4 namespace=k8s.io May 16 00:23:21.113317 containerd[1502]: time="2025-05-16T00:23:21.113321638Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:23:21.116273 containerd[1502]: time="2025-05-16T00:23:21.116222099Z" level=info msg="StopContainer for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" with timeout 2 (s)" May 16 00:23:21.116555 containerd[1502]: time="2025-05-16T00:23:21.116535645Z" level=info msg="Stop container \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" with signal terminated" May 16 00:23:21.123414 systemd-networkd[1442]: lxc_health: Link DOWN May 16 00:23:21.123430 systemd-networkd[1442]: lxc_health: Lost carrier May 16 00:23:21.135427 containerd[1502]: time="2025-05-16T00:23:21.135380344Z" level=info msg="StopContainer for \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\" returns successfully" May 16 00:23:21.139369 containerd[1502]: time="2025-05-16T00:23:21.139338644Z" level=info msg="StopPodSandbox for \"7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085\"" May 16 00:23:21.152151 systemd[1]: cri-containerd-e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8.scope: Deactivated successfully. May 16 00:23:21.152702 systemd[1]: cri-containerd-e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8.scope: Consumed 7.824s CPU time. 
May 16 00:23:21.158856 containerd[1502]: time="2025-05-16T00:23:21.139372751Z" level=info msg="Container to stop \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:23:21.161578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085-shm.mount: Deactivated successfully. May 16 00:23:21.168119 systemd[1]: cri-containerd-7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085.scope: Deactivated successfully. May 16 00:23:21.176447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8-rootfs.mount: Deactivated successfully. May 16 00:23:21.185674 containerd[1502]: time="2025-05-16T00:23:21.185599439Z" level=info msg="shim disconnected" id=e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8 namespace=k8s.io May 16 00:23:21.185674 containerd[1502]: time="2025-05-16T00:23:21.185665188Z" level=warning msg="cleaning up after shim disconnected" id=e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8 namespace=k8s.io May 16 00:23:21.185674 containerd[1502]: time="2025-05-16T00:23:21.185674957Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:23:21.210053 containerd[1502]: time="2025-05-16T00:23:21.209991080Z" level=info msg="shim disconnected" id=7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085 namespace=k8s.io May 16 00:23:21.210053 containerd[1502]: time="2025-05-16T00:23:21.210044876Z" level=warning msg="cleaning up after shim disconnected" id=7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085 namespace=k8s.io May 16 00:23:21.210053 containerd[1502]: time="2025-05-16T00:23:21.210055507Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:23:21.217072 containerd[1502]: time="2025-05-16T00:23:21.217022851Z" level=info msg="StopContainer 
for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" returns successfully" May 16 00:23:21.218124 containerd[1502]: time="2025-05-16T00:23:21.217882119Z" level=info msg="StopPodSandbox for \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\"" May 16 00:23:21.218124 containerd[1502]: time="2025-05-16T00:23:21.217927258Z" level=info msg="Container to stop \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:23:21.218124 containerd[1502]: time="2025-05-16T00:23:21.217971264Z" level=info msg="Container to stop \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:23:21.218124 containerd[1502]: time="2025-05-16T00:23:21.217983108Z" level=info msg="Container to stop \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:23:21.218124 containerd[1502]: time="2025-05-16T00:23:21.217995112Z" level=info msg="Container to stop \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:23:21.218124 containerd[1502]: time="2025-05-16T00:23:21.218005553Z" level=info msg="Container to stop \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:23:21.226037 systemd[1]: cri-containerd-2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960.scope: Deactivated successfully. 
May 16 00:23:21.237388 containerd[1502]: time="2025-05-16T00:23:21.237204958Z" level=info msg="TearDown network for sandbox \"7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085\" successfully" May 16 00:23:21.237388 containerd[1502]: time="2025-05-16T00:23:21.237243404Z" level=info msg="StopPodSandbox for \"7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085\" returns successfully" May 16 00:23:21.253449 containerd[1502]: time="2025-05-16T00:23:21.253372645Z" level=info msg="shim disconnected" id=2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960 namespace=k8s.io May 16 00:23:21.253449 containerd[1502]: time="2025-05-16T00:23:21.253439918Z" level=warning msg="cleaning up after shim disconnected" id=2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960 namespace=k8s.io May 16 00:23:21.253449 containerd[1502]: time="2025-05-16T00:23:21.253448976Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 00:23:21.269236 containerd[1502]: time="2025-05-16T00:23:21.269183111Z" level=info msg="TearDown network for sandbox \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" successfully" May 16 00:23:21.269236 containerd[1502]: time="2025-05-16T00:23:21.269221576Z" level=info msg="StopPodSandbox for \"2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960\" returns successfully" May 16 00:23:21.394467 kubelet[2678]: I0516 00:23:21.394322 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d7e2df9-2deb-4729-8b96-e49cf44d4178-cilium-config-path\") pod \"9d7e2df9-2deb-4729-8b96-e49cf44d4178\" (UID: \"9d7e2df9-2deb-4729-8b96-e49cf44d4178\") " May 16 00:23:21.394467 kubelet[2678]: I0516 00:23:21.394370 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvz65\" (UniqueName: 
\"kubernetes.io/projected/9d7e2df9-2deb-4729-8b96-e49cf44d4178-kube-api-access-tvz65\") pod \"9d7e2df9-2deb-4729-8b96-e49cf44d4178\" (UID: \"9d7e2df9-2deb-4729-8b96-e49cf44d4178\") " May 16 00:23:21.394467 kubelet[2678]: I0516 00:23:21.394390 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-lib-modules\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.394467 kubelet[2678]: I0516 00:23:21.394412 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-bpf-maps\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.394467 kubelet[2678]: I0516 00:23:21.394428 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hostproc\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.394467 kubelet[2678]: I0516 00:23:21.394445 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-xtables-lock\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395091 kubelet[2678]: I0516 00:23:21.394447 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.395091 kubelet[2678]: I0516 00:23:21.394484 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.395091 kubelet[2678]: I0516 00:23:21.394462 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-etc-cni-netd\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395091 kubelet[2678]: I0516 00:23:21.394506 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.395091 kubelet[2678]: I0516 00:23:21.394515 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hubble-tls\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395256 kubelet[2678]: I0516 00:23:21.394522 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.395256 kubelet[2678]: I0516 00:23:21.394532 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-run\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395256 kubelet[2678]: I0516 00:23:21.394539 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.395256 kubelet[2678]: I0516 00:23:21.394545 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cni-path\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395256 kubelet[2678]: I0516 00:23:21.394562 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-config-path\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395256 kubelet[2678]: I0516 00:23:21.394576 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-cgroup\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394601 2678 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-net\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394615 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-clustermesh-secrets\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394628 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-kernel\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394642 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzjnq\" (UniqueName: \"kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-kube-api-access-gzjnq\") pod \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\" (UID: \"4297356c-9b1c-4b33-a55f-ab4a3bdb244e\") " May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394672 2678 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394683 2678 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.395501 kubelet[2678]: I0516 00:23:21.394692 2678 reconciler_common.go:293] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.395717 kubelet[2678]: I0516 00:23:21.394699 2678 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.395717 kubelet[2678]: I0516 00:23:21.394707 2678 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.399492 kubelet[2678]: I0516 00:23:21.399380 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7e2df9-2deb-4729-8b96-e49cf44d4178-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d7e2df9-2deb-4729-8b96-e49cf44d4178" (UID: "9d7e2df9-2deb-4729-8b96-e49cf44d4178"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:23:21.399492 kubelet[2678]: I0516 00:23:21.399437 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.399492 kubelet[2678]: I0516 00:23:21.399466 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.399492 kubelet[2678]: I0516 00:23:21.399490 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.399822 kubelet[2678]: I0516 00:23:21.399802 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:23:21.399895 kubelet[2678]: I0516 00:23:21.399801 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7e2df9-2deb-4729-8b96-e49cf44d4178-kube-api-access-tvz65" (OuterVolumeSpecName: "kube-api-access-tvz65") pod "9d7e2df9-2deb-4729-8b96-e49cf44d4178" (UID: "9d7e2df9-2deb-4729-8b96-e49cf44d4178"). InnerVolumeSpecName "kube-api-access-tvz65". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:23:21.400000 kubelet[2678]: I0516 00:23:21.399960 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.400000 kubelet[2678]: I0516 00:23:21.399980 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:23:21.400242 kubelet[2678]: I0516 00:23:21.400217 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-kube-api-access-gzjnq" (OuterVolumeSpecName: "kube-api-access-gzjnq") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "kube-api-access-gzjnq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:23:21.402038 kubelet[2678]: I0516 00:23:21.402009 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:23:21.402935 kubelet[2678]: I0516 00:23:21.402912 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4297356c-9b1c-4b33-a55f-ab4a3bdb244e" (UID: "4297356c-9b1c-4b33-a55f-ab4a3bdb244e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:23:21.495551 kubelet[2678]: I0516 00:23:21.495494 2678 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495551 kubelet[2678]: I0516 00:23:21.495534 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495551 kubelet[2678]: I0516 00:23:21.495544 2678 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495551 kubelet[2678]: I0516 00:23:21.495558 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495551 kubelet[2678]: I0516 00:23:21.495568 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495820 kubelet[2678]: I0516 00:23:21.495576 2678 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495820 kubelet[2678]: I0516 00:23:21.495584 2678 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495820 kubelet[2678]: I0516 
00:23:21.495592 2678 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495820 kubelet[2678]: I0516 00:23:21.495599 2678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzjnq\" (UniqueName: \"kubernetes.io/projected/4297356c-9b1c-4b33-a55f-ab4a3bdb244e-kube-api-access-gzjnq\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495820 kubelet[2678]: I0516 00:23:21.495608 2678 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d7e2df9-2deb-4729-8b96-e49cf44d4178-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:23:21.495820 kubelet[2678]: I0516 00:23:21.495616 2678 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvz65\" (UniqueName: \"kubernetes.io/projected/9d7e2df9-2deb-4729-8b96-e49cf44d4178-kube-api-access-tvz65\") on node \"localhost\" DevicePath \"\"" May 16 00:23:22.059210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960-rootfs.mount: Deactivated successfully. May 16 00:23:22.059378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fea94de2de85b9879f9fe417b0a04318aeb3d60f99c9831f1e26d84a233c085-rootfs.mount: Deactivated successfully. May 16 00:23:22.059479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d79f66b16488605651f24506ebcbac6f3c24785c9dd803cb5dadaac5cdae960-shm.mount: Deactivated successfully. May 16 00:23:22.059597 systemd[1]: var-lib-kubelet-pods-4297356c\x2d9b1c\x2d4b33\x2da55f\x2dab4a3bdb244e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgzjnq.mount: Deactivated successfully. 
May 16 00:23:22.059700 systemd[1]: var-lib-kubelet-pods-9d7e2df9\x2d2deb\x2d4729\x2d8b96\x2de49cf44d4178-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvz65.mount: Deactivated successfully. May 16 00:23:22.059820 systemd[1]: var-lib-kubelet-pods-4297356c\x2d9b1c\x2d4b33\x2da55f\x2dab4a3bdb244e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:23:22.059913 systemd[1]: var-lib-kubelet-pods-4297356c\x2d9b1c\x2d4b33\x2da55f\x2dab4a3bdb244e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:23:22.213824 kubelet[2678]: I0516 00:23:22.213795 2678 scope.go:117] "RemoveContainer" containerID="0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4" May 16 00:23:22.220706 containerd[1502]: time="2025-05-16T00:23:22.220179744Z" level=info msg="RemoveContainer for \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\"" May 16 00:23:22.224661 systemd[1]: Removed slice kubepods-besteffort-pod9d7e2df9_2deb_4729_8b96_e49cf44d4178.slice - libcontainer container kubepods-besteffort-pod9d7e2df9_2deb_4729_8b96_e49cf44d4178.slice. May 16 00:23:22.228081 containerd[1502]: time="2025-05-16T00:23:22.228032180Z" level=info msg="RemoveContainer for \"0de4535050c5f22db69edf9e7ad2c03eae25527edc6ee85c18342f77c64a86c4\" returns successfully" May 16 00:23:22.228427 kubelet[2678]: I0516 00:23:22.228393 2678 scope.go:117] "RemoveContainer" containerID="e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8" May 16 00:23:22.229887 containerd[1502]: time="2025-05-16T00:23:22.229789854Z" level=info msg="RemoveContainer for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\"" May 16 00:23:22.234119 systemd[1]: Removed slice kubepods-burstable-pod4297356c_9b1c_4b33_a55f_ab4a3bdb244e.slice - libcontainer container kubepods-burstable-pod4297356c_9b1c_4b33_a55f_ab4a3bdb244e.slice. 
May 16 00:23:22.234415 systemd[1]: kubepods-burstable-pod4297356c_9b1c_4b33_a55f_ab4a3bdb244e.slice: Consumed 7.933s CPU time.
May 16 00:23:22.237755 containerd[1502]: time="2025-05-16T00:23:22.237694061Z" level=info msg="RemoveContainer for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" returns successfully"
May 16 00:23:22.238097 kubelet[2678]: I0516 00:23:22.237972 2678 scope.go:117] "RemoveContainer" containerID="322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7"
May 16 00:23:22.239528 containerd[1502]: time="2025-05-16T00:23:22.239490963Z" level=info msg="RemoveContainer for \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\""
May 16 00:23:22.244739 containerd[1502]: time="2025-05-16T00:23:22.244706094Z" level=info msg="RemoveContainer for \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\" returns successfully"
May 16 00:23:22.248533 kubelet[2678]: I0516 00:23:22.248412 2678 scope.go:117] "RemoveContainer" containerID="ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90"
May 16 00:23:22.250471 containerd[1502]: time="2025-05-16T00:23:22.250423333Z" level=info msg="RemoveContainer for \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\""
May 16 00:23:22.254735 containerd[1502]: time="2025-05-16T00:23:22.254679920Z" level=info msg="RemoveContainer for \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\" returns successfully"
May 16 00:23:22.255000 kubelet[2678]: I0516 00:23:22.254966 2678 scope.go:117] "RemoveContainer" containerID="2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24"
May 16 00:23:22.261666 containerd[1502]: time="2025-05-16T00:23:22.261627145Z" level=info msg="RemoveContainer for \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\""
May 16 00:23:22.268008 containerd[1502]: time="2025-05-16T00:23:22.267287251Z" level=info msg="RemoveContainer for \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\" returns successfully"
May 16 00:23:22.268135 kubelet[2678]: I0516 00:23:22.268057 2678 scope.go:117] "RemoveContainer" containerID="ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f"
May 16 00:23:22.269027 containerd[1502]: time="2025-05-16T00:23:22.269006100Z" level=info msg="RemoveContainer for \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\""
May 16 00:23:22.272444 containerd[1502]: time="2025-05-16T00:23:22.272425481Z" level=info msg="RemoveContainer for \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\" returns successfully"
May 16 00:23:22.272560 kubelet[2678]: I0516 00:23:22.272543 2678 scope.go:117] "RemoveContainer" containerID="e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8"
May 16 00:23:22.272952 containerd[1502]: time="2025-05-16T00:23:22.272885306Z" level=error msg="ContainerStatus for \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\": not found"
May 16 00:23:22.282579 kubelet[2678]: E0516 00:23:22.282537 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\": not found" containerID="e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8"
May 16 00:23:22.282672 kubelet[2678]: I0516 00:23:22.282588 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8"} err="failed to get container status \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6359028d3a33193e34891621aec13ce9fcb219419b306b78eda72a24026c8f8\": not found"
May 16 00:23:22.282713 kubelet[2678]: I0516 00:23:22.282674 2678 scope.go:117] "RemoveContainer" containerID="322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7"
May 16 00:23:22.282967 containerd[1502]: time="2025-05-16T00:23:22.282920882Z" level=error msg="ContainerStatus for \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\": not found"
May 16 00:23:22.283144 kubelet[2678]: E0516 00:23:22.283044 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\": not found" containerID="322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7"
May 16 00:23:22.283144 kubelet[2678]: I0516 00:23:22.283070 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7"} err="failed to get container status \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"322fed089ae2ab94e8ac02449d06d639d71d247885c6915d62a90d33d01f10a7\": not found"
May 16 00:23:22.283144 kubelet[2678]: I0516 00:23:22.283091 2678 scope.go:117] "RemoveContainer" containerID="ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90"
May 16 00:23:22.283426 containerd[1502]: time="2025-05-16T00:23:22.283377980Z" level=error msg="ContainerStatus for \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\": not found"
May 16 00:23:22.283619 kubelet[2678]: E0516 00:23:22.283584 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\": not found" containerID="ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90"
May 16 00:23:22.283681 kubelet[2678]: I0516 00:23:22.283629 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90"} err="failed to get container status \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce1b064d898c9c2c569e74f5a37f9d683f9cc5fbb0c6578b75ef79594fa8db90\": not found"
May 16 00:23:22.283681 kubelet[2678]: I0516 00:23:22.283660 2678 scope.go:117] "RemoveContainer" containerID="2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24"
May 16 00:23:22.283868 containerd[1502]: time="2025-05-16T00:23:22.283835630Z" level=error msg="ContainerStatus for \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\": not found"
May 16 00:23:22.284028 kubelet[2678]: E0516 00:23:22.283990 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\": not found" containerID="2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24"
May 16 00:23:22.284028 kubelet[2678]: I0516 00:23:22.284026 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24"} err="failed to get container status \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c3d725d0d0647fb32c67d5c0af86b90e758b960a32e46ef8352b3d5649fef24\": not found"
May 16 00:23:22.284134 kubelet[2678]: I0516 00:23:22.284040 2678 scope.go:117] "RemoveContainer" containerID="ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f"
May 16 00:23:22.284242 containerd[1502]: time="2025-05-16T00:23:22.284189505Z" level=error msg="ContainerStatus for \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\": not found"
May 16 00:23:22.284397 kubelet[2678]: E0516 00:23:22.284362 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\": not found" containerID="ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f"
May 16 00:23:22.284437 kubelet[2678]: I0516 00:23:22.284397 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f"} err="failed to get container status \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff347c5b8bd02a8e472c8165471fd64563e14ed90183c85bb0bc2a6e06c1117f\": not found"
May 16 00:23:22.417021 kubelet[2678]: I0516 00:23:22.416977 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" path="/var/lib/kubelet/pods/4297356c-9b1c-4b33-a55f-ab4a3bdb244e/volumes"
May 16 00:23:22.417953 kubelet[2678]: I0516 00:23:22.417923 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d7e2df9-2deb-4729-8b96-e49cf44d4178" path="/var/lib/kubelet/pods/9d7e2df9-2deb-4729-8b96-e49cf44d4178/volumes"
May 16 00:23:23.017624 sshd[4403]: Connection closed by 10.0.0.1 port 59416
May 16 00:23:23.017986 sshd-session[4401]: pam_unix(sshd:session): session closed for user core
May 16 00:23:23.029181 systemd[1]: sshd@30-10.0.0.14:22-10.0.0.1:59416.service: Deactivated successfully.
May 16 00:23:23.031049 systemd[1]: session-31.scope: Deactivated successfully.
May 16 00:23:23.032455 systemd-logind[1482]: Session 31 logged out. Waiting for processes to exit.
May 16 00:23:23.041575 systemd[1]: Started sshd@31-10.0.0.14:22-10.0.0.1:59422.service - OpenSSH per-connection server daemon (10.0.0.1:59422).
May 16 00:23:23.042523 systemd-logind[1482]: Removed session 31.
May 16 00:23:23.077097 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 59422 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:23:23.078471 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:23.082427 systemd-logind[1482]: New session 32 of user core.
May 16 00:23:23.093431 systemd[1]: Started session-32.scope - Session 32 of User core.
May 16 00:23:23.585352 sshd[4566]: Connection closed by 10.0.0.1 port 59422
May 16 00:23:23.587182 sshd-session[4564]: pam_unix(sshd:session): session closed for user core
May 16 00:23:23.596607 systemd[1]: sshd@31-10.0.0.14:22-10.0.0.1:59422.service: Deactivated successfully.
May 16 00:23:23.599837 systemd[1]: session-32.scope: Deactivated successfully.
May 16 00:23:23.604043 systemd-logind[1482]: Session 32 logged out. Waiting for processes to exit.
May 16 00:23:23.619091 kubelet[2678]: E0516 00:23:23.618956 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" containerName="apply-sysctl-overwrites"
May 16 00:23:23.619091 kubelet[2678]: E0516 00:23:23.619003 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" containerName="mount-bpf-fs"
May 16 00:23:23.619091 kubelet[2678]: E0516 00:23:23.619010 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" containerName="clean-cilium-state"
May 16 00:23:23.619091 kubelet[2678]: E0516 00:23:23.619017 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" containerName="mount-cgroup"
May 16 00:23:23.619091 kubelet[2678]: E0516 00:23:23.619023 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d7e2df9-2deb-4729-8b96-e49cf44d4178" containerName="cilium-operator"
May 16 00:23:23.619091 kubelet[2678]: E0516 00:23:23.619032 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" containerName="cilium-agent"
May 16 00:23:23.619091 kubelet[2678]: I0516 00:23:23.619070 2678 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d7e2df9-2deb-4729-8b96-e49cf44d4178" containerName="cilium-operator"
May 16 00:23:23.619091 kubelet[2678]: I0516 00:23:23.619077 2678 memory_manager.go:354] "RemoveStaleState removing state" podUID="4297356c-9b1c-4b33-a55f-ab4a3bdb244e" containerName="cilium-agent"
May 16 00:23:23.620080 systemd[1]: Started sshd@32-10.0.0.14:22-10.0.0.1:59432.service - OpenSSH per-connection server daemon (10.0.0.1:59432).
May 16 00:23:23.625879 systemd-logind[1482]: Removed session 32.
May 16 00:23:23.631105 systemd[1]: Created slice kubepods-burstable-poddd87265d_a706_46d1_be43_9a43885c4105.slice - libcontainer container kubepods-burstable-poddd87265d_a706_46d1_be43_9a43885c4105.slice.
May 16 00:23:23.662770 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 59432 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:23:23.664374 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:23.668300 systemd-logind[1482]: New session 33 of user core.
May 16 00:23:23.677451 systemd[1]: Started session-33.scope - Session 33 of User core.
May 16 00:23:23.709580 kubelet[2678]: I0516 00:23:23.709525 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-cilium-run\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709580 kubelet[2678]: I0516 00:23:23.709561 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-host-proc-sys-kernel\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709580 kubelet[2678]: I0516 00:23:23.709583 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-host-proc-sys-net\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709847 kubelet[2678]: I0516 00:23:23.709677 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-bpf-maps\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709847 kubelet[2678]: I0516 00:23:23.709751 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd87265d-a706-46d1-be43-9a43885c4105-clustermesh-secrets\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709847 kubelet[2678]: I0516 00:23:23.709818 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-lib-modules\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709847 kubelet[2678]: I0516 00:23:23.709839 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd87265d-a706-46d1-be43-9a43885c4105-cilium-ipsec-secrets\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709973 kubelet[2678]: I0516 00:23:23.709861 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-etc-cni-netd\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709973 kubelet[2678]: I0516 00:23:23.709878 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd87265d-a706-46d1-be43-9a43885c4105-hubble-tls\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709973 kubelet[2678]: I0516 00:23:23.709894 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-hostproc\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709973 kubelet[2678]: I0516 00:23:23.709909 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-cni-path\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709973 kubelet[2678]: I0516 00:23:23.709927 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd87265d-a706-46d1-be43-9a43885c4105-cilium-config-path\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.709973 kubelet[2678]: I0516 00:23:23.709942 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896f2\" (UniqueName: \"kubernetes.io/projected/dd87265d-a706-46d1-be43-9a43885c4105-kube-api-access-896f2\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.710178 kubelet[2678]: I0516 00:23:23.709974 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-cilium-cgroup\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.710178 kubelet[2678]: I0516 00:23:23.710005 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd87265d-a706-46d1-be43-9a43885c4105-xtables-lock\") pod \"cilium-5r6d7\" (UID: \"dd87265d-a706-46d1-be43-9a43885c4105\") " pod="kube-system/cilium-5r6d7"
May 16 00:23:23.728963 sshd[4579]: Connection closed by 10.0.0.1 port 59432
May 16 00:23:23.729404 sshd-session[4577]: pam_unix(sshd:session): session closed for user core
May 16 00:23:23.741873 systemd[1]: sshd@32-10.0.0.14:22-10.0.0.1:59432.service: Deactivated successfully.
May 16 00:23:23.744142 systemd[1]: session-33.scope: Deactivated successfully.
May 16 00:23:23.745986 systemd-logind[1482]: Session 33 logged out. Waiting for processes to exit.
May 16 00:23:23.751587 systemd[1]: Started sshd@33-10.0.0.14:22-10.0.0.1:59444.service - OpenSSH per-connection server daemon (10.0.0.1:59444).
May 16 00:23:23.752532 systemd-logind[1482]: Removed session 33.
May 16 00:23:23.784711 sshd[4585]: Accepted publickey for core from 10.0.0.1 port 59444 ssh2: RSA SHA256:+kdkuHVQO2815FAkL6VJi4ci9TYuwnXSDHYhvdGN2Uo
May 16 00:23:23.786307 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 00:23:23.790923 systemd-logind[1482]: New session 34 of user core.
May 16 00:23:23.800418 systemd[1]: Started session-34.scope - Session 34 of User core.
May 16 00:23:23.938399 kubelet[2678]: E0516 00:23:23.938353 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:23.939002 containerd[1502]: time="2025-05-16T00:23:23.938910363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5r6d7,Uid:dd87265d-a706-46d1-be43-9a43885c4105,Namespace:kube-system,Attempt:0,}"
May 16 00:23:24.273841 containerd[1502]: time="2025-05-16T00:23:24.273662343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:23:24.273841 containerd[1502]: time="2025-05-16T00:23:24.273715758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:23:24.273841 containerd[1502]: time="2025-05-16T00:23:24.273727100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:23:24.273841 containerd[1502]: time="2025-05-16T00:23:24.273792819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:23:24.293423 systemd[1]: Started cri-containerd-d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8.scope - libcontainer container d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8.
May 16 00:23:24.315051 containerd[1502]: time="2025-05-16T00:23:24.314749470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5r6d7,Uid:dd87265d-a706-46d1-be43-9a43885c4105,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\""
May 16 00:23:24.315782 kubelet[2678]: E0516 00:23:24.315750 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:24.318795 containerd[1502]: time="2025-05-16T00:23:24.318755878Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 00:23:24.333537 containerd[1502]: time="2025-05-16T00:23:24.333477942Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6\""
May 16 00:23:24.334176 containerd[1502]: time="2025-05-16T00:23:24.334007603Z" level=info msg="StartContainer for \"cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6\""
May 16 00:23:24.364406 systemd[1]: Started cri-containerd-cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6.scope - libcontainer container cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6.
May 16 00:23:24.390289 containerd[1502]: time="2025-05-16T00:23:24.390208375Z" level=info msg="StartContainer for \"cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6\" returns successfully"
May 16 00:23:24.399028 systemd[1]: cri-containerd-cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6.scope: Deactivated successfully.
May 16 00:23:24.435430 containerd[1502]: time="2025-05-16T00:23:24.435345686Z" level=info msg="shim disconnected" id=cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6 namespace=k8s.io
May 16 00:23:24.435430 containerd[1502]: time="2025-05-16T00:23:24.435407467Z" level=warning msg="cleaning up after shim disconnected" id=cb25bdad0a247304ed225c5237119e8f8ac7fadd223891c2ffe9e3e6583997b6 namespace=k8s.io
May 16 00:23:24.435430 containerd[1502]: time="2025-05-16T00:23:24.435415573Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:23:25.229796 kubelet[2678]: E0516 00:23:25.229763 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:25.231343 containerd[1502]: time="2025-05-16T00:23:25.231169443Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 00:23:25.248695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3537506271.mount: Deactivated successfully.
May 16 00:23:25.253063 containerd[1502]: time="2025-05-16T00:23:25.253020426Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20\""
May 16 00:23:25.253569 containerd[1502]: time="2025-05-16T00:23:25.253534748Z" level=info msg="StartContainer for \"8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20\""
May 16 00:23:25.279383 systemd[1]: Started cri-containerd-8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20.scope - libcontainer container 8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20.
May 16 00:23:25.303678 containerd[1502]: time="2025-05-16T00:23:25.303639043Z" level=info msg="StartContainer for \"8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20\" returns successfully"
May 16 00:23:25.310452 systemd[1]: cri-containerd-8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20.scope: Deactivated successfully.
May 16 00:23:25.331889 containerd[1502]: time="2025-05-16T00:23:25.331815723Z" level=info msg="shim disconnected" id=8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20 namespace=k8s.io
May 16 00:23:25.331889 containerd[1502]: time="2025-05-16T00:23:25.331880140Z" level=warning msg="cleaning up after shim disconnected" id=8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20 namespace=k8s.io
May 16 00:23:25.331889 containerd[1502]: time="2025-05-16T00:23:25.331890561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:23:25.509484 kubelet[2678]: E0516 00:23:25.509352 2678 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:23:25.816209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ffe2712ba923b17996dfebcdd090292a36fb47b8eae3b1569eea5ea4b2a4d20-rootfs.mount: Deactivated successfully.
May 16 00:23:26.232670 kubelet[2678]: E0516 00:23:26.232629 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:26.233989 containerd[1502]: time="2025-05-16T00:23:26.233957287Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 00:23:26.251485 containerd[1502]: time="2025-05-16T00:23:26.251441086Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457\""
May 16 00:23:26.252558 containerd[1502]: time="2025-05-16T00:23:26.251992300Z" level=info msg="StartContainer for \"7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457\""
May 16 00:23:26.291388 systemd[1]: Started cri-containerd-7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457.scope - libcontainer container 7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457.
May 16 00:23:26.320085 containerd[1502]: time="2025-05-16T00:23:26.320036419Z" level=info msg="StartContainer for \"7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457\" returns successfully"
May 16 00:23:26.322697 systemd[1]: cri-containerd-7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457.scope: Deactivated successfully.
May 16 00:23:26.348726 containerd[1502]: time="2025-05-16T00:23:26.348661807Z" level=info msg="shim disconnected" id=7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457 namespace=k8s.io
May 16 00:23:26.348726 containerd[1502]: time="2025-05-16T00:23:26.348715713Z" level=warning msg="cleaning up after shim disconnected" id=7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457 namespace=k8s.io
May 16 00:23:26.348726 containerd[1502]: time="2025-05-16T00:23:26.348724310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:23:26.816706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c70ffe68735c137a9b1c930681303d40a5b3b29c80f3fbddeb5b5e3b4e41457-rootfs.mount: Deactivated successfully.
May 16 00:23:27.237619 kubelet[2678]: E0516 00:23:27.237559 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:27.239746 containerd[1502]: time="2025-05-16T00:23:27.239705248Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:23:27.255819 containerd[1502]: time="2025-05-16T00:23:27.255764258Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70\""
May 16 00:23:27.256429 containerd[1502]: time="2025-05-16T00:23:27.256397714Z" level=info msg="StartContainer for \"6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70\""
May 16 00:23:27.288454 systemd[1]: Started cri-containerd-6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70.scope - libcontainer container 6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70.
May 16 00:23:27.316090 systemd[1]: cri-containerd-6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70.scope: Deactivated successfully.
May 16 00:23:27.318780 containerd[1502]: time="2025-05-16T00:23:27.318595527Z" level=info msg="StartContainer for \"6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70\" returns successfully"
May 16 00:23:27.342625 containerd[1502]: time="2025-05-16T00:23:27.342550245Z" level=info msg="shim disconnected" id=6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70 namespace=k8s.io
May 16 00:23:27.342625 containerd[1502]: time="2025-05-16T00:23:27.342614652Z" level=warning msg="cleaning up after shim disconnected" id=6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70 namespace=k8s.io
May 16 00:23:27.342839 containerd[1502]: time="2025-05-16T00:23:27.342635203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 00:23:27.816375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f63f48498b9a79a10dbe40b41b35e1e44b4fde41495d7a7b85d7d37e96e9c70-rootfs.mount: Deactivated successfully.
May 16 00:23:28.242357 kubelet[2678]: E0516 00:23:28.242324 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:28.244034 containerd[1502]: time="2025-05-16T00:23:28.243992566Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:23:28.264851 containerd[1502]: time="2025-05-16T00:23:28.264791321Z" level=info msg="CreateContainer within sandbox \"d7e70927bfcc8897ba4b0813bbdd7ea3128ae86fcb5dd96a634a00bba8da07f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342\""
May 16 00:23:28.265380 containerd[1502]: time="2025-05-16T00:23:28.265339630Z" level=info msg="StartContainer for \"e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342\""
May 16 00:23:28.301529 systemd[1]: Started cri-containerd-e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342.scope - libcontainer container e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342.
May 16 00:23:28.542669 containerd[1502]: time="2025-05-16T00:23:28.542536727Z" level=info msg="StartContainer for \"e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342\" returns successfully"
May 16 00:23:28.949298 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 16 00:23:29.246991 kubelet[2678]: E0516 00:23:29.246858 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:29.266874 kubelet[2678]: I0516 00:23:29.266791 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5r6d7" podStartSLOduration=6.266764078 podStartE2EDuration="6.266764078s" podCreationTimestamp="2025-05-16 00:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:23:29.266458246 +0000 UTC m=+128.944496290" watchObservedRunningTime="2025-05-16 00:23:29.266764078 +0000 UTC m=+128.944802132"
May 16 00:23:30.249126 kubelet[2678]: E0516 00:23:30.249075 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:30.534500 systemd[1]: run-containerd-runc-k8s.io-e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342-runc.eC8FFv.mount: Deactivated successfully.
May 16 00:23:32.179513 systemd-networkd[1442]: lxc_health: Link UP
May 16 00:23:32.185133 systemd-networkd[1442]: lxc_health: Gained carrier
May 16 00:23:33.219241 systemd-networkd[1442]: lxc_health: Gained IPv6LL
May 16 00:23:33.940607 kubelet[2678]: E0516 00:23:33.940443 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:34.256711 kubelet[2678]: E0516 00:23:34.256566 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:35.258716 kubelet[2678]: E0516 00:23:35.258673 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:23:36.856124 systemd[1]: run-containerd-runc-k8s.io-e456af39cc2f878f7d0e07ada0d9572cf3beb242ec6e6a984416dedc98a98342-runc.TkiG5C.mount: Deactivated successfully.
May 16 00:23:38.997395 sshd[4587]: Connection closed by 10.0.0.1 port 59444
May 16 00:23:38.997841 sshd-session[4585]: pam_unix(sshd:session): session closed for user core
May 16 00:23:39.001298 systemd[1]: sshd@33-10.0.0.14:22-10.0.0.1:59444.service: Deactivated successfully.
May 16 00:23:39.003148 systemd[1]: session-34.scope: Deactivated successfully.
May 16 00:23:39.003826 systemd-logind[1482]: Session 34 logged out. Waiting for processes to exit.
May 16 00:23:39.004742 systemd-logind[1482]: Removed session 34.
May 16 00:23:39.414967 kubelet[2678]: E0516 00:23:39.414936 2678 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"